Advances on Mechanics, Design Engineering and Manufacturing IV: Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing, JCM 2022, June 1–3, 2022, Ischia, Italy
ISBN: 3031159276, 9783031159275

This book gathers contributions presented at the International Joint Conference on Mechanics, Design Engineering and Advanced Manufacturing (JCM 2022), held in Ischia, Italy, on June 1–3, 2022.


English · 1645 [1646] pages · 2022


Table of contents:
Preface and Acknowledgements
Organization
Conference Chairs
Conference Advisory Chairmen
Scientific Committee Coordinator
Organizing Committee
Scientific Committee
Track General Chair
Track Chairs
Design Methods and Tools
Geometric Modelling and Analysis
Advanced Manufacturing and Control
X-Reality for Interactive Design
Education and Representation Techniques
ADM
Ingegraf
S.Mart
Contents
Design Methods and Tools
A Computer-Aided Tool for the Energy Optimization of Industrial Robots in Manufacturing Applications
1 Introduction
2 Dynamic Scaling for Trajectory Planning
3 Graphic User Interface
4 Conclusions
References
Experimentation and Characterization of Mycelium-Based Biocomposites for Ephemeral Product Applications
1 Introduction
2 Materials and Methods
2.1 Regulations, Tests and Specimens
2.2 Culture and Growth of the Specimens
2.3 Description of the Tests
3 Results
4 Discussion
5 Conclusions
References
Life Cycle Assessment of a Mobile Tiny House Made with Sustainable Materials and Design Implications
1 Introduction
2 Context of the Work
3 Methodology, Goal and Scope of the Study
4 Data Collection and Life Cycle Inventory
4.1 Hemp Cultivation-Stage A1a
4.2 Hemp Shives Production-Stage A1b
4.3 Hemp Blocks Production-Stage A3
4.4 Construction of the Tiny Prototype-Stage A5
5 Results and Discussion
5.1 Life Cycle Impact Assessment
5.2 CML-IA Baseline Results for the Mobile Tiny House Prototype
6 Conclusions
References
Industrial Scraps Valorization: Designing Products to Make Detached Value Chains Circular
1 Introduction
2 Industrial Symbiosis Design Approach
3 Case Study: Include Scraps in Components Manufacturing
3.1 Scraps and Components Characterization
3.2 Re-design and Innovative Applications
4 Discussion
5 Conclusion
References
Circular Strategies to Improve Ephemeral Products Sustainability Through Co-creation and Its Metrics
1 Introduction
2 Materials and Methods
2.1 Objectives Definition
2.2 Ephemeral Products and Circular Economy
2.3 Metrics and Indexes in Circular Economy
3 Co-creation Workshop
3.1 Co-creation Workshop. Preparation and Description
3.2 Case Study: Surgical Face Masks
4 Results and Discussion
4.1 Results of the Co-creation Process
4.2 Metric Test
5 Conclusions
References
A Step Forward Life Cycle Assessment to Optimize Products and Increase Company Eco-design Competencies
1 Introduction
2 Approach
3 Case Study
4 Discussion
5 Conclusion
References
Workstation Design Centered to Workers’ Safe Use
1 Introduction
1.1 Context and Aim of the Paper
1.2 Our Approach and Hypothesis
2 Design Methodology
2.1 System Engineering Approach
2.2 Workstation Layout Design for Workers’ Safety Use
3 Ergonomic Analysis
4 Conclusion, Future Works
References
Design and Manufacture of a Sustainable Recycled Plastic Shredder
1 Introduction
1.1 Object
1.2 State of Art and Justification
2 Methodology
3 Background
3.1 Market Study
3.2 Study of the Basic Model
3.3 Innovation and New Design Requirements
4 Solutions and Prototyping Process
5 Prototype Analysis
5.1 Prototype Experimental Analysis
6 Conclusions
7 Future Prototype Improvements
References
Critical Analysis on the Limitations of the Diffusion of Life Cycle Assessment
1 Introduction
2 Literature Background
3 Methodology
3.1 Used Approach
3.2 Considered Case Studies
4 Results and Discussion
4.1 Identified Problems
4.2 Proposed Strategies of Intervention
5 Conclusions
References
Implementation of a Sustainable Washing System Applied to Waste Recycling
1 Introduction
1.1 The Microplastic Problem
2 The Washing System
2.1 Experimentation
3 Premises for a Sustainable Washing System
3.1 Conceptualization and Ideas
3.2 Construction
4 Future Work
4.1 Divulgation of the Information in Open-Source Communities
4.2 Design and Testing of a 3D Printed Microplastics Filter
5 Conclusions
References
Basic Design and Virtual Prototyping of a Hydrofoil Hybrid Daysailer
1 Introduction
2 State of Art
3 Introduction to the Foil-Sailing Theory
3.1 The Lift Generated by a General Wing
3.2 The Behavior of a Wing in Water
3.3 Introduction to the Stability of a Foiling Sailboat
4 Basic Design and Virtual Prototyping of “ROMa 33” Project
4.1 Hull Concept
4.2 Hull Design
4.3 Hydrofoil Design
4.4 Positioning of the Bulkhead Containing the Hydrofoil
5 Roma 33 Performance Simulations
5.1 Main Hydrofoil CFD Analysis
5.2 Overall Appendices CFD Analysis
5.3 Hydrogen Propulsion and Hydrides
6 Conclusion
References
Virtual Design for Repair: The Case of a Gearmotor
1 Introduction
2 The Method
3 Results and Discussion
3.1 Disassembly Analysis
3.2 Failure Causes
3.3 The Gearmotor Re-design
4 Conclusions
References
Can MaWR-Method for Symmetry Plane Detection be Generalized for Complex Panfacial Fractures?
1 Introduction
2 Methods and Tools
2.1 MaWR-Method
2.2 Asymmetry Value Index
2.3 Starting Data
3 Results
4 Conclusions
References
Mixed Reality-Based Support for Total Hip Arthroplasty Assessment
1 Introduction
2 Methods
3 Results and Discussion
4 Conclusion
References
A Semiautomatic Procedure to Assist Physicians in Paediatric Airway Stenting
1 Introduction
2 Method
2.1 Stent Characterization Procedure
3 Case Studies
4 Conclusion
References
An Engineering Method to Evaluate Care Processes and Introduce Televisits
1 Introduction
2 Scientific Background
3 Method and Tools
4 Application to a Case Study
4.1 Collection of Information
4.2 Representation of Process with IDEF0
4.3 Check of Preliminary Mapping
4.4 Discussion on the Introduction of Televisits
5 Discussions
6 Conclusion
References
Modeling of Cutting Operations for Robotic Surgery Using CAE Tools
1 Introduction
2 Materials and Methods
3 Overview on Milling and Deburring
4 Mechanical Design of the Compliant Spindle
5 Virtual Prototype via CAE Tools
6 Results
7 Conclusions
References
Skeleton and Medial Axis Functions Evaluation of Voxel Discretized Geometries
1 Introduction
2 Methods
2.1 Geometry Models
2.2 Scanning Imaging Algorithm
2.3 Shape Analysis: Thinning and Distance Function Algorithms
3 Results
4 Conclusions
References
User-Centered Design for Manufacturing and Assembly: An Application in the Early Design Stages
1 Introduction
1.1 Related Literature Review
2 Method
3 Results
3.1 Step 1 Modular Design
3.2 Step 2 Decisional Step
4 Discussion and Conclusions
References
A Medical Device for Correctly Positioning Custom-Made Implants for Fractured Orbital Cavities
1 Introduction
2 Design and Development of the Orbital Implant Positioner
2.1 Requirements and Functions
2.2 Technical Specifications
2.3 Conceptualisation
2.4 Embodiment and Detail Design
2.5 Physical Prototyping
3 Results and Discussion
3.1 Assessment of the Patient-Specific Implant Accuracy
3.2 Assessment of the Implant Positioner Orientation
4 Conclusions
References
A 3D Modeling Approach to Realize and Test Customized Knee Implants
1 Introduction
2 Method and Tools
2.1 Case Study
3 Results and Discussion
4 Conclusions
References
A Design-Based Thinking for Alarm Use and Management in the Neonatal Intensive Care Unit
1 Introduction
1.1 Communication
1.2 Alarm Fatigue
1.3 The Nurse
2 Methodology
2.1 Framework for Critical Alarms Design (Sanz-Segura, 2021) [2]
2.2 Data Collection
2.3 Data Analysis
2.4 Prototype the Concept
3 Results
3.1 Communicative Efficiency
3.2 Smart Alarm Management
3.3 Soundscape Treatment
3.4 Technical Aspects
4 Conclusion
5 Discussion
References
A Design Method for Custom Functionally Graded Lattice Orthoses
1 Introduction
2 Methods
2.1 Acquisition of the Geometry
2.2 3D Modelling
2.3 Manufacturing
3 Results
4 Conclusions
References
Automatic Segmentation of Stomach of Patients Affected by Obesity
1 Introduction
2 Materials and Methods
2.1 Dataset
2.2 Network Architecture
2.3 Training Configuration
2.4 Metrics and Evaluation
3 Results
4 Conclusions
References
Automatic Outcomes in Minnesota Dexterity Test Using a System of Multiple Depth Cameras
1 Introduction
2 Material and Methods
2.1 Instrumentation
2.2 Experimental Protocol
2.3 Data Processing
3 Results and Discussions
4 Conclusion
References
Optimal Design of Surface Functionally Graded Dental Implants with Improved Properties
1 Introduction
2 Materials and Methods
2.1 CAD Modelling and Implant Design Criteria
2.2 Numerical Simulation
3 Results and Discussion
4 Conclusions
References
The Morphological and Geometrical Segmentation of Human Thoracic and Lumbar Vertebrae: An Automatic Computer-Based Method
1 Introduction
2 The Proposed Computer-Based Methodology
2.1 Local Coordinate System (LCS) Identification
2.2 3D Semantic Features Segmentation
2.3 3D Geometric Features Segmentation
2.4 3D Dimensional Features Evaluation
3 Results
4 Conclusion
References
Interpreting Emotions with EEG: An Experimental Study with Chromatic Variation in VR
1 Introduction
1.1 The Representation of Emotions
1.2 Overview of the Color Influence and EEG Emotions Interpretation
2 Methods
2.1 EEG as a Physiological Tool for Understanding Users’ Emotions
2.2 Data Analysis and Feature Extraction
2.3 Experiment Setup
3 Results
4 Discussion
5 Conclusions
References
Evaluation Method of Modular Products for the Prosumer. Proposal and Improvement
1 Introduction
2 Methodological Description
3 Results
3.1 Optimized Evaluation Method
3.2 Products’ Evaluation
4 Discussion
5 Conclusions
References
Experimenting with the Design Thinking Methodology in Citizen Participation Contexts
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
References
Surface Grinding of Borosilicate Crown Glass Optics via a Robotic Approach Based on Superposed Trajectories
1 Introduction
2 Materials and Methods
2.1 Superposed Trajectories
2.2 Preliminary Calculations
2.3 Experimental Setup
2.4 Grinding Tests
3 Results
3.1 Material Removal
3.2 Surface Roughness and Profile Error Values
3.3 Surface Topography
4 Discussion
5 Conclusion
References
Design Requirements Assessment in the COMMAND (COmpact Multi-purpose Muon and Neutron Detector) Project
1 Introduction
2 Methods
2.1 Requirement Assessment
2.2 Proof of Concept Model
3 Results and Discussion
4 Conclusions
References
Path Approximation Strategies for Robot Manufacturing: A Preliminary Experimental Evaluation
1 Introduction
2 Path Planning in Robot Manufacturing
3 C1 Corner Smoothing Approach
4 Experiments
5 Conclusions
References
Design and Development of an IoT Enabled Device for Remote Monitoring of Hydraulic Hammers
1 Introduction
2 Proposed System
2.1 Material
2.2 Proposed System Architecture
3 Conclusion
References
Implementing Vocal Natural Language Interface to Enterprise Resource Planning System
1 Introduction: Are Current ERP Systems 4.0?
2 Related Works: Natural Language Interface to SQL Database
3 Proposition: A Deep Learning Vocal Interface to ERP
3.1 Speech Recognition
3.2 Text-to-SQL Parser
3.3 Post Process of SQL
4 Use Case on an Open ERP
5 Future Works and Conclusion
References
Collective Intelligence Application in a Kitting Picking Zone of the Automotive Industry
1 Introduction
2 Problem Description
2.1 Use Case, Problematic and Objective
2.2 State of the Art
3 Current Process Description
4 System Modeling Proposal
4.1 Identified Target
4.2 Architecture Proposal
5 Conclusions and Future Work
References
Application of Neural Networks for Smart Tightening of Aeronautical Bolted Assemblies
1 Introduction
2 Experimental Campaign
3 Neural Network
4 Results
5 Conclusion & Perspectives
References
Towards a Digital Factory in the Leather Goods Sector: The Case of an Italian Company
1 Introduction
2 Towards Smart and Digital Factories
3 The Proposed Methodology
3.1 STEP 1 - Evaluation of the Internal and External Situation
3.2 STEP 2 - Digitalization Level Measurement
3.3 STEP 3 - Business Digitalization Roadmap
3.4 STEP 4 - Strategy Application and Analysis
3.5 STEP 5 - Scalability
4 Case Study
4.1 Company Background and Motivations
4.2 Preliminary Method Application
5 Conclusions
References
Digital Continuity Based on Reinforcement Learning Model Transformations
1 Introduction
2 Related Work
3 Discussion
4 Overview of the Proposed Approach
4.1 Input Data: Dataset Presentation
4.2 Extraction of Graph Structure and Graph Information
4.3 Reinforcement Learning of Transformation Rules
5 Experiment
6 Conclusion
References
Enhancing Human-Robot Collaboration in the Industry 5.0 Context: Workplace Layout Prototyping
1 Introduction
2 Previous Works and Purpose of the Current Work
3 Suitable Tools for 3D Visualization
4 Integration of Software Tools Aiming at Digital Prototyping of HRC Workplace Layout
5 Case Study
6 Conclusions and Future Works
References
Requirements Management in Master Model Development: A Case Study in Fusion Engineering
1 Introduction
2 Master Model Procedure
2.1 Requirements and Design Parameters Identification
2.2 Parametric Digital Model Development
2.3 Geometrical Variants Generation and Verification
3 Case Study: Design Progress of a DEMO Divertor Subsystem
4 Conclusions and Future Works
References
Towards a Knowledge-Based Additive Manufacturing Approach
1 Introduction
1.1 Knowledge Modelling: Opportunities and Needs for Design and Manufacturing
1.2 Objectives and Scope
2 Knowledge Management for AM
2.1 Knowledge Elicitation as a KM Process Step
2.2 Elicitation Techniques Tested in AM Context
3 Proposition of a Knowledge Model Related to EBM
3.1 RKM Definition
3.2 RKM Structure Illustration
3.3 RKM Application to Part Orientation
4 Conclusion
4.1 Towards a Knowledge-Based Manufacturing Solution
4.2 Further Opportunities
References
Systems and Requirements Engineering: An Approach and a Software Tool for the Interactive and Consistent Functional Requirement Specification
1 Introduction
2 State of Art
3 Syntax Framework for Requirement Statements Construction
4 Tool Support and Evaluation of Theoretical Framework
4.1 Description of the Requirements Writing Tool (RWT)
4.2 Tool Testing for Framework Evaluation
4.3 Results
5 Conclusions and Future Works
References
Analysis of Guided Waves Dispersive Behavior for Damage Detection in Flat and Curved Composite Panels
1 Introduction
2 Test Articles and FE Modelling
3 Dispersion and Slowness in Flat and Curved Panels
4 Damage Detection
5 Concluding Remarks
References
Analysis of the Tensioning Field Induced by Stress Fibers in Nanoindented Stem Cells Adhered to a Flat Substrate
1 Introduction
2 Materials and Methods
2.1 Modelling and Simulation
2.2 Optimization
3 Results and Discussion
References
Framework for the Dynamic Modeling of a Helicopter Planetary Gear Train with Gear or Bearing Defect
1 Introduction
2 Main Concepts Definition and State of the Art
2.1 Integration Methods
2.2 Coupling Definition
2.3 Modeling Framework
3 MGB Modeling Framework
3.1 Heterogeneous Sub-models
3.2 Interactions and Couplings Between Sub-models
3.3 MGB Model: A 3-Axes Framework
4 First Results of the Method
5 Conclusion
References
Design of a Snap-Fit Joint Through a Multibody Model
1 Introduction
2 Materials and Methods
2.1 Multibody Model
2.2 Experimental Tests
2.3 Tolerance Analysis Parameters
3 Results
3.1 Experimental and Numerical Curves
3.2 Tolerance Analysis
4 Discussion
5 Conclusions
References
Towards Real-Time Physics-Based Variation Simulation of Assembly Systems with Compliant Sheet-Metal Parts Based on Reduced-Order Models
1 Introduction
2 Problem Formulation
2.1 Representation of the VS Model
2.2 Physics-Based Model and Computational Challenges
3 Proposed Methodology
4 Case Study
4.1 Implementation
4.2 Description of the Case Study: Aircraft Vertical Stabilizer
4.3 Results
5 Conclusions
References
Design and Comparison of High Performance Sailing Hulls
1 Introduction
2 High-Performance Sailing Dinghy
3 Design Approach
4 Case Study
5 Conclusions
References
Real-Time Approximative Thermal Simulation for Wire Arc Additive Manufacturing
1 Introduction
2 Thermal Simulation
3 Results
4 Conclusion
References
Simulation Methods for Plant Layout Design-A Case Study
1 Introduction
2 The Simulation Model
2.1 Real Plant Description
2.2 Simulation Model Construction
3 Experiments and Results Analysis
4 Conclusions
References
Optimizing Racing Wheelchair Design Through Coupled Biomechanical-Mechanical Simulation
1 Introduction
2 Material and Methods
2.1 Long Distance Race Model
2.2 Sprint Race Model
2.3 Dataset Considered to Test the Models
3 Results
3.1 Results of the Long-Distance Race Model
3.2 Results of the Sprint Race Model
4 Discussion
5 Conclusion
Annex
References
Preliminary Validation of a Virtual Environment for Simulation and Recognition of Human Gestures
1 Introduction
2 Methods
2.1 Subject Test
2.2 Modeling of the Human Upper Limb
3 Results
4 Discussion
5 Conclusion and Outlook
References
Energy Efficient Trajectory Planning in Robotic Cells via Virtual Prototyping Tools
1 Introduction
2 Techniques for Reducing Energy Consumption in Robotics
2.1 Hardware Methods
2.2 Software Methods
3 Trajectory Planning
4 Case Study: A Pick-and-Place Robotic Cell
5 Conclusion
References
Design and Implementation of a Passive Agitator to Increase UV Dose in WWTPs Disinfection Channels
1 Introduction
2 CFD Modeling Setup
2.1 Case Study: Geometry and Meshing
2.2 Turbulence Model
2.3 Lagrangian Approach for Simulating Reactor Performance
2.4 Radiation Model
2.5 Disinfection Kinetics Model
3 CFD Results
4 Experimental Results
4.1 Velocity Measures
4.2 Water Quality Measures and Comparison Between Systems
5 Conclusions
References
Numerical Optimization of a Composite Sandwich Panel with a Novel Bi-directional Corrugated Core Using an Animal-Inspired Optimization Algorithm
1 Introduction
2 Novel Bi-directional Corrugated Core
3 Group Search Optimizer (GSO)
3.1 Scanning Mechanism of the Producers
3.2 Scrounging Mechanism
3.3 Ranger Simulations
4 Finite Element Analysis
4.1 Optimization Study
5 Conclusions
References
A Topology Optimization Approach to Design of a Low Cost 3D Printable Monolithic Compliant Mechanism for FWMAV’s Wing Actuation
1 Introduction
2 Materials and Methods
2.1 Setup of FEM Analyses for Topology Optimization
3 Results
4 Conclusion
References
Computer-Aided Prognostics and Health Management Using Incipient Failure Detection
1 Introduction
2 Basic Background
3 Data Processing
4 Residual Life Estimation
5 Incipient Fault Identification
5.1 Data Set
5.2 Build-Up of Equivalent Damage Index
5.3 Results
6 Conclusion
References
Geometric Modelling and Analysis
On the Adoption of the STEP Neutral Format for Procedural Mechanical CAD Models
1 Introduction
2 Representation Formats
3 The STEP Format
3.1 STEP Protocols for 3D CAD
3.2 STEP Complexity
4 Experimental Evaluation of the Quality of Reconstructed CAD Models
5 Conclusions
References
Generative Design in Bicycle Frames: A Review
1 Introduction
1.1 Main Concepts
1.2 Literature Review
2 Different Types of GD
2.1 Shape Grammars (SG)
2.2 L-Systems (LS)
2.3 Cellular Automata (CA)
2.4 Genetic Algorithms (GA)
2.5 Swarm Intelligence (SI)
3 Design of a Bicycle Frame
4 Study Cases
4.1 Preliminary Projects
4.2 A New Bicycle Frame Through GD
4.3 The Fastest Bicycle in the World
4.4 Other Bicycle Designs Brought to Reality Through Additive Manufacturing (AM)
5 Conclusions
6 Future Directions
References
Geometric Analysis of Product CAD Models to Support Design for Assembly
1 Introduction
2 Background and Motivations
3 The Proposed Approach
3.1 Assembly Analysis
3.2 Component Analysis
3.3 Component Features Analysis
3.4 Geometrical Attributes Mapping
4 Test Case
5 Conclusions
References
Image-Driven Manufacturing of Graded Lattices by Fused Deposition Modeling
1 Introduction
2 Background
3 Modeling Method
3.1 G-code Computation
3.2 Volumetric Modeling
3.3 Editing G-code
4 Case Studies and Discussion
4.1 Single Image
4.2 Multiple Images
4.3 Discussion
5 Conclusions
References
Geometric Modeling of Lattice Structures with Selectively Activated Walls for Hydraulic Manifolds
1 Introduction
2 Geometric Modeling Method for Lattice Structures with Internal Walls
3 Case Study: Hydraulic Manifold
4 Conclusions
References
Semi-automatic Point Clouds Registration for Upper Limb Anatomy
1 Introduction
2 Scanning Equipment
3 Conventional Manual Registration Procedure
4 Semi-automatic Registration Strategy
4.1 Key-Points Detection Through Neural Network
4.2 Key-Points Selection and Rough Cloud Alignment
4.3 Registration Refinement
5 Results
6 Conclusions
References
Optimizing Galvanic Process: Wet Surface Estimation for Small Metal Parts to Avoid Material Waste
1 Introduction
2 Wet Surface Estimation
2.1 3D Optical Scanning Approach: Feasibility Analysis
2.2 3D Optical Scanning Approach: Tests and Selection of Commercially Available Solutions
3 Results
3.1 AuRum 3D
3.2 AuRum LT
3.3 Rexcan DS3 Silver
4 Conclusion
References
Reverse Engineering and Redesign of the Impeller of a Submersible Centrifugal Pump
1 Introduction
2 Case Study
3 3D Acquisition and CAD Modelling
3.1 Reconstruction of CAD Model
4 Redesign of the Impeller
4.1 Modal Analyses
4.2 Results
5 Conclusions
References
An Augmented Reality Framework for Remote Factory Acceptance Test: An Industrial Case Study
1 Introduction
2 Remote Factory Acceptance Test Architecture
3 Case Study
4 Experimental Set-Up
4.1 3D Scanning System
4.2 Measuring Software
4.3 Device for Augmented Reality
5 Experiments and Results
6 Conclusions and Future Works
References
Convolutional Neural Network for Background Removal in Close Range Photogrammetry: Application on Cultural Heritage Artefacts
1 Introduction
2 Methodology
2.1 Dataset
2.2 CNNs Set-Up
2.3 Evaluation Metric for Segmentation
3 Results
3.1 CNN Results
3.2 Reconstruction Based on Automatic Background Removal
4 Discussion
5 Conclusions
References
A Benchmark of Traditional Visual Descriptors and Convolutional Networks 'Off-the-Shelf' for Anomaly Detection
1 Introduction
2 Methods
2.1 Overall Procedure
2.2 Feature Extraction
2.3 Classification Model
3 Experiments
3.1 Datasets
3.2 Accuracy Estimation
3.3 Implementation, Execution and Reproducible Research
4 Results
5 Conclusions
6 Limitations and Future Work
References
Melt Pool Acquisition Using Near-Infrared Camera in Aluminum Wire Arc Additive Manufacturing
1 Introduction
2 Experimental Set-Up
3 Results and Discussion
3.1 Repeatability of the Method
3.2 Robustness of the Method
3.3 Sensitivity of the Method
4 Conclusion
References
Methodology for Image Analysis in Airborne Search and Rescue Operations
1 Introduction
2 Related Works
2.1 The Problem of Detecting Small Objects
3 The Proposed Methodology
4 Results
5 Conclusion and Future Works
References
Experimental and Numerical Investigations of Mechanical Behaviour for Titanium Bolted Joints with Friction Shim
1 Introduction
2 Specimen Preparation and Testing Procedure
2.1 Test Specimens
2.2 Friction Washer
2.3 Testing Setup
3 Experimental Test Results and Discussion
3.1 Static Loading to Failure
3.2 Fatigue Tests
3.3 Fatigue Failure Analysis
4 Finite Element Analysis
5 Conclusions
References
Interoperability Framework for a Computer-Aided Approach of Tolerance-Cost Optimization
1 Introduction
2 Related Work
3 Methodology
4 Case Study
5 Conclusions
References
A Digital Twin Approach for Smart Assembly of Aircraft Skin Panels with Mechanical Fasteners
1 Introduction and Background
2 ISAF Framework
3 Key Enabling Technologies
3.1 Key Enabling Technology (1): In-Line Part Measurement
3.2 Key Enabling Technology (2): Virtual Shimming Simulator
3.3 Key Enabling Technology (3): Shim Fabrication
4 Verification and Validation Using Vertical Stabiliser
5 Conclusions
References
A Technology Transfer Case in Graphic Engineering
1 Introduction and Motivation
2 Elements
3 Analysis and Diagnostic
4 Methodology
4.1 Delimitation Criterion of the Areas ZDP and ZP in LCC5/1996.
4.2 Visibility Indicators
4.3 Interaction Woodland-Vantage Point
5 Case Study
6 Results
7 Discussion
8 Conclusions
References
Ranged Segmentation of Slope Model for Spatial Analysis
1 Introduction
1.1 Case Study Site and Data
1.2 The Methodological Approach Adopted and the Results Obtained
2 Conclusions and Future Developments
References
Digital Thread and Building Lifecycle Management for Industrialisation of Construction Operations: A State-of-the-Art Review
1 Introduction
2 PLM Contributions to BIM Development
2.1 How to Leverage PLM to Enhance BIM?
2.2 Managing Data Classification to Represent a Project’s Evolution
2.3 Requirement and Change Management to Support Data Structuring
3 Improving Site Operations Management with 4D BIM
3.1 Why Would We Need 4D BIM?
3.2 4D BIM: General Principles
3.3 An Overview of 4D BIM Research Development
3.4 The Challenge of 4D BIM Implementation
4 Off-Site Construction: An Opportunity?
4.1 Why Would We Need to Build Off-Site?
4.2 A Holistic Design Approach to Support Lifecycle Thinking
4.3 How to Support an Off-Site Construction Project?
5 Summary and Future Works
References
Traceability of Uncertainty in Building Information Modelling Processes for Existing Structures
1 Introduction
2 Methods
3 Results
3.1 Detected Accuracy
3.2 Modelled Accuracy
4 Conclusions
References
Advanced Manufacturing and Control
Study of the Anisotropy of Triple Periodic Minimal Surface Structures Generated by Additive Manufacturing
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusions
References
Influence of TPU Lattice Structures on Functional Properties for the Design of Customized Rehabilitation Products
1 Introduction
2 Case Study: Rehabilitation Ball for Hands
2.1 Proceeding
2.2 Results
3 Conclusions
References
Preliminary Design Method Accounting for Shape Distortion in Metal Binder Jetting Parts: A Case Study
1 Introduction
2 Method and Model
3 Results and Discussion
4 Conclusions
References
Effects of Coating Post-processing on Mechanical Properties of Strut-and-Node-Based SLA Lattice Structures
1 Introduction
2 Materials and Methods
2.1 Specimen Geometry and Manufacturing
2.2 Coating Post-processing of the Specimens
2.3 Tensile Testing
3 Results
4 Discussion and Conclusions
References
Lattice Core FEM Simulation with a Modified-Beam Approach
1 Introduction
2 Numerical Analysis
2.1 Finite Element Model
2.2 Calibration Procedure
2.3 Comparison of Results
3 Conclusions and Future Works
References
A Design Methodology for Graded Density Triply Periodic Minimal Surfaces
1 Introduction
2 TPMS Shape and Relative Density Modelling
3 Graded Density Patterns: Design Methodology
3.1 Design Specifications
3.2 Design Parameters Calculation
3.3 Fundamental Shape Generation
3.4 Creation of Pattern Surfaces
4 Results and Discussion
4.1 Methodology Implementation
4.2 Case Study for Graded Density Constructs
4.3 Current Implementation Limitations
5 Conclusions
References
Could the Airframe of the Future Be Made of Metal Only? Potential of AM in the Helicopter Domain
1 Introduction
2 Methodology of Research
3 Helicopter Architectures
3.1 Architectures Description
3.2 Principal Subsystems (ATAs) Selection
4 Additive Manufacturing Potentials
4.1 Structural Optimisation and Weight Reduction
4.2 Integration of Functions
4.3 Other Potentials
5 System Optimization by AM
6 Design Framework
6.1 Simplified Model Proposal
7 Conclusion and Outlook
References
Design for Additive Manufacturing Tools: Are They an Effective Support for Designers?
1 Introduction and Research Background
2 Methodology
2.1 Evaluation Protocol
2.2 Metrics
3 Case Studies
3.1 Components
3.2 DfAM Tools
4 Results and Discussion
4.1 Quantitative Metrics Assessment
4.2 Qualitative Metrics Assessment
4.3 Discussion
5 Conclusions
References
An Analytical Cost Model for Laser-Directed Energy Deposition (L-DED)
1 Introduction and Literature Review
2 L-DED Cost Model
2.1 Cost Drivers
2.2 Process Parameters
2.3 Process Times
2.4 Raw Material Cost
2.5 Machine and Labour Costs
2.6 Consumable Cost
2.7 Energy Cost
2.8 Equipment Cost
3 Case Study and Results
4 Results and Discussions
5 Conclusions
References
Data Processing of L-PBF Manufactured Lattice Structures for Geometric Defects Characterization
1 Introduction
2 Method
2.1 Voxelization
2.2 Skeletonization
3 Application to 3D Registration
3.1 Error Distribution
3.2 Mean Node Distance
4 Conclusion
References
Mereo-DfAM: A Methodological Proposal of Mereotopological Design for Additive Manufacturing
1 Introduction
2 Literature Review
2.1 Design for Additive Manufacturing
2.2 Mereotopology
3 Process Discretization and Design Rules
3.1 Mereotopological Discretization of the Process
3.2 Rules Based on a Mereotopological Description
4 MereoDfAM
4.1 General Overview
5 Case Study
5.1 Mereotopological Description
5.2 Rules Application
6 Conclusion
References
Experiences of Additive Manufacturing for Nuclear Fusion Applications: The Case of the Wishbone of the Divertor of DEMO Project
1 Introduction
2 Methods and Materials
2.1 Selective Laser Melting
2.2 The Ti6Al4V ELI Alloy
3 The Case Study Application: SLM Wishbone Mock-Up of the DEMO Divertor Device
3.1 Description and 3D CAD Modelling of the Wishbone Mock-Up
3.2 Simulation and Prototyping of the SLM Wishbone Mock-Up
3.3 Prototyping of the Wishbone Mock-Up by Milling
4 Results and Discussion
5 Conclusion and Future Developments
References
Topology Optimization for Thin-Walled Structures with Distributed Loads
1 Introduction
2 Topology Optimization and Thin-Walled Structures: A State-of-the-Art Analysis
2.1 Topology Optimization Methodology
2.2 Thin-Walled Structures
3 Methodology
4 Case Studies
5 Conclusions
References
Embedded Workflow for Functionally Graded Material Fabrication with Multi-material Additive Manufacturing
1 Introduction
2 Deposition Strategies for Material Jetting
3 Heterogeneous Objects Modeling Strategies
4 Workflow for the Voxel Printing of Heterogeneous Objects
5 Case Study
6 Conclusion
References
Directed Energy Deposition Process Simulation to Sustain Design for Additive Remanufacturing Approaches
1 Introduction
1.1 Process Issues
1.2 Process Simulation
2 Methods and Tools
3 Case Studies
4 Results
4.1 Qualitative Study
4.2 Quantitative Study
5 Discussion
References
Eco-Assessment of Design for Additive Manufacturing Solutions Defined at Different Levels of Detail
1 Introduction
2 Literature Background
3 Methodology
3.1 Functional Unit and Operative Scenario
3.2 Compared Systems
3.3 Calculation Method
4 Results and Discussions
5 Conclusions
References
An Empirical Approach for Mechanical Behavior Characterization of a Small Diameter Cold-Drawn Steel Wire with a Three-Point Bending Test
1 Introduction
2 Experimental and Numerical Details
2.1 Materials and Testing
2.2 Numerical Modeling
3 Methodology
3.1 Experimental and Numerical Bending Tests
3.2 From a Bending Curve to a Stress-Strain Curve
4 Results and Discussions
5 Conclusion
References
Correlations Between the Hole Surface Integrity and Fatigue Life for Drilled 2024-T351 Aluminum Alloy
1 Introduction
2 Materials and Methods
2.1 Drilling Processes
2.2 Fatigue Test Samples
2.3 Surface Integrity Characterization
2.4 Fatigue Tests
3 Results and Discussion
4 Conclusions
References
Design and Creation of an Economical Camera Slider for Photography
1 Introduction
1.1 Background
1.2 Objectives
1.3 Type of Sliders
1.4 Applicability
2 Methodology
2.1 Evaluation of Alternative Propositions
2.2 Design of the Support Platform and Slide
2.3 Calculation of the Diameter of the Guide Rods
2.4 Assembly of the Slider
2.5 Programming of the Motor
3 Results - Final Manufacture of the Slider
4 Conclusion
References
Analysis of Data from In-Situ Monitoring of Additive Manufacturing Process
1 Introduction
1.1 Open-Architecture Lab Bench and Metamodel Development Methodology
2 Experimental Protocol and Design of Experiment
2.1 Design of Experiment Definition (DOE)
2.2 Metamodel Description
2.3 Experiments with Instrumented Lab-Bench WAAM
3 Results and Discussion
3.1 Metamodel for WAAM
3.2 Macro-defects Detection with Metamodel
4 Conclusion
References
Geometrical Tolerances: "Planarity" Measurements on Automotive Power Module During Its Assembly Processes-Flow
1 Introduction
2 Liquid Cooling Methods
3 Power Module: Description and Assembly Flow
3.1 Description
3.2 Assembly Process
4 Flatness Measurements
4.1 Measurements' DoE and Setup
4.2 Measurements' Results and Discussion
5 Conclusion
References
Toolpath Calculation Using Reinforcement Learning in Machining
1 Introduction
2 Literature Review
3 The VOM Environment
4 Tests
5 Conclusion
References
Simulation-Based Design of Reconfigurable Moulds for Injection Overmoulding
1 Introduction
2 Method
3 Case Study and Results
3.1 Product and Process Planning
3.2 Parts Design
3.3 Simulation-Based Design of the Mould
3.4 Part and Mould Detail Design
3.5 Mould Optimization
4 Discussion and Conclusions
References
Mixed Reality Platform Supporting Human-Robot Interaction
1 Introduction
2 Background
3 Material and Methods
3.1 Development Tools and Experimental Setup
3.2 Methods
4 Design of Mixed Reality Platform
5 Results
5.1 Coordinate Systems Alignment
5.2 End-Effector Trajectory Calculation
6 Conclusion
References
Markerless Motion Capture and Virtual Reality for Real-Time Ergonomic Analysis of Operators in Workstations with Collaborative Robots: A Preliminary Study
1 Introduction
2 Scientific Background
3 Methods and Tools
3.1 The Virtual Environment
3.2 Body Tracking and Real-Time Physical Ergonomics Analysis
4 Preliminary Test
5 Results and Discussion
6 Conclusion
References
Design of a Fabric-Based Antagonistic Pneumatic Actuator with Multiple Chambers for the Development of Soft Continuum Manipulators
1 Introduction
2 Design
2.1 Actuator
2.2 Electronic Control Unit
3 Prototyping
3.1 Realization of the Body
3.2 Realization of the Electronic Control Unit
4 Preliminary Experiments
5 Conclusions
References
Design, Fabrication and Characterization of a Novel Variable Stiffness Structure for Soft Continuum Robotics
1 Introduction
1.1 State of the Art
1.2 Selection of the Variable Stiffness Effect
2 Functional Principle
3 Design and Fabrication
4 Mechanical Characterization
5 Conclusion
References
Effects of Design Parameters on the Tip Steering Capabilities of Fabric Pneumatic Artificial Muscle-actuated Soft Growing Robots
1 Introduction
2 Soft Growing Robot Prototype Description
3 Soft Growing Robot Characterization
3.1 Experimental Plan
3.2 Experimental Setup
3.3 Results
4 Discussion
5 Conclusions
References
A User-Centered Approach Involving the Clinicians for the Design of Medical Devices: Case Study of a Soft Robotic Exoskeleton for Rehabilitation
1 Introduction
2 Methodology
3 User Viewpoint
3.1 Kano Questionnaire
3.2 Quality Function Deployment
4 Concepts Generation
5 Optimal Concept Selection
5.1 Kansei Words
5.2 ANOVA Analysis
6 Conclusion
References
Preliminary Design of an EMG Wearable Interface for the Actuation of Soft Pneumatic Artificial Muscles
1 Introduction
2 System Architecture
3 Experiments
3.1 Prototyping and Characterization of the McKibben Soft Actuator
3.2 Electronic Control Unit
3.3 EMG Data Capturing and Processing
3.4 Results
4 Conclusions
References
Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications
1 Introduction
2 Capacitive Force Sensors Design
2.1 Parallel-Plate Capacitor
2.2 Plate #1
2.3 Dielectric Layer
2.4 Plate #2
3 Sensor Prototype Fabrication
4 Preliminary Experimental Tests
4.1 Preliminary Experimental Results
5 Conclusions and Future Works
References
Conformity Rate Estimation for Shaft-Hole Pattern Fit Not Compliant with the Boundary Condition Design Criterion
1 Introduction
1.1 Aim of the Paper
2 Materials and Methods
2.1 The Updated Linear Stack-Up Model
2.2 General Hypothesis
3 External vs. Intrinsic Datum System
3.1 Datum System Transformation
4 2x Patterns Stack-Up
4.1 Stack-Up by Cases
4.2 Symmetric Stack-Up
4.3 Symmetric Adjusted Stack Up
4.4 Symmetric Optimized Stack Up
4.5 Monte Carlo Verification
5 Path Towards Generalization
6 The Case Study
7 Conclusions
References
A Case Study on the Correlation Between Functional and Manufacturing Specifications for a Large Injection Moulded Part
1 Introduction
1.1 The Case Study
2 Materials and Methods
2.1 Free State Acquisition
2.2 Numerical Simulation
2.3 Superimposition of the Effects
2.4 Approach Validation
2.5 Comparison
3 Results and Discussion
3.1 Comparison
4 Conclusions
References
InvNet: Transfer Learning on Point Cloud for Invariant Surface Classification
1 Introduction
2 Related Work
3 Point Cloud Processing for Invariant Surface Classification
3.1 Point Clouds Generation for Different Invariant Surfaces
3.2 Training Set Construction: A PCA-Based Method
4 Transfer Learning on the Training Set
5 Experiment and Discussion
5.1 Performance Evaluation
5.2 Practical Application of InvNet
5.3 Discussion
6 Conclusion and Outlook
References
X-Reality for Interactive Design
Apprentices’ Opportunities and Criticalities in Adopting Affordable User Interface Development Tools
1 Introduction
2 Material and Methods
2.1 InVision
2.2 BrioVR
2.3 HTC Vive Pro
3 Activities
3.1 Experience Planning
3.2 Team Definition
3.3 Interaction Definition
3.4 UI Implementation
3.5 User Testing
3.6 Opportunities/Criticalities List Generation
4 Results and Discussion
5 Conclusions and Perspectives
References
Role-Based Sailing Augmented Reality Interface: Three Proposals for Head-Mounted Display
1 Introduction
2 Motivation
3 State of the Art
4 AR Interfaces
4.1 Method
4.2 Screen-Stabilized Interface
4.3 Body-Stabilized Interface
4.4 Boat-Stabilized Interface
5 Discussion
6 Perspectives
7 Conclusions
References
Displaying Augmented Reality Manuals in the Design Phase of the Product Lifecycle
1 Introduction and Motivation
2 Case Study
2.1 Demo Versions of the AR Manual
2.2 Design of the Experiment
2.3 Participants
2.4 Results
3 Discussion
4 Conclusion and Future Works
References
From Therbligs to Visual Assets: A Technique to Convey Work Instructions in Augmented Reality Technical Documentation
1 Introduction
2 The Information Model for the Analysis of Work Instructions
2.1 From the Method of Therbligs to the Information Types
2.2 Case Study
3 Recommendation for the Choice of the Visual Assets
3.1 Identity
3.2 Location
3.3 Way-to
3.4 Notification
3.5 Order
3.6 Orientation
4 Conclusion and Future Work
References
Immersive Virtual Reality as a Training Tool for Safety Working Procedure in Confined Spaces
1 Introduction
2 The SPIRiT Design
2.1 Scenarios and User Tasks Identification Criteria
2.2 VR Technologies and Scenario Implementation
2.3 Locomotion and User Interaction
3 The Design of Experiment
4 Metrics and Procedure
5 Discussion and Conclusion
References
Tintoretto Unveiled: Interactive Virtual Experience for Artworks
1 Introduction
2 Digital Painting Workflow
3 Case Study
3.1 The Painting
3.2 Activity a: Unveil
3.3 Activity B: Color
3.4 The Experience at the Exhibition
4 Discussion
4.1 Guides’ Experience
4.2 Visitors’ Experience
4.3 Future Works
5 Conclusions
References
X-Reality for Preliminary Evaluation and Training of 3D Laser Scanning Process of Large Products
1 Introduction
2 Test Case
3 XR App Development
3.1 Objectives
3.2 App Development
4 Simulations and Results
4.1 XR App for Scenarios Comparison
4.2 XR App for Training
5 Conclusions
References
From Virtual to Augmented Reality Training System: An IC.IDO-Based Approach
1 Introduction
2 Methodology
3 Case Study
4 Experimental Setup
4.1 Hardware Architecture
4.2 Software Architecture
5 Implementation of the Training Sessions
5.1 The Conversion to the AR Session
5.2 The VR-AR Cooperative Session
6 The Proposed Training Systems
6.1 Assisted Virtual Training
6.2 Autonomous Virtual Training
6.3 Assisted Augmented Training
6.4 Autonomous Augmented Training
6.5 Assisted Cooperative Training
6.6 Autonomous Cooperative Training
7 Conclusion and Future Works
References
An AR Tool to Support Warehouse Operations in the Context of Industry 4.0
1 Introduction
2 Background of AR Technology in Logistics
3 Research Approach
4 Use Case
5 Study Results
6 Conclusions
References
A Novel Approach to Use Marker-Less Mixed Reality Applications with In-Motion Systems
1 Introduction and Research Background
2 Tracking Solutions
2.1 Marker-Based Tracking
2.2 Marker-Less Tracking
3 Design and Development of the MR Application
3.1 Requirements
3.2 Spatial Anchor Development
3.3 Unity Application
3.4 Anchor Movement
4 Test and Results
5 Conclusions
References
A Human-Centred Design Approach for Designing Augmented Reality Enabled Interactive Systems: A Kitchen Machine Case Study
1 Introduction
2 Related Work
3 Human-Centred Design Approach
3.1 Context of Use and User Research
3.2 User Needs, User Requirements, and System Requirements
3.3 Design
3.4 Evaluation
4 AR-Enabled Kitchen Machine Case Study
4.1 User Research
4.2 User Needs, User Requirements, and System Requirements
4.3 Design
5 AR-Enabled Kitchen Machine Demo
6 Conclusions and Future Works
References
Performance Evaluation of an Immersive Measurement Instrument for Automotive Field Applications
1 Introduction
2 Methods
2.1 Analysis of Scene Geometries
2.2 Measurement Algorithms
2.3 Test Validation
3 Results
4 Conclusions
References
Virtual Golden Zone for Enhancing the Ergonomics of Complex Production Lines
1 Introduction
2 Related Works and Paper Positioning
3 Virtual Reality Integration in an Industrial Case Study
3.1 The Virtual Golden Zone Methodology
3.2 The VR System Description
4 Evaluation of the VGZ Usability Through a Questionnaire
5 Conclusions
References
Development of a Parametric Scene Editor of Serious Games for Safety in Workplaces
1 Introduction
2 Serious Games
3 Tools and Methods
4 Case Study
4.1 Targets and Game Features
4.2 Methodological Approach
4.3 Objects, Events and Errors
4.4 The Text Editor
4.5 Requests, Standards and Rules
5 Results
6 Conclusions and Future Works
References
Design of XR Applications for Museums, Including Technology Maximising Visitors’ Experience
1 Introduction
2 Technology Assessment for Implementing Virtual Museums
2.1 Evaluation Criteria
2.2 Measurement, Data Collection and Analysis
3 Importance of Multisensory Experiences in XR Applications
4 Multisensory Experience for a Virtual Museum
5 Conclusion
References
Exploring People’s Visual Perception and Its Impact on Evaluation of a Tiny House Prototype Using Eye Tracking Technology
1 Introduction and Background
2 Methodology
2.1 Materials
2.2 Participants
2.3 Procedure
3 Data Collection and Analysis
3.1 Collection and Processing of Objective Data
3.2 Collection and Processing of Subjective Data
3.3 Data Analysis and Results
3.4 Comments
4 Discussion, Conclusions, and Outlook
References
Heat on Sight - Display Thermal Radiation in the Peripheral Field of View
1 Introduction
1.1 Topic of the Paper
1.2 Specific Objective of the Paper
2 State of the Art and Related Work
2.1 User Centred Design Approach
2.2 Feedback Display
3 Peripheral Thermal Radiation Display (PTRD)
4 Methodology
5 Results
6 Limitations of the Study
7 Conclusion and Outlook
Annex 1
References
HRV Analysis Through the Sphygmic Wave Pulsation of the Femoral Artery
1 Introduction
2 Methods of HRV Analysis
2.1 Detection System
2.2 Signal Processing
2.3 Experiment Protocol
3 Results
4 Conclusions
References
A Mixed Reality Application for Collaborative and Interactive Design Review and Usability Studies
1 Introduction
2 The Context of Study
2.1 System Design and Simulation Tools for Assessment
3 Field Study Application: Aircraft Galley
3.1 Mixed Reality Platform Development
3.2 Task Execution
4 Conclusion and Future Works
References
Posture Interactive Self Evaluation Algorithm Based on Computer Vision
1 Introduction
2 Material and Methods
2.1 Reference Exercise – Isometric Squat
2.2 Video Dataset
2.3 The Pose Estimator and the Use of Keypoints
2.4 Machine Learning and Used Approach
3 Results and Discussion
4 Conclusions
References
A Review of Benchmarks for Visual Defect Detection in the Manufacturing Industry
1 Introduction
2 Industrial Needs for Visual Defect Detection
2.1 Metrics for Classification
2.2 Metrics for Segmentation
2.3 Supervision Level
2.4 Training and Evaluation Procedure
3 Review of Existing Benchmarks
3.1 The DAGM Dataset
3.2 NEU Surface Defect Database
3.3 Severstal Steel Defect Dataset
3.4 KolektorSDD
3.5 KolektorSDD2
3.6 MVTec Anomaly Dataset
3.7 Dataset Comparison
4 Proposed Guidelines and Conclusion
References
Education and Representation Techniques
Soft Skills Teaching Proposal for “Designers”
1 Introduction
2 State of the Art
3 Purpose of the Proposed Teaching Experiment
4 Prerequisites
5 The Soft Skills Considered
6 Soft Skills and the Designer’s Professional Profile
7 General Configuration of the Teaching Experiment
7.1 Initial Test
7.2 Preparatory Part
7.3 Guided Examples
7.4 Workshops
8 Conclusions
References
A New Approach to Teach Conceptual Design Integrating Additive Manufacturing Constraints
1 Introduction
2 Literature Review
2.1 Active Pedagogical Guidelines
2.2 Additive Manufacturing Process
2.3 Immersive Tool for Teaching
3 DfAM Courses
3.1 Structure
3.2 Step A
3.3 Step B
3.4 Step C
4 Difficulties and Validation of the Proposed Approach
5 Conclusion and Perspectives
References
Project and Problem Based Learning for Students and Teachers Involvement in a Multidisciplinary Engineering Project
1 Context and Problematic
1.1 General Context
1.2 The Issues of Students Engagement
1.3 The Issues of Teachers Engagement
2 State of the Arts Related to Students and Teachers Issues
2.1 Theoretical Studies for Students’ Engagement
2.2 Discussion of the State of the Arts: The Choice of the Learning Method
3 The Project and Problem-Based Approach: Implementation
3.1 Implementation of Concepts for the Teaching Practice
4 Scenario Execution
5 Results: First Qualitative Analysis
5.1 Students Survey
5.2 Results of PBL Implementation for Teachers
6 Conclusion and Roadmap for Improvement in Future Work
References
Formative Assessment in Context of COVID-19 Pandemic
1 Introduction
2 Objectives
3 Development of Innovation
3.1 Context of the Artistic Expression I Subject
3.2 Experience Schedule
3.3 Learning Assessment
4 Results
5 Conclusions
References
Think-Sketch-Create: Improving Creative Expression Through Sketching
1 Introduction
2 Materials and Methods
2.1 Workshop Rationale and Definition
2.2 Context and Participants
2.3 TSC Application
2.4 Assessment
3 Results and Discussion
3.1 Incorporation and Adaptation to the Curriculum
3.2 Valuation of the TSC Workshops
3.3 Broadening Knowledge
4 Conclusions
References
Methodology for Online Assessment of Technical Drawing Skills of Industrial Engineering Students
1 Introduction
2 Technical Drawing Methods to Test Skills
2.1 Graphical Essay Questions (GEQ)
2.2 Second Technical Drawing Test (TDT2)
3 Results and Discussion
3.1 Moodle Test Statistics: TDT2
3.2 Paper and Online Test Results Comparison
4 Conclusions and Future Developments
References
Techniques for the Representation of the Application of Historical-Archaeological Evidence Scales in Heritage Assets
1 Introduction
2 Objectives
3 Materials and Software
4 Methodology
5 Results
6 Conclusions
References
Author Index

Citation preview

Lecture Notes in Mechanical Engineering

Salvatore Gerbino · Antonio Lanzotti · Massimo Martorelli · Ramón Mirálbes Buil · Caterina Rizzi · Lionel Roucoules
Editors

Advances on Mechanics, Design Engineering and Manufacturing IV Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing, JCM 2022, June 1–3, 2022, Ischia, Italy

Lecture Notes in Mechanical Engineering

Series Editors:
Fakher Chaari, National School of Engineers, University of Sfax, Sfax, Tunisia
Francesco Gherardini, Dipartimento di Ingegneria “Enzo Ferrari”, Università di Modena e Reggio Emilia, Modena, Italy
Vitalii Ivanov, Department of Manufacturing Engineering, Machines and Tools, Sumy State University, Sumy, Ukraine

Editorial Board Members:
Francisco Cavas-Martínez, Departamento de Estructuras, Construcción y Expresión Gráfica, Universidad Politécnica de Cartagena, Cartagena, Murcia, Spain
Francesca di Mare, Institute of Energy Technology, Ruhr-Universität Bochum, Bochum, Nordrhein-Westfalen, Germany
Mohamed Haddar, National School of Engineers of Sfax (ENIS), Sfax, Tunisia
Young W. Kwon, Department of Manufacturing Engineering and Aerospace Engineering, Graduate School of Engineering and Applied Science, Monterey, CA, USA
Justyna Trojanowska, Poznan University of Technology, Poznan, Poland

Lecture Notes in Mechanical Engineering (LNME) publishes the latest developments in Mechanical Engineering—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNME. Volumes published in LNME embrace all aspects, subfields and new challenges of mechanical engineering. Topics in the series include:

• Engineering Design
• Machinery and Machine Elements
• Mechanical Structures and Stress Analysis
• Automotive Engineering
• Engine Technology
• Aerospace Technology and Astronautics
• Nanotechnology and Microengineering
• Control, Robotics, Mechatronics
• MEMS
• Theoretical and Applied Mechanics
• Dynamical Systems, Control
• Fluid Mechanics
• Engineering Thermodynamics, Heat and Mass Transfer
• Manufacturing
• Precision Engineering, Instrumentation, Measurement
• Materials Engineering
• Tribology and Surface Technology

To submit a proposal or request further information, please contact the Springer Editor of your location:
China: Ms. Ella Zhang at [email protected]
India: Priya Vyas at [email protected]
Rest of Asia, Australia, New Zealand: Swati Meherishi at [email protected]
All other countries: Dr. Leontina Di Cecco at [email protected]

To submit a proposal for a monograph, please check our Springer Tracts in Mechanical Engineering at https://link.springer.com/bookseries/11693 or contact [email protected]

Indexed by SCOPUS. All books published in the series are submitted for consideration in Web of Science.

More information about this series at https://link.springer.com/bookseries/11236

Salvatore Gerbino · Antonio Lanzotti · Massimo Martorelli · Ramón Mirálbes Buil · Caterina Rizzi · Lionel Roucoules
Editors

Advances on Mechanics, Design Engineering and Manufacturing IV

Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing, JCM 2022, June 1–3, 2022, Ischia, Italy

Editors

Salvatore Gerbino
Department of Engineering
University of Campania “Luigi Vanvitelli”
Aversa, Italy

Antonio Lanzotti
Department of Industrial Engineering
University of Naples Federico II
Naples, Italy

Massimo Martorelli
Department of Industrial Engineering
University of Naples Federico II
Naples, Italy

Ramón Mirálbes Buil
Ingeniería de Diseño y Fabricación
Universidad de Zaragoza
Zaragoza, Zaragoza, Spain

Caterina Rizzi
University of Bergamo
Dalmine, Bergamo, Italy

Lionel Roucoules
Arts et Métiers Institute
Aix-en-Provence Cedex 1, France

ISSN 2195-4356   ISSN 2195-4364 (electronic)
Lecture Notes in Mechanical Engineering
ISBN 978-3-031-15927-5   ISBN 978-3-031-15928-2 (eBook)
https://doi.org/10.1007/978-3-031-15928-2

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface and Acknowledgements

The scientific society ADM organized the first Italian conference in 1976 (in Giardini Naxos, Sicily) to foster the discussion about engineering design tools and methods in the Italian academic community. Twenty years later, the first Italo-Spanish conference with Ingegraf was organized in 1996 (Naples) to share this discussion with Spanish colleagues belonging to Ingegraf. The first international ADM conference was then celebrated in Florence in 1997 to share research topics with the European scientific community. After 40 years, the first JCM conference with Ingegraf and Aip-Primeca, now S.mart, was organized in 2016 (Catania) to promote networking also with the French scientific community. Today we celebrate the JCM conference as it returns to Italy after the editions organized in Spain and in France. We are proud to host the participants of the fourth JCM conference, organized in Ischia by the University of Naples Federico II and the University of Campania “L. Vanvitelli”.

In 1995, we organized the ADM conference at the Royal Palace of Caserta with the University of Campania “L. Vanvitelli”, and in 2003 we organized the ADM international conference in Naples at the Natural Sciences Museum of Federico II. Since that time, we have planned to organize the next conference together in Ischia, because we all agreed that Ischia is a very welcoming location for all of us. This conference aims to consolidate the relationship among our communities. In the last 20 years, double degrees, international cooperation agreements, and Erasmus agreements have been signed among our universities, so we believe that our networking efforts have contributed, in a small part, to the internationalization goals of Italian universities and to the European spirit of new generations. Our conferences, even if organized by French, Spanish, and Italian scientific societies, are open to participants from other European countries and from abroad who share the same wish to start scientific and educational cooperation.

Finally, we want to thank our Gold Sponsors Zeiss and Dassault Systèmes, and our Silver Sponsor Cadland, which not only supported the event but also gave a valuable contribution to developing innovative solutions for Industry 4.0 needs. We believe that our sponsors represent not only leading actors in partnership with universities, but also a successful synergy between people belonging to different worlds.

Keynote speakers were selected from the European academic and industrial community based on their vast knowledge of the fields from which the conference topics are drawn. Stephan Odenwald is Head of the Department of Sports Equipment and Technology at the Technical University of Chemnitz; his keynote is on “Challenges in Sports Engineering”. Guido Porro is Managing Director for Southern Europe of Dassault Systèmes; his keynote is on “The Quest for Sustainability – The Power of Virtual Twin”. Vincent Cheutet is Director of the DISP laboratory (Decision and Information System for Production) at INSA Lyon; his keynote is on “Digital Twin”. Valentín Gómez-Jáuregui is a Civil Engineer from the E.T.S. de Ingenieros de Caminos of Santander (Universidad de Cantabria, Spain) and the École de Génie Civil (Université de Liège, Belgium); his keynote is on “Design and Manufacturing of Deployable Tensegrity Structures”.

A novelty of this conference is the third mission experience talk. Normally, a scientific conference leaves no room for the presentation of third mission activities. We try to innovate the conference program by introducing a talk on a third mission best experience that involves private firms and public institutions. We want to share experiences and discuss their social impact in order to improve public engagement. The talk concerns the design of the Ferrero Kinder – Joy of Moving Park (JoMP) for Expo 2015 and then for Vicolungo in 2020. The Joy of Moving Park of Ferrero is a social responsibility project with high social impact that promotes a healthy lifestyle in new generations. We believe that a goal of our conference could be to award private firms for their social responsibility projects carried out in cooperation with universities. We therefore award Laura Sottovia, communication manager of JoMP and former head of Ferrero Communication, Gianpiero Vietto, Global Manager of Ferrero Social Responsibility Projects, and Gianni Lanfranco, President of Advanced Distribution (Olympic volleyball athlete), for the Ferrero JoMP project. We deeply thank all of them for accepting our invitation to attend the third mission experience talk, and we hope that this talk can inspire other private firms to start ethical projects with high social impact, and younger colleagues to cooperate with them.

The book is organized into five parts, each one corresponding to a track of the conference:

Track 1: Design Methods and Tools
Track Chairs: Paz MORER-CAMO (Ingegraf), Julien LE DUIGOU (S.mart), Francesco LEALI (ADM), Antonio Mancuso (ADM, Co-Chair for “Design for Sustainability and EcoDesign”), Daniele Regazzoni (ADM, Co-Chair for “Engineering Methods in Medicine”).
Topics: Design for Sustainability and EcoDesign; Engineering Methods in Medicine; Experimental Methods in Product Development; Digital Twin; Digital Factory; Digital Model for Energy Transition; Knowledge and Product Data Management; Simulation, Analysis and Optimization.


Track 2: Geometric Modelling and Analysis
Track Chairs: Francisco CAVAS MARTINEZ (Ingegraf), Frédéric DEMOLY (S.mart), Enrico VEZZETTI (ADM).
Topics: Computer-Aided Design; Reverse Engineering, Digital Acquisition, and Inspection; Image Process and Analysis; Geometric and Functional Characterization of Products; Surveying, Mapping and GIS Techniques; Building Information Modelling.

Track 3: Advanced Manufacturing and Control
Track Chairs: Francesca CAMPANA (ADM), Sébastien CAMPOCASSO (S.mart), Rikardo MINGUEZ GABIÑA (Ingegraf).
Topics: Design for Additive Manufacturing; Design for Manufacturing and Assembly; Integrated Product and Process Design; Collaborative Robotics; Soft Robotics; Geometric Product Specification and Tolerancing.

Track 4: X-Reality for Interactive Design
Track Chairs: Frédéric NOËL (S.mart), David RANZ ANGULO (Ingegraf), Antonio Emmanuele UVA (ADM).
Topics: Virtual and Augmented Reality; User-Centred Design and Virtual Ergonomics; AI for Image Processing and Pattern Recognition.

Track 5: Education and Representation Techniques
Track Chairs: Gianmaria CONCHERI (ADM), Cristina MANCHADO del VAL (Ingegraf), Lionel ROUCOULES (S.mart).
Topics: Teaching Product Design and Drawing History; Teaching Engineering Drawing; Representation Techniques.

A special session on Design for Soft Robotics was promoted, as the topic is becoming a challenging research area for our scientific sector, together with robotics, automation, and controls. We believe that our conference has to promote special sessions on emerging scientific topics carried out within international cooperation projects of excellence, such as BIOIC between Federico II and Fraunhofer IWU. A great job was done by the track chairs in coordinating more than 150 reviewers, who selected 140 papers starting from 160 abstracts, with at least two blind reviews for each paper. Our special thanks go to everyone involved in the review process. In particular, we want to mention two young professors, Giuseppe Marannano and Francesco Gherardini, for their endless commitment to supporting the management of the conference and the continuous communication with the authors. Finally, special thanks go to:
– the President of ADM, Vincenzo Nigrelli, for having accepted the responsibility of the entire organization and for trusting in our work;
– the President of Ingegraf, Ramón Miralbés Buil, and the President of S.mart, Lionel Roucoules, for the pleasant and fruitful cooperation;


– the President of the Scientific Committee of ADM, Caterina Rizzi, for her smart way of dealing with all the questions raised by the management of the conference;
– the driving force of the Organizing Committee and Coordinator of the track chairs, Massimo Martorelli, always present and clever in managing all the questions concerning the conference, from room organization to the handling of special review problems;
– all the Chairmen involved in managing blended sessions for oral and poster presentations;
– all the participants, for having accepted the gamble of submitting papers during the pandemic emergency and for having planned in advance to attend the conference in presence or in remote mode.

We believe that the success of this conference is due to the effort and the hope that each of us has nurtured in the past to live again a "normal" experience, such as a conference, instead of a "special" one due to the crisis we are experiencing. We all want peace, we all wish to be in touch without fear, and we all want to start networking more and more than in the past. Per aspera ad astra; ad maiora, semper!

Salvatore Gerbino
Antonio Lanzotti

Organization

Conference Chairs
Antonio Lanzotti, Università degli Studi di Napoli Federico II, Italy
Salvatore Gerbino, Università degli Studi della Campania Luigi Vanvitelli, Italy

Conference Advisory Chairmen
Vincenzo Nigrelli (ADM President), Università degli Studi di Palermo, Italy
Ramón Miralbes Buil (Ingegraf President), Universidad de Zaragoza, Spain
Lionel Roucoules (S.mart President), Ecole Nationale Supérieure d'Arts et Métiers, France

Scientific Committee Coordinator
Caterina Rizzi, Università degli Studi di Bergamo, Italy

Organizing Committee
Antonio Lanzotti, Università degli Studi di Napoli, Italy
Stanislao Patalano, Università degli Studi di Napoli, Italy
Massimo Martorelli, Università degli Studi di Napoli, Italy
Salvatore Gerbino, Università degli Studi della Campania, Italy
Francesco Caputo, Università degli Studi della Campania, Italy
Alessandro Greco, Università degli Studi della Campania, Italy
Giuseppe Di Gironimo, Università degli Studi di Napoli, Italy
Fabrizio Renno, Università degli Studi di Napoli, Italy
Andrea Tarallo, Università degli Studi di Napoli, Italy


Ferdinando Vitolo, Università degli Studi di Napoli, Italy
Teodorico Caporaso, Università degli Studi di Napoli, Italy
Stanislao Grazioso, Università degli Studi di Napoli, Italy
Tommaso Ingrassia, Università degli Studi di Palermo, Italy
Giuseppe Marannano, Università degli Studi di Palermo, Italy
Francesco Gherardini, Università di Modena e Reggio Emilia, Italy
Donato Perfetto, Università degli Studi della Campania, Italy
Alessandro De Luca, Università degli Studi della Campania, Italy
Giuseppe Lamanna, Università degli Studi della Campania, Italy
Giuseppe Perrotta, Università degli Studi di Napoli, Italy
Luigi Calvanese, Università degli Studi di Napoli, Italy
Paola Muratto, Università degli Studi di Napoli, Italy

Scientific Committee

Track General Chair
Massimo Martorelli, Università degli Studi di Napoli, Italy

Track Chairs

Design Methods and Tools
Francesco Leali (ADM)
Antonio Mancuso (ADM, Co-chair for "Design for Sustainability and EcoDesign")
Daniele Regazzoni (ADM, Co-chair for "Engineering Methods in Medicine")
Paz Morer-Camo (Ingegraf)
Julien Le Duigou (S.Mart)

Geometric Modelling and Analysis
Enrico Vezzetti (ADM)
Francisco Cavas Martinez (Ingegraf)
Frédéric Demoly (S.Mart)

Advanced Manufacturing and Control
Francesca Campana (ADM)
Rikardo Minguez Gabiña (Ingegraf)
Sébastien Campocasso (S.Mart)

X-Reality for Interactive Design
Antonio Emmanuele Uva (ADM)
David Ranz Angulo (Ingegraf)
Frédéric Noël (S.Mart)


Education and Representation Techniques
Gianmaria Concheri (ADM)
Cristina Manchado del Val (Ingegraf)
Lionel Roucoules (S.Mart)

ADM
Giovanni Berselli, University of Genova
Francesca Campana, University of Roma Sapienza
Giorgio Colombo, Polytechnic of Milano
Gianmaria Concheri, University of Padova
Ilaria Cristofolini, University of Trento
Luca Di Angelo, University of L'Aquila
Lapo Governi, University of Firenze
Francesco Leali, University of Modena and Reggio Emilia
Antonio Mancuso, University of Palermo
Maurizio Muzzupappa, University of Calabria
Salvatore Massimo Oliveri, University of Catania
Armando Viviano Razionale, University of Pisa
Daniele Regazzoni, University of Bergamo
Antonio Emmanuele Uva, Polytechnic of Bari
Enrico Vezzetti, Polytechnic of Torino

Ingegraf
Fernando J. Aguilar Torres, University of Almería
Elidia Beatriz Blázquez Parra, University of Málaga
Paz Morer-Camo, University of Navarra
Francisco Cavas Martinez, Technical University of Cartagena
Oscar David De Cozar Macias, University of Malaga
Valentín Gomez Jauregui, University of Cantabria
Maider Iturrondobeitia Ellacuria, University of the Basque Country
Ismael Lengua Lengua, Polytechnic University of Valencia
Cristina Manchado del Val, University of Cantabria
Rikardo Minguez Gabiña, University of Basque Country
Cesar Antonio Otero Gonzalez, University of Cantabria
Guillermo Peris Fajarnes, Polytechnic University of Valencia
David Ranz Angulo, University of Zaragoza
José Ignacio Rojas Sola, University of Jaen
Irene Sentana Gadea, University of Alicante
Erlantz Lizundia, University of the Basque Country


S.Mart
Alex Ballu, Université de Bordeaux
Sébastien Campocasso, Université de Toulon
Vincent Cheutet, INSA Lyon
Jean-Yves Dantan, Arts et Métiers Metz
Frédéric Demoly, Université de technologie de Belfort-Montbéliard
Emmanuel Duc, SIGMA Clermont
Alexandre Durupt, Université de Technologie de Compiègne
Xavier Fisher, ESTIA Bidart
Florent Laroche, Ecole Centrale Nantes
Julien Le Duigou, Université de Technologie de Compiègne
Frédéric Noël, Grenoble INP
Manuel Paredes, INSA Toulouse
Maud Rio, Université Grenoble Alpes
Lionel Roucoules, Arts et Métiers - Aix-en-Provence
Nadège Troussier, Université de technologie de Troyes

Contents

Design Methods and Tools

A Computer-Aided Tool for the Energy Optimization of Industrial Robots in Manufacturing Applications . . . 3
Fabio Merlo, Greta Vazzoler, and Giovanni Berselli

Experimentation and Characterization of Mycelium-Based Biocomposites for Ephemeral Product Applications . . . 15
Ignacio López-Forniés, Rosa Aurensanz Mallada, Aroa Ejarque-Ortiz, and María A. Madre Sediles

Life Cycle Assessment of a Mobile Tiny House Made with Sustainable Materials and Design Implications . . . 28
Laura Ruiz-Pastor, Stefania Altavilla, Chiara Nezzi, Yuri Borgianni, and Guido Orzes

Industrial Scraps Valorization: Designing Products to Make Detached Value Chains Circular . . . 39
Cappelletti Federica, Rossi Marta, Ciccarelli Marianna, and Germani Michele

Circular Strategies to Improve Ephemeral Products Sustainability Through Co-creation and Its Metrics . . . 51
Ignacio López-Forniés, Álvaro Aguerri Galindo, and Jorge Sierra-Pérez

A Step Forward Life Cycle Assessment to Optimize Products and Increase Company Eco-design Competencies . . . 62
Rossi Marta, Cappelletti Federica, and Germani Michele

Workstation Design Centered to Workers' Safe Use . . . 75
Patrick Martin, Bruno Daille-Lefèvre, Xavier Godot, and Jelena Petronijevic

Design and Manufacture of a Sustainable Recycled Plastic Shredder . . . 88
José M. Tejero-Olalla, José Macías-García, M. Carmen Ladrón-de-Guevara-Muñoz, Óscar D. de-Cózar-Macías, Francisca J. Castillo-Rueda, and Manuel D. Marín-Granados

Critical Analysis on the Limitations of the Diffusion of Life Cycle Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Christian Spreafico, Davide Russo, and Daniele Landi Implementation of a Sustainable Washing System Applied to Waste Recycling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110 Álvaro Millán-Estepa, Elena Díez-Martínez, M. Carmen Ladrón-de-Guevara-Muñoz, Óscar D. de-Cózar-Macías, E. Beatriz Blázquez-Parra, and Isidro Ladrón-de-Guevara-López Basic Design and Virtual Prototyping of a Hydrofoil Hybrid Daysailer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Domenico Speranza, Romolo Di Bernardo, Massimo Martorelli, Antonio Gloria, Claudio Pensa, and Stefano Papa Virtual Design for Repair: The Case of a Gearmotor . . . . . . . . . . . . . . 135 Daniela Francia, Davide Bove, Gian Maria Santi, Patrich Ferretti, Giampiero Donnici, Leonardo Frizziero, and Alfredo Liverani Can MaWR-Method for Symmetry Plane Detection be Generalized for Complex Panfacial Fractures? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148 Luca Di Angelo, Paolo Di Stefano, Lapo Governi, Antonio Marzola, and Yary Volpe Mixed Reality-Based Support for Total Hip Arthroplasty Assessment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159 Chiara Innocente, Pietro Piazzolla, Luca Ulrich, Sandro Moos, Stefano Tornincasa, and Enrico Vezzetti A Semiautomatic Procedure to Assist Physicians in Paediatric Airway Stenting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170 Chiara Santarelli, Luca Puggelli, Yary Volpe, Paola Serio, and Rocco Furferi An Engineering Method to Evaluate Care Processes and Introduce Televisits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 Anna Savoldelli, Daniele Regazzoni, Simona Ludi, and Caterina Rizzi Modeling of Cutting Operations for Robotic Surgery Using CAE Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 Stefano Pettinato, Roberto Francesco Pitzalis, Jesús Ortiz, and Giovanni Berselli


Skeleton and Medial Axis Functions Evaluation of Voxel Discretized Geometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204 Nicola Cappetti, Silvia Pierri, and Carlotta Fontana User-Centered Design for Manufacturing and Assembly: An Application in the Early Design Stages . . . . . . . . . . . . . . . . . . . . . . . . . . 217 Cristina Renzi and Francesco Leali A Medical Device for Correctly Positioning Custom-Made Implants for Fractured Orbital Cavities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 Manila Caragiuli, Marco Mandolini, Agnese Brunzini, Alida Mazzoli, and Mario Pagnoni A 3D Modeling Approach to Realize and Test Customized Knee Implants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Anna Ghidotti, Daniele Landi, Daniele Regazzoni, and Caterina Rizzi A Design-Based Thinking for Alarm Use and Management in the Neonatal Intensive Care Unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 Rosana Sanz-Segura, Eduardo Manchado Pérez, and Ignacio García de Paredes A Design Method for Custom Functionally Graded Lattice Orthoses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265 Riccardo Sponchiado, Francesca Uccheddu, Luca Grigolato, Pierandrea Dal Fabbro, and Gianpaolo Savio Automatic Segmentation of Stomach of Patients Affected by Obesity . . . 276 Andrea Pretto, Ilaria Toniolo, Alice Berardo, Gianpaolo Savio, Silvana Perretta, Emanuele Luigi Carniel, and Francesca Uccheddu Automatic Outcomes in Minnesota Dexterity Test Using a System of Multiple Depth Cameras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 Teodorico Caporaso, Giuseppe Sanseverino, Dominik Krumm, Stanislao Grazioso, Raffaele D’Angelo, Giuseppe Di Gironimo, Stephan Odenwald, and Antonio Lanzotti Optimal Design of Surface Functionally Graded Dental Implants with Improved Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294 Pietro Ausiello, Massimo Martorelli, Ida Papallo, Antonio Gloria, Roberto Montanari, Maria Richetta, and Antonio Lanzotti The Morphological and Geometrical Segmentation of Human Thoracic and Lumbar Vertebrae: An Automatic Computer-Based Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306 Luca Di Angelo, Paolo Di Stefano, and Emanuele Guardiani


Interpreting Emotions with EEG: An Experimental Study with Chromatic Variation in VR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 318 Ivonne Angelica Castiblanco Jimenez, Federica Marcolin, Luca Ulrich, Sandro Moos, Enrico Vezzetti, and Stefano Tornincasa Evaluation Method of Modular Products for the Prosumer. Proposal and Improvement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 Laura Asión-Suñer and Ignacio López-Forniés Experimenting with the Design Thinking Methodology in Citizen Participation Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344 Mar Melgarejo-Torralba, Dolores Parras-Burgos, Emilio López-Salmerón, Francisco J. F. Cañavate, and Daniel G. Fernández-Pacheco Surface Grinding of Borosilicate Crown Glass Optics via a Robotic Approach Based on Superposed Trajectories . . . . . . . . . . . . . . . . . . . . . 354 Eugenio Tamassia, Fabio Pini, Tim Grunwald, Thomas Bergs, and Francesco Leali Design Requirements Assessment in the COMMAND (COmpact Multi-purpose Muon and Neutron Detector) Project . . . . . . . . . . . . . . . 367 Ileana Bodini, Diego Paderno, Davide Pagano, Antonietta Donzella, Germano Bonomi, Aldo Zenoni, Andrea Cantone, and Valerio Villa Path Approximation Strategies for Robot Manufacturing: A Preliminary Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . 380 Matteo Bigliardi, Pietro Bilancia, Roberto Raffaeli, Margherita Peruzzini, Giovanni Berselli, and Marcello Pellicciari Design and Development of an IoT Enabled Device for Remote Monitoring of Hydraulic Hammers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390 Farhad Heidarpour, Alessandro Ciccolella, and Antonio E. Uva Implementing Vocal Natural Language Interface to Enterprise Resource Planning System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399 Shengzhe Zhang and Julien Le Duigou Collective Intelligence Application in a Kitting Picking Zone of the Automotive Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410 Santiago Montoya Zapata, Nathalie Klement, Cristovão Silva, Olivier Gibaru, and Meriem Lafou Application of Neural Networks for Smart Tightening of Aeronautical Bolted Assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 421 Charly Foissac, Alain Daidié, Stéphane Segonds, and Clément Chirol Towards a Digital Factory in the Leather Goods Sector: The Case of an Italian Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 430 Giorgia Pietroni and Marco Marconi


Digital Continuity Based on Reinforcement Learning Model Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 Quentin Brilhault, Esma Yahia, and Lionel Roucoules Enhancing Human-Robot Collaboration in the Industry 5.0 Context: Workplace Layout Prototyping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 454 Castrese Di Marino, Andrea Rega, Ferdinando Vitolo, and Stanislao Patalano Requirements Management in Master Model Development: A Case Study in Fusion Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 466 Francesca Giovanna Lanzotti, Domenico Marzullo, Vito Imbriani, Giuseppe Mazzone, Jeong-Ha You, and Giuseppe Di Gironimo Towards a Knowledge-Based Additive Manufacturing Approach . . . . . 479 Christelle Grandvallet, Franck Pourroy, and Frédéric Vignat Systems and Requirements Engineering: An Approach and a Software Tool for the Interactive and Consistent Functional Requirement Specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491 Agnese Pasquariello, Ferdinando Vitolo, and Stanislao Patalano Analysis of Guided Waves Dispersive Behavior for Damage Detection in Flat and Curved Composite Panels . . . . . . . . . . . . . . . . . . . . . . . . . . 503 Donato Perfetto, Alessandro De Luca, Giuseppe Lamanna, and Francesco Caputo Analysis of the Tensioning Field Induced by Stress Fibers in Nanoindented Stem Cells Adhered to a Flat Substrate . . . . . . . . . . . . . . 512 Lorenzo Vaiani, Antonio Emmanuele Uva, and Antonio Boccaccio Framework for the Dynamic Modeling of a Helicopter Planetary Gear Train with Gear or Bearing Defect . . . . . . . . . . . . . . . . . . . . . . . . . . . . 520 Raphaël Mignot-Pasty, Lionel Roucoules, François Malburet, and Olivier Honnorat Design of a Snap-Fit Joint Through a Multibody Model . . . . . . . . . . . . 533 Giulia Pascoletti, Paolo Conti, Francesco Bianconi, and Elisabetta M. Zanetti Towards Real-Time Physics-Based Variation Simulation of Assembly Systems with Compliant Sheet-Metal Parts Based on Reduced-Order Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546 Mario Brandon Russo, Alessandro Greco, Salvatore Gerbino, and Pasquale Franciosa Design and Comparison of High Performance Sailing Hulls . . . . . . . . . 559 Antonio Mancuso, Antonio Saporito, and Davide Tumino


Real-Time Approximative Thermal Simulation for Wire Arc Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 569 Thibaut Tezenas Du Montcel, Nicolas Beraud, Frédéric Vignat, Jérôme Pailhès, Philippe Marin, and Franck Pourroy Simulation Methods for Plant Layout Design-A Case Study . . . . . . . . . 581 Mario Caterino, Marta Rinaldi, Maria Antonietta Turino, Marcello Fera, and Natalia Hartono Optimizing Racing Wheelchair Design Through Coupled Biomechanical-Mechanical Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 593 Adrien Loiseau, Thibault Marsan, Pablo Navarro, Bruno Watier, and Yann Landon Preliminary Validation of a Virtual Environment for Simulation and Recognition of Human Gestures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605 Giuseppe Sanseverino, Dominik Krumm, Lena Kopnarski, Julian Rudisch, Claudia Voelcker-Rehage, and Stephan Odenwald Energy Efficient Trajectory Planning in Robotic Cells via Virtual Prototyping Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614 Mario Baggetta, Giovanni Berselli, Roberto Razzoli, and Marta Zucchinetti Design and Implementation of a Passive Agitator to Increase UV Dose in WWTPs Disinfection Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626 Delia Trifi, Javier Climent, Rosario Arnau, Pablo Carratalà, Mairena García, Inés Beltrán, Cristobal Badenes, Sergio Chiva, and Raúl Martínez-Cuenca Numerical Optimization of a Composite Sandwich Panel with a Novel Bi-directional Corrugated Core Using an Animal-Inspired Optimization Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637 Giuseppe Marannano, Tommaso Ingrassia, Vito Ricotta, and Vincenzo Nigrelli A Topology Optimization Approach to Design of a Low Cost 3D Printable Monolithic Compliant Mechanism for FWMAV’s Wing Actuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 652 Giovanni Carollo, Tommaso Ingrassia, Antonio Pantano, Vincenzo Nigrelli, and Marco Cosimo Tripoli Computer-Aided Prognostics and Health Management Using Incipient Failure Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 664 Cecilia Gattino, Elia Ottonello, Mario Baggetta, Jacek Stecki, Roberto Razzoli, and Giovanni Berselli


Geometric Modelling and Analysis On the Adoption of the STEP Neutral Format for Procedural Mechanical CAD Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 677 Pedro Company, Jorge D. Camba, Stanislao Patalano, and Antonio Lanzotti Generative Design in Bicycle Frames: A Review . . . . . . . . . . . . . . . . . . 686 Valentin Gomez-Jauregui, Carlos Vigil-Fernández, Cristina Manchado, and César Otero Geometric Analysis of Product CAD Models to Support Design for Assembly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 698 Brigida Bonino, Franca Giannini, Marina Monti, Roberto Raffaeli, and Giovanni Berselli Image-Driven Manufacturing of Graded Lattices by Fused Deposition Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 711 Luca Grigolato, Stefano Rosso, Enrico Bernardo, Gianmaria Concheri, and Gianpaolo Savio Geometric Modeling of Lattice Structures with Selectively Activated Walls for Hydraulic Manifolds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 722 Stefano Rosso, Luca Grigolato, Gianmaria Concheri, and Gianpaolo Savio Semi-automatic Point Clouds Registration for Upper Limb Anatomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 733 Paolo Neri, Beatrice Aruanno, Sandro Barone, Alessandro Paoli, and Armando Viviano Razionale Optimizing Galvanic Process: Wet Surface Estimation for Small Metal Parts to Avoid Material Waste . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 743 Luca Puggelli, Monica Carfagni, Rocco Furferi, and Lapo Governi Reverse Engineering and Redesign of the Impeller of a Submersible Centrifugal Pump . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 755 Giuseppe Marannano, Vincenzo Nigrelli, Riccardo Pirri, and Vito Ricotta An Augmented Reality Framework for Remote Factory Acceptance Test: An Industrial Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768 Antimo Angelino, Massimo Martorelli, Andrea Tarallo, Chiara Cosenza, Stefano Papa, Antonio Monteleone, and Antonio Lanzotti Convolutional Neural Network for Background Removal in Close Range Photogrammetry: Application on Cultural Heritage Artefacts . . 780 Michele Bici, Francesco Gherardini, Lorena de Los Angeles Guachi-Guachi, Robinson Guachi, and Francesca Campana


A Benchmark of Traditional Visual Descriptors and Convolutional Networks ‘Off-the-Shelf’ for Anomaly Detection . . . . . . . . . . . . . . . . . . 793 Francesco Bianconi, Paolo Conti, Elisabetta Maria Zanetti, and Giulia Pascoletti Melt Pool Acquisition Using Near-Infrared Camera in Aluminum Wire Arc Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 803 Anthony Dellarre, Maxime Limousin, and Nicolas Beraud Methodology for Image Analysis in Airborne Search and Rescue Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 815 Francesco Ciccone, Antonio Bacciaglia, and Alessandro Ceruti Experimental and Numerical Investigations of Mechanical Behaviour for Titanium Bolted Joints with Friction Shim . . . . . . . . . . . . . . . . . . . . 827 Guillaume Pichon, Alain Daidié, Éric Paroissien, Audrey Benaben, and Clément Chirol Interoperability Framework for a Computer-Aided Approach of Tolerance-Cost Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 839 Andrea Petruccioli, Fabio Pini, and Francesco Leali A Digital Twin Approach for Smart Assembly of Aircraft Skin Panels with Mechanical Fasteners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 851 Pasquale Franciosa, Salvatore Gerbino, Ettore Stella, Luigi Berri, Nicola Gramegna, Nicola Gallo, and Massimo Martorelli A Technology Transfer Case in Graphic Engineering . . . . . . . . . . . . . . 862 César Otero, Cristina Manchado, and Valentin Gomez-Jauregui Ranged Segmentation of Slope Model for Spatial Analysis . . . . . . . . . . 873 Pierpaolo D’Agostino, Agustín Menéndez-Díaz, Giuseppe Antuono, Tomás E. Martínez-Chao, and Pedro G. Vindrola Digital Thread and Building Lifecycle Management for Industrialisation of Construction Operations: A State-of-the-Art Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 884 Félix Blampain, Matthieu Bricogne, Benoît Eynard, Céline Bricogne, and Sébastien Pinon Traceability of Uncertainty in Building Information Modelling Processes for Existing Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 895 Andrea di Filippo, Lucas Matias Gujski, Nicola Cappetti, and Francesco Villecco


Advanced Manufacturing and Control Study of the Anisotropy of Triple Periodic Minimal Surface Structures Generate by Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . 905 Ramon Miralbes, Natalia Santamaria, David Ranz, and José Antonio Gómez Influence of TPU Lattice Structures on Functional Properties for the Design of Customized Rehabilitation Products . . . . . . . . . . . . . . . . . . . . 914 Sergio de la Rosa, Pedro F. Mayuet, Miguel-Angel Pardo-Vicente, and Lucía Rodríguez-Parada Preliminary Design Method Accounting for Shape Distortion in Metal Binder Jetting Parts: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . 925 Macro Zago, Matteo Perina, and Ilaria Cristofolini Effects of Coating Post-processing on Mechanical Properties of Strut-and-Node-Based SLA Lattice Structures . . . . . . . . . . . . . . . . . . 937 Francesco Tamburrino, Paolo Neri, Alessandro Paoli, and Armando V. Razionale Lattice Core FEM Simulation with a Modified-Beam Approach . . . . . . 946 Davide Tumino, Andrea Alaimo, Calogero Orlando, Stefano Valvano, and Carmelo Rosario Vindigni A Design Methodology for Graded Density Triply Periodic Minimal Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 955 Emilio A. Ramírez, Nicolas Béraud, Franck Pourroy, François Villeneuve, and Matthieu Museau Could the Airframe of the Future Be Made of Metal Only? Potential of AM in the Helicopter Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 967 Cristiam Javier Lasso Perdomo, Dominique Millet, Myriam Orquera, Bertrand Gabriel, and Gregory De Dominici Design for Additive Manufacturing Tools: Are They an Effective Support for Designers? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 980 Marco Marconi, Alessio Zanini, Claudio Favi, and Marco Mandolini An Analytical Cost Model for Laser-Directed Energy Deposition (L-DED) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 993 Marco Mandolini, Mikhailo Sartini, Claudio Favi, and Michele Germani Data Processing of L-PBF Manufactured Lattice Structures for Geometric Defects Characterization . . . . . . . . . . . . . . . . . . . . . . . . . 1005 Kévin Ferreira, Nabil Anwer, and Charyar Mehdi-Souzani Mereo-DfAM: A Methodological Proposal of Mereotopological Design for Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017 Robin Kromer, Chloé Douin, and Elise Gruhier


Experiences of Additive Manufacturing for Nuclear Fusion Applications: The Case of the Wishbone of the Divertor of DEMO Project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1030 Domenico Marzullo, Barbara Motyl, Emanuele Vaglio, Stefano Filippi, Federico Scalzo, Giovanni Totis, Marco Sortino, Vito Imbriani, Giuseppe Mazzone, and Jeong-Ha You Topology Optimization for Thin-Walled Structures with Distributed Loads . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042 Antonio Bacciaglia, Alessandro Ceruti, Francesco Ciccone, and Alfredo Liverani Embedded Workflow for Functionally Graded Material Fabrication with Multi-material Additive Manufacturing . . . . . . . . . . . . . . . . . . . . . 1055 Riccardo Pigazzi, Marco Rossoni, and Giorgio Colombo Directed Energy Deposition Process Simulation to Sustain Design for Additive Remanufacturing Approaches . . . . . . . . . . . . . . . . . . . . . . 1067 Enrico Dalpadulo, Fabio Pini, and Francesco Leali Eco-Assessment of Design for Additive Manufacturing Solutions Defined at Different Levels of Detail . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079 Filippo Colombo Zefinetti, Christian Spreafico, Daniele Regazzoni, and Daniele Landi An Empirical Approach for Mechanical Behavior Characterization of a Small Diameter Cold-Drawn Steel Wire with a Three-Point Bending Test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1090 Julien Vaïssette, Manuel Paredes, and Catherine Mabru Correlations Between the Hole Surface Integrity and Fatigue Life for Drilled 2024-T351 Aluminum Alloy . . . . . . . . . . . . . . . . . . . . . . . . . 1103 Yann Landon, Alexandra Lacombe, Landry Arnaud Kamgaing Souop, Alain Daidié, Manuel Paredes, Audrey Benaben, and Clément Chirol Design and Creation of an Economical Camera Slider for Photography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1115 Pedro Medina-Triviño, Paula Triviño-Tarradas, Rafael Ortiz-Cordero, and Miguel Gonzalez-Redondo Analysis of Data from In-Situ Monitoring of Additive Manufacturing Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1126 David Comberton, Robin Kromer, and Eric Lacoste Geometrical Tolerances: “Planarity” Measurements on Automotive Power Module During Its Assembly Processes-Flow . . . . . . . . . . . . . . . 1138 Michele Calabretta, Alessandro Sitta, Angelo Alberto Messina, Salvatore Massimo Olivieri, and Gaetano Sequenzia


Toolpath Calculation Using Reinforcement Learning in Machining . . . . 1149 Frédéric Vignat, Nicolas Béraud, and Thibaut Tezenas Du Montcel Simulation-Based Design of Reconfigurable Moulds for Injection Overmoulding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159 Nicolò Galati, Francesco Gherardini, Caterina Guaitoli, and Alberto Vergnano Mixed Reality Platform Supporting Human-Robot Interaction . . . . . . . 1172 Nicolas Calzone, Monica Sileo, Rocco Mozzillo, Francesco Pierri, and Fabrizio Caccavale Markerless Motion Capture and Virtual Reality for Real-Time Ergonomic Analysis of Operators in Workstations with Collaborative Robots: a preliminary study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1183 Daniel Lanzoni, Andrea Cattaneo, Andrea Vitali, Daniele Regazzoni, and Caterina Rizzi Design of a Fabric–Based Antagonistic Pneumatic Actuator with Multiple Chambers for the Development of Soft Continuum Manipulators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1195 Stanislao Grazioso, Benedetta Maria Vita Ostuni, Teodorico Caporaso, Giuseppe Di Gironimo, and Antonio Lanzotti Design, Fabrication and Characterization of a Novel Variable Stiffness Structure for Soft Continuum Robotics . . . . . . . . . . . . . . . . . . . . . . . . . 1203 Linda Weisheit and Wilhelm Richard Wockenfuß Effects of Design Parameters on the Tip Steering Capabilities of Fabric Pneumatic Artificial Muscle-actuated Soft Growing Robots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1215 Mario Selvaggio, Stanislao Grazioso, Salvatore Fusco, Roberto Sabella, Giuseppe Andrea Fontanelli, Giuseppe Di Gironimo, Bruno Siciliano, and Antonio Lanzotti A User–Centered Approach Involving the Clinicians for the Design of Medical Devices: Case Study of a Soft Robotic Exoskeleton for Rehabilitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1227 Teodorico Caporaso, Stanislao Grazioso, Benedetta M. V. Ostuni, Angela Palomba, Giuseppe Di Gironimo, Giovanni Iolascon, and Antonio Lanzotti Preliminary Design of a EMG Wearable Interface for the Actuation of Soft Pneumatic Artificial Muscles . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1239 Teodorico Caporaso, Stanislao Grazioso, Benedetta M. V. Ostuni, and Antonio Lanzotti


Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1247 Paolo Bellitti, Teodorico Caporaso, Stanislao Grazioso, Antonio Lanzotti, Emilio Sardini, and Mauro Serpelloni Conformity Rate Estimation for Shaft-Hole Pattern Fit Not Compliant with the Boundary Condition Design Criterion . . . . . . . . . . . . . . . . . . . 1256 Mattia Maltauro, Roberto Meneghello, and Gianmaria Concheri A Case Study on the Correlation Between Functional and Manufacturing Specifications for a Large Injection Moulded Part . . . . . 1268 Mattia Maltauro, Roberto Meneghello, Gianmaria Concheri, Daniele Pellegrini, Massimo Viero, and Giuseppe Bisognin InvNet: Transfer Learning on Point Cloud for Invariant Surface Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1279 Yifan Qie and Nabil Anwer X-Reality for Interactive Design Apprentices’ Opportunities and Criticalities in Adopting Affordable User Interface Development Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1293 Stefano Filippi and Giovanni Vanino Role-Based Sailing Augmented Reality Interface: Three Proposals for Head-Mounted Display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1305 Francesco Laera, Alessandro Evangelista, Vito Manghisi, Massimo Foglia, and Michele Fiorentino Displaying Augmented Reality Manuals in the Design Phase of the Product Lifecycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1316 Enricoandrea Laviola, Michele Gattullo, and Alessandro Evangelista From Therbligs to Visual Assets: A Technique to Convey Work Instructions in Augmented Reality Technical Documentation . . . . . . . . 1327 Michele Gattullo, Enricoandrea Laviola, and Antonio Emmanuele Uva Immersive Virtual Reality as a Training Tool for Safety Working Procedure in Confined Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1340 Vito Modesto Manghisi, Alessandro Evangelista, Veronica Rossano, Claudia Giliberti, Raffaele Mariconte, Maurizio Diano, Valerio Galasso, and Antonio Emmanuele Uva Tintoretto Unveiled: Interactive Virtual Experience for Artworks . . . . . 1352 Beatrice Aruanno, Sandro Barone, Francesco Tamburrino, and Mario Covarrubias Rodriguez X-Reality for Preliminary Evaluation and Training of 3D Laser Scanning Process of Large Products . . . . . . . . . . . . . . . . . . . . . . . . . . . 1364 Alessandro Greco, Antimo Cardilicchio, and Salvatore Gerbino


From Virtual to Augmented Reality Training System: An IC.IDOBased Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1376 Sara Buonocore, Pietro Giovannoli, and Giuseppe Di Gironimo An AR Tool to Support Warehouse Operations in the Context of Industry 4.0 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1389 Edoardo Casciotta, Riccardo Karim Khamaisi, Roberto Raffaeli, and Margherita Peruzzini A Novel Approach to Use Marker-Less Mixed Reality Applications with In-Motion Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1401 Agnese Brunzini, Marianna Ciccarelli, Mikhailo Sartini, Giacomo Menchi, Alessandra Papetti, and Michele Germani A Human-Centred Design Approach for Designing Augmented Reality Enabled Interactive Systems: A Kitchen Machine Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1413 Marina Ricci, Alessandra Scarcelli, Annarita D’Introno, Valeria Strippoli, Simona Cariati, and Michele Fiorentino Performance Evaluation of an Immersive Measurement Instrument for Automotive Field Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1426 Michele Iaquinandi, Carlotta Fontana, Iolanda Fiorillo, Alessandro Naddeo, and Nicola Cappetti Virtual Golden Zone for Enhancing the Ergonomics of Complex Production Lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1436 Lucilla Dammacco, Raffaele Carli, Michele Gattullo, Vito Lazazzera, Michele Fiorentino, and Mariagrazia Dotoli Development of a Parametric Scene Editor of Serious Games for Safety in Workplaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1448 Fabrizio Renno, Stefano Papa, Rosario Polichetti, Domenico Coccorese, Raffaele d’Angelo, Corrado Grasso, Andrea Tarallo, and Antonio Lanzotti Design of XR Applications for Museums, Including Technology Maximising Visitors’ Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1460 Marina Carulli, Andrea Generosi, Monica Bordegoni, and Maura Mengoni Exploring People’s Visual Perception and Its Impact on Evaluation of a Tiny House Prototype Using Eye Tracking Technology . . . . . . . . . . . 1471 Aurora Berni, Chiara Nezzi, Laura Ruiz-Pastor, Stefania Altavilla, Ingrid Kofler, and Yuri Borgianni


Heat on Sight - Display Thermal Radiation in the Peripheral Field of View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1483 Aline Püschel, Wolfgang Kilian, and Stephan Odenwald HRV Analysis Through the Sphygmic Wave Pulsation of the Femoral Artery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1495 Luca Salvati, Nicola Cappetti, Matteo d’Amore, Anita Fiorentino, Arcangelo Pellegrino, Pasquale Sena, and Francesco Villecco A Mixed Reality Application for Collaborative and Interactive Design Review and Usability Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1505 Sandhya Santhosh and Francesca De Crescenzio Posture Interactive Self Evaluation Algorithm Based on Computer Vision . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1516 Emmanuele Barberi, Massimiliano Chillemi, Filippo Cucinotta, Demetrio Milardi, Marcello Raffaele, Fabio Salmeri, and Felice Sfravara A Review of Benchmarks for Visual Defect Detection in the Manufacturing Industry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1527 Philippe Carvalho, Alexandre Durupt, and Yves Grandvalet Education and Representation Techniques Soft Skills Teaching Proposal for “Designers” . . . . . . . . . . . . . . . . . . . . 1541 Edoardo Giovanni Maria Rovida, Alberta Gianotti, and Giulio Zafferri A New Approach to Teach Conceptual Design Integrating Additive Manufacturing Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1552 Robin Kromer and Elise Gruhier Project and Problem Based Learning for Students and Teachers Involvement in a Multidisciplinary Engineering Project . . . . . . . . . . . . . 1564 Camille Pédarriosse, Lionel Roucoules, Florian Huet, Esma Yahia, Arnaud Polette, and Mathias Kleiner Formative Assessment in Context of COVID-19 Pandemic . . . . . . . . . . 1576 Ana Serrano, Anna Biedermann, José Luis Santolaya, Natalia Muñoz, and Inmaculada Plaza Think-Sketch-Create: Improving Creative Expression Through Sketching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1585 M. Belén Calavia, Teresa Blanco, Ana Serrano, Anna Biedermann, and Roberto Casas


Methodology for Online Assessment of Technical Drawing Skills of Industrial Engineering Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1598 Diego Paderno, Ileana Bodini, Stefano Uberti, Valerio Villa, and Gabriele Baronio Techniques for the Representation of the Application of Historical-Archaeological Evidence Scales in Heritage Assets . . . . . . 1610 Irene Cáceres-Criado, Diego Francisco García-Molina, Rafael Enrique Hidalgo-Fernández, and Paula Triviño-Tarradas Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1621

Design Methods and Tools

A Computer-Aided Tool for the Energy Optimization of Industrial Robots in Manufacturing Applications

Fabio Merlo1, Greta Vazzoler2,3, and Giovanni Berselli2,4(B)

1 Danieli Centro Combustion S.p.A., Piazza Borgo Pila 39, 16129 Genova, Italy
[email protected]
2 Department of Mechanical, Energy, Management and Transportation Engineering, University of Genova, Via all'Opera Pia 15/A, 16145 Genova, Italy
[email protected], [email protected]
3 Institute of Mechanical Intelligence, Scuola Superiore Sant'Anna, Via Alamanni 13b, 56010 Pisa, Italy
4 ADVR Department, Istituto Italiano di Tecnologia, Via S. Quirico 19d, 16163 Genova, Italy

Abstract. Nowadays, energy efficiency of industrial plants is an issue of primary concern. This research aims at minimizing the Energy Consumption (EC) of Industrial Manipulators (IMs) leveraging Eco-programming strategies. In particular, building upon well-known methods for robot trajectory planning, a novel time-scaling approach is proposed, which employs a non-linear scaling function to better optimize EC. In this scenario, a Graphic User Interface (GUI) has been realized to ease the implementation of the presented algorithm, offering a fast and user-friendly tool to be used by robotic cell designers. Different manipulator models can be uploaded in the GUI, which then automatically runs the non-linear optimization process and retrieves a minimum-energy trajectory. As an example, a 6-Degrees-of-Freedom IM has been considered; the obtained results have been compared with other methods known from the literature and validated through a commercial tool. The proposed method is verified to be more effective for energy-intensive trajectories, possibly reaching EC savings of more than 10%. Keywords: Smart manufacturing · Eco-programming · Trajectory scaling · Industrial robotics · Graphic User Interface · Energy efficiency

1 Introduction

Lately, the manufacturing industry has been strongly influenced by the need to improve the energy efficiency and sustainability of robotized plants [1,2]. In particular, a big impact on the overall Energy Consumption (EC) derives from the electrical power required by Industrial Manipulators (IMs), which are usually programmed to work at their maximum speed so as to possibly increase the plant productivity. On the other hand, EC minimization in robotized industries may lead not only to a reduction of production costs but also of CO2 emissions, as recalled in [3]. Therefore, there is a clear need for EC optimization methods that are easily applicable to established industrial plants. Mainly, the tools for IM programming available on the market can only provide an EC computation, without optimizing the EC on the basis of the specific IM working conditions. Concerning the EC minimization of IMs, two major approaches can be identified from the scientific literature:

• Methods that involve considerable system modifications, such as energy optimization achieved via hardware replacement [4–6]; energy-optimal robot selection for specific operations [7]; energy-optimal cell layout design [8].
• Methods with no substantial revision of the industrial plant, e.g., energy optimization of paths and motion profiles, also considering the system dynamics and control [9–13]; scheduling aiming at minimum-energy strategies [14–16].

Regarding the second point, a practical example of trajectory planning optimization, considering a fixed path assigned to the IM, can be found in [17], which describes an alternative method for the robot EC optimization, as compared to the classic linear scaling [18]. In practice, by exploiting parameters made available by the IM control to the user, this approach is direct and easy to apply, although it is valid only for discontinuous laws of motion (e.g., trapezoidal speed law trajectories) and it requires the calculation of the IM inverse dynamics at each iteration of the optimization process, thus ending in long computational times. Indeed, the authors implement a design of experiments by varying the velocity and acceleration of the IM joints within the dynamic robot constraints, evaluating the EC of the IM for each attempt. Similarly, [19] presents a trajectory optimization process for a 6 Degrees of Freedom (DoFs) IM produced by ABB, showing that a significant amount of energy can be saved by simply limiting the velocity and acceleration of the IM joints. Referring to the current methods for robot trajectory planning, the primary aim of this work is to develop a novel optimization approach, reducing the EC of the considered IM without any constraints on the assigned speed motion law. A Computer-Aided tool for optimizing the EC of typical IMs has been designed in the software MATLAB, to better visualize the dynamic behaviour of the studied IM and, thus, to choose with ease the most effective strategy for the optimization process. Several typical IMs can be chosen, and the overall EC can be optimized based upon the assigned trajectory. The user can easily insert the desired parameters directly from the Graphic User Interface (GUI), and the code automatically runs the EC optimization, displaying plots and specific values of interest. The program code is kept accessible: it consists of a complete analytical dynamic model of a typical IM with 6 DoFs and a novel dynamic scaling optimization process. The proposed theoretical method has been compared to well-known techniques, i.e., linear scaling [18] and the 2-independent-parameter approach introduced in [17], and validated through a commercial tool, thus verifying the accuracy of the obtained results.


The rest of the paper is structured as follows: Sect. 2 contains the analytical models, from the dynamic scaling methods to the optimization processes; Sect. 3 introduces the developed tool, i.e., the GUI for the EC optimization of IMs; final remarks and future developments of the current study are provided in Sect. 4.

2 Dynamic Scaling for Trajectory Planning

With the term trajectory planning, one refers to the process of generating the inputs for the control system of an IM to enable a specific movement. By employing the so-called dynamic scaling procedure [20,21], the original trajectory assigned to the IM is scaled in time, allowing the manipulator to execute the imposed path with the proper law of motion, under the torque limit constraints of the IM joints themselves. In this way, the calculation of the torques required to perform the scaled trajectory is simplified, avoiding the re-computation of the IM inverse dynamics [18]. Let us first define the original trajectory in terms of angular positions of an n-joint IM as $q(t) \in \mathbb{R}^n$, namely a specific law of motion as a function of time $t$. The vectors $\dot{q}(t)$ and $\ddot{q}(t)$ represent, respectively, the velocity and acceleration of the IM joints. The original trajectory can be formulated as $q(t) = \bar{q}(r(t))$, where $\bar{q}$ is the scaled trajectory and $r(t)$ is the scaling function. The well-known inverse dynamics equation for a generic manipulator of $n$ DoFs (see [18]) can be written as in Eq. 1, which establishes the relationship between the torques $\tau$ requested for the implementation of the original trajectory $q$ and the torques $\bar{\tau}$ due to the trajectory $\bar{q}$, scaled through a generic function $r(t)$:

$$\bar{\tau}(r) = \frac{\tau_{ac} - \ddot{r}\, B(q)\, \bar{q}'(r)}{\dot{r}^{2}} + \frac{\tau_{v}}{\dot{r}} + \tau_{g} \qquad (1)$$

where the apex $(\,)'$ denotes $d/dr$ and the dot $\dot{(\,)}$ denotes $d/dt$. Also:

$$\begin{cases} \tau_{ac} = B(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} \\ \tau_{v} = F_{v}\,\dot{q} \\ \tau_{g} = \tau_{c} + \tau_{gr} = F_{c}\,\mathrm{sgn}(\dot{q}) + g(q) \end{cases} \qquad (2)$$

where $\tau \in \mathbb{R}^n$ is the torque required for the execution of the original trajectory and $\bar{\tau} \in \mathbb{R}^n$ refers to the scaled trajectory. $B \in \mathbb{R}^{n \times n}$ is the symmetric joint-space inertia matrix; $C \in \mathbb{R}^{n \times n}$ includes the Coriolis and centrifugal effects; $F_{v}, F_{c} \in \mathbb{R}^{n \times n}$ are, respectively, the matrices of the viscous and static friction coefficients; $g \in \mathbb{R}^n$ accounts for the gravitational loading. By choosing $r(t) = ct$, with $c$ a constant value, Eq. 1 can be simplified as Eq. 3, thus defining the linear scaling approach:

$$\bar{\tau}(r) = \frac{\tau_{ac}}{c^{2}} + \frac{\tau_{v}}{c} + \tau_{g} \qquad (3)$$

Concerning Eq. 3, the scaled trajectory can be run faster, for $c < 1$, or slower, for $c > 1$, changing the overall scaled motion time of the IM as $\bar{t}_{mot} = c\, t_{mot}$, with $t_{mot}$ the duration of the original trajectory.
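To make the scaling relations concrete, the following is a minimal MATLAB sketch (an assumed illustration, not the authors' implementation) that applies the linear scaling of Eq. 3 to a toy single-joint trajectory and evaluates a rough mechanical-energy figure of merit before and after scaling; the inertia and friction values are arbitrary:

```matlab
% Toy single-joint example (hypothetical values): linear time scaling per Eq. 3.
B  = 2.0;  Fv = 0.5;  g0 = 0.0;            % assumed inertia, viscous friction, gravity term
T  = 2.0;                                  % original motion time [s]
t  = linspace(0, T, 200)';                 % original time base
s  = t/T;
q  = (pi/2)*(10*s.^3 - 15*s.^4 + 6*s.^5);  % 5th-order polynomial law of motion
qd  = gradient(q, t);                      % joint velocity
qdd = gradient(qd, t);                     % joint acceleration

tau_ac = B*qdd;                            % acceleration-dependent term (Eq. 2)
tau_v  = Fv*qd;                            % viscous friction term (Eq. 2)
tau_g  = g0*ones(size(t));                 % gravity/static term (Eq. 2)

c       = 1.5;                             % c > 1: the scaled motion is slower
tau_bar = tau_ac/c^2 + tau_v/c + tau_g;    % Eq. 3, sample by sample

% Rough figure of merit: mechanical work along the original and scaled motions.
E_orig = trapz(t,   abs(tau_ac + tau_v + tau_g).*abs(qd));
E_scal = trapz(c*t, abs(tau_bar).*abs(qd/c));
fprintf('Mechanical energy: original %.3f, scaled %.3f\n', E_orig, E_scal);
```

The same structure extends to the general case of Eq. 1 by replacing the constant $c$ with the sampled derivatives $\dot{r}$ and $\ddot{r}$ of a generic scaling function, as discussed next.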


The linear scaling (Eq. 3) has found widespread application in the literature and in industrial plants. On the other hand, the dynamic scaling in its general form (as in Eq. 1) is not frequently employed, since only a few trajectory parameters can be practically customized by the user on older IM versions, such as the initial and final points of a path. In this context, this paper presents a novel dynamic scaling approach by defining a non-linear scaling function r(t) to optimize the EC of an IM with no constraints on the type of motion law to be used. Compared to [17], the required computational times are reduced, and every trajectory within the IM joint space can be explored. Clearly, by choosing a generic scaling function, and thus computing the scaled torques as in Eq. 1 instead of through the linear form (Eq. 3), the EC optimization process can be better customized, based upon the dynamic constraints of the IM. An accurate theoretical dynamic model of the robot system has been developed, leveraging well-known methods from the literature [17,18]. To validate such a model, the EC of the IM (in this case, the 6-DoFs KR 210–2, by KUKA [22]) is also calculated via the Robot Modeling Tool Chain [23]. Since many data, especially the dynamic parameters, are sensitive and not available unless authorized by KUKA, in this study the missing parameters have been found through well-established identification techniques, such as [24,25], and the results are provided in dimensionless form (in general, x* specifies a dimensionless quantity). For the dynamic scaling implementation, different strategies are evaluated. The non-linear scaling function r(t) has been defined through a spline designed with third-degree polynomial functions. Figure 1 shows the non-linear scaling function for the Fast-Slow and Slow-Fast strategies (FS, SF), the Fast-Slow-Fast and Slow-Fast-Slow strategies (FSF, SFS), and the Mix strategy. Thanks to this novel dynamic scaling approach, each trajectory can be scaled as desired, e.g., it can be run faster when the power demand of the IM is low, or slower when the IM needs to move its joints requiring high power. In this case, the overall time required for the manipulator to complete the trajectory can be kept fixed (i.e., $\bar{t}_{mot} = t_{mot}$).
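As an illustration of how such a third-degree spline scaling function can be built in practice, the following MATLAB sketch (with assumed, purely illustrative knot values; not the authors' implementation) uses shape-preserving piecewise-cubic interpolation to obtain a monotonic $\bar{t} = r(t)$ with fixed end time, together with the derivatives needed in Eq. 1:

```matlab
% Hypothetical non-linear scaling function t_bar = r(t) built from cubic pieces.
% The knots are monotonic and satisfy r(0) = 0 and r(T) = T, so the overall
% motion time is unchanged; the interior values shape the speed modulation.
T       = 4.0;                                 % total motion time [s]
t_knots = [0, 0.25, 0.50, 0.75, 1.00]*T;
r_knots = [0, 0.35, 0.50, 0.65, 1.00]*T;       % illustrative S-shaped profile
pp      = pchip(t_knots, r_knots);             % monotonicity-preserving cubic interpolant

t   = linspace(0, T, 400)';
r   = ppval(pp, t);                            % scaled time t_bar = r(t)
rd  = gradient(r, t);                          % r_dot,  used in Eq. 1
rdd = gradient(rd, t);                         % r_ddot, used in Eq. 1
```

The interior knot values could then act as decision variables of a constrained minimization of the EC computed through Eq. 1, subject to the joint torque limits of the IM.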

Fig. 1. Strategies evaluation for dynamic scaling through the non-linear spline function $\bar{t} = r(t)$: (a) Fast-Slow and Slow-Fast (FS and SF); (b) Fast-Slow-Fast and Slow-Fast-Slow (FSF and SFS); (c) Mix.


Table 1. EC saving percentage [%] (maximum values) for different robot paths (linear, circular, ellipsoidal) and different speed laws of motion (trapezoidal, harmonic, 5th-order polynomial).

Motion law                Linear path (x / y / z)    Circular path (x / y / z)    Ellipsoidal path
Trapezoidal               0.9 / 1.9 / 6.3            2 / 5.3 / 2.8                7.1
Harmonic                  1.9 / 4.3 / 11.5           4.2 / 7.9 / 5.8              10.2
Polynomial (5th-order)    4.3 / 6.7 / 14.4           6.5 / 10.0 / 9.0             13.3


Fig. 2. EC saving percentage [%] during robot motion: ellipsoidal path with 5th-order polynomial motion law.

The choice of the proper strategy depends on the assigned path and, thus, on the development of the power curve. Also, the efficiency of the method is strongly influenced by the difficulty of the path and by the imposed law of motion. To find the optimal EC of the IM, computed based upon the analytical robot system dynamic model [17], different optimization processes have been run, finding the optimal scaling function $r_{opt}(t)$ for the optimal motion time $t_{mot,opt}$. The EC optimization method just presented has been implemented in a user-friendly Computer-Aided tool, further investigated in Sect. 3, which supports the IM designer in the implementation of the process. Table 1 sums up the maximum EC saving percentage found after the optimization process for different laws of motion and paths, typical of the industrial sector, thus justifying the efficacy of the method. With reference to the same table, it can be highlighted that the EC saving rises from the linear to the ellipsoidal path and, also, from the trapezoidal speed law to the 5th-order polynomial speed law trajectory. Indeed, the maximum EC saving of 13.3% occurs for the ellipsoidal path run with a polynomial law of motion (Fig. 2). Generally speaking, from the obtained results, it can be stated that the efficiency of the method emerges for energy-intensive laws of motion.


Fig. 3. Dynamic scaling (Fast-Slow-Fast strategy) of the 5th-order Polynomial Speed Law trajectory for different paths. Robot EC normalized to the Trapezoidal Speed Law (TSL) [%], plotted from path n. 1 to n. 25 (the paths consist of linear motions along the y-axis of the robot).

Moreover, Fig. 3 shows a comparison between different motion laws for each of the imposed paths. Referring to a trajectory run with a trapezoidal speed law (grey blocks), the trajectory run with the 5th-order polynomial law, which originally consumed more energy (red blocks), can be optimized so that it reaches EC values in line with the EC of the reference trajectory (the green blocks are nearly equal to the grey blocks).

3 Graphic User Interface

This section describes the Robotics Trajectory Planner, a GUI implemented in the software MATLAB to easily run the dynamic scaling minimum-energy optimization process described in Sect. 2. Nevertheless, the program code is kept accessible to the user, allowing customization as desired and, thus, gaining flexibility. In order to use the proposed interface, the following tools are needed: i) MATLAB R2014b, or a version supporting the dot notation, including the standard toolboxes; ii) the Robotics Toolbox for MATLAB v10 [26] (available online). The Robotics Trajectory Planner is mainly divided into four sections, as depicted in Fig. 4:

Model. Trajectory. Energy calculation. Dynamic time-scaling optimization.

In the Model section shown in Fig. 5 the user can load the model of an IM, defined as an object of the SerialLink class, into the MATLAB workspace. By default, several IM models are available in the Robotics Toolbox installation folder and can be chosen by selecting the corresponding path (in this case, robot toolkit 10\rvctools\robot\models) in the Manipulator class.
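For reference, loading one of the toolbox models from the command line and attaching a payload (as the Model panel does through its Add Payload class) can be sketched as follows; the PUMA 560 model and the payload values are only illustrative examples, not the models used in this work.

% Sketch: load a Robotics Toolbox manipulator model and attach a payload
% (mdl_puma560 ships with the toolbox; model and payload values are examples)
mdl_puma560;                  % defines the SerialLink object p560 in the workspace
p560                          % display its kinematic and dynamic parameters
p560.payload(3, [0 0 0.05]);  % 3 kg point mass, 50 mm from the flange along z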


Fig. 4. Robotics trajectory planner GUI. Four main sections: 1) Model, 2) Trajectory, 3) Energy calculation, 4) Dynamic time-scaling optimization.

Fig. 5. Model panel: loading of the manipulator of interest from the Manipulator class. Classes defining the chosen robot: Kinematic model, Dynamic model, Balancer, (PMSM model ) and Drive model. Additional class: Add Payload.

However, since the dynamic behaviour of the manipulator is confidential to the producer, most of these models only define the kinematic parameters. Once the IM model is loaded, the tool recognizes how many parameters have been entered and warns the user if the model is not complete. Figure 5 reports the Model panel in detail. After choosing the IM, the corresponding Kinematic model, Dynamic model, Balancer, Permanent Magnet Synchronous servo Motors model (PMSM model) and Drive model classes are uploaded. If the tool does not identify the presence of the electric motors and the drive system



Fig. 6. Trajectory panel. Load Trajectory configuration (left). Create Trajectory configuration (right).


Fig. 7. Energy calculation panel. Total power absorbed from the network PAC plotted over the motion time tmot, and pie chart of the robot EC contributions (mechanical energy, chopper losses, iron losses, brake losses, other losses).

models, these are created with default values for the associated parameters. A possible balancing system of the second joint of the IM can also be defined by loading the .m file and describing the parameters linked to the Balancer class. Finally, the user can indicate the load to be applied to the end-effector of the IM by filling in the Add Payload class. The Trajectory section is illustrated in Fig. 6. On the one hand, the user can load a trajectory in the joint space of the IM (Fig. 6, left): the required format is a .txt file with n + 1 columns separated by tabs (the first column contains the vector of times, while the following ones include the angular positions, in degrees, of the n joints). Alternatively, the user can switch to the trajectory creation mode by clicking on the Trajectory menu at the top-left of the screen visible in Fig. 4. It is possible to define a path in space, rectilinear or spiral (linear in polar coordinates), and the law of motion to be used (Fig. 6, right). The tool then carries out the inverse kinematics to obtain the trajectory in the joint space of the IM. Also, the user can specify the robot cycle time (Fig. 6, Total cycle time class) and the brake closure delay time (Fig. 6, Brake release waiting time class). Figure 7 illustrates the Energy calculation section, where the user can perform the EC calculation of the selected trajectory executed with the loaded IM. As output, the tool returns the total power curve absorbed from the network PAC, plotted as a function of the motion time tmot, the EC and the power peak value max(PAC).
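As an illustration of the expected trajectory file and of the EC figure of merit, the following sketch loads a joint-space trajectory in the described .txt format and estimates the consumed energy with a Robotics Toolbox model. The file name, the use of a 6-axis toolbox model and the loss coefficients are assumptions made for the example only; they are not the tool's internal code or identified drive parameters.

% Sketch: read a joint-space trajectory (.txt, n+1 tab-separated columns:
% time [s], then n joint angles in degrees) and estimate the EC.
data = dlmread('line6.txt', '\t');        % assumed 6-axis trajectory file
t  = data(:, 1);                          % time vector [s]
q  = deg2rad(data(:, 2:end));             % joint positions [rad]
qd = zeros(size(q)); qdd = zeros(size(q));
for j = 1:size(q, 2)
    qd(:, j)  = gradient(q(:, j), t);     % joint velocities [rad/s]
    qdd(:, j) = gradient(qd(:, j), t);    % joint accelerations [rad/s^2]
end
mdl_puma560;                              % any SerialLink model with dynamic data
tau  = p560.rne(q, qd, qdd);              % joint torques via inverse dynamics [Nm]
Pmec = sum(max(tau .* qd, 0), 2);         % mechanical power, no regeneration [W]
kt = 0.25; R = 0.8;                       % assumed motor constant and resistance
Pcu  = sum(R * (tau ./ kt).^2, 2);        % copper losses [W]
EC   = trapz(t, Pmec + Pcu);              % energy consumption estimate [J]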


Fig. 8. Dynamic time-scaling optimization panel. Linear scaling settings and Nonlinear scaling settings configurations. Results configuration: Robot EC and total power absorbed by the network PAC plotted during the motion time tmot ; comparison between the scaled trajectory with different optimization processes (linear, non-linear scaling) and the original one; pie chart of the EC contributions (mechanical energy, chopper losses, iron losses, brake losses, other losses) for the linear and non-linear scaling.

The different EC contributions (whose detailed definition may be found in [27]), i.e. the mechanical energy and the losses in the chopper, in the iron, in the brake opening/closure phase and other losses, are plotted in a pie chart. It is worth noting that the visualization of the power curve has been the key for the choice of the dynamic scaling strategy (Fig. 1) during the optimization process presented in Sect. 2. Indeed, the method aims at tailoring the scaling strategy to different portions of the motion law depending on their energy intensity. In the Dynamic time-scaling optimization section depicted in Fig. 8, the user can choose whether to run a linear scaling optimization [18] or to apply the dynamic scaling developed in this research with the proper non-linear scaling function (Sect. 2, Fig. 1). Before running the non-linear scaling, it can be useful to perform a linear scaling by specifying the time interval through the Minimum motion time and Maximum motion time classes and defining the number of iterations through the n check-points class in the Linear scaling settings configuration visible in Fig. 8. As output, the tool returns the optimal values of the motion time tmot, max(PAC) and EC of the scaled trajectory, specifying the percentage variations with respect to the original one. The tool also plots the graph of EC = f(tmot) (Fig. 8, top-right) and the power absorbed from the network PAC for the optimal trajectory (Fig. 8, bottom-left). Then, the dynamic scaling optimization can be run by specifying the following parameters in the Non-linear scaling settings configuration reported in Fig. 8:

• Number of iterations (n check-points class);
• Motion time, preferably the optimal value found with the previous linear scaling optimization (Motion time class);


• Function to be minimized, choosing between the EC and the peak power max(PAC) (Target class);
• Strategy to be used, choosing one among those proposed in Sect. 2 and shown in Fig. 1 (Strategy class).

Once the non-linear scaling optimization has been carried out, the tool returns the values of the peak power max(PAC) and the EC of the optimal scaled trajectory, specifying the percentage variations with respect to the original trajectory and to the scaled trajectory previously obtained with the linear scaling (Fig. 8, Results configuration). The tool also updates the graphs of the linear scaling by inserting the data relative to the non-linear scaling optimization. By launching successive simulations, all the plots are updated. Finally, the last optimized trajectory can be saved by clicking on the Save menu. The user can also display different figures by choosing from those proposed in the settings of the Plots menu, e.g., the IM workspace, the joint-space positions, speeds or accelerations, the motor shaft torques, the total power drawn from the grid, the DC link voltage, and the non-linear scaling iterations. By launching a new linear scaling optimization, all the plots are cleared and a new problem is started.
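A minimal sketch of how such a non-linear scaling optimization loop can be set up is given below (to be saved as demo_nonlinear_scaling.m). The single-joint toy dynamic model, the loss coefficients and the check-point parameterization are illustrative assumptions, not the optimizer embedded in the tool.

function demo_nonlinear_scaling
    % Non-linear time-scaling optimization sketch: the scaled times at the
    % inner check-points are the decision variables; fminsearch minimizes the
    % EC of the rescaled motion for a fixed total motion time.
    t0 = linspace(0, 4, 201)';                         % original time grid [s]
    s  = t0/t0(end);
    q  = deg2rad(90)*(10*s.^3 - 15*s.^4 + 6*s.^5);     % 5th-order polynomial law [rad]
    tmot = 4.4;                                        % motion time (e.g. from linear scaling)
    x0   = [0.25 0.5 0.75]*tmot;                       % initial guess for inner check-points
    xopt = fminsearch(@(x) traj_ec(x, t0, q, tmot), x0);
    fprintf('Optimal inner check-points [s]: %s\n', mat2str(xopt, 3));
end

function EC = traj_ec(x, t0, q, tmot)
    % EC of the motion rescaled through the candidate check-points x
    ck_t    = [0 1 2 3 4];                             % check-points, original axis
    ck_tbar = [0 sort(x) tmot];                        % candidate scaled times
    if any(diff(ck_tbar) <= 0), EC = Inf; return, end  % keep r(t) monotone
    tbar = interp1(ck_t, ck_tbar, t0, 'pchip');        % non-linear scaling r(t)
    qd   = gradient(q, tbar);
    qdd  = gradient(qd, tbar);
    tau  = 2.5*qdd + 0.4*qd;                           % toy single-joint dynamics [Nm]
    EC   = trapz(tbar, max(tau.*qd, 0) + 0.8*(tau/0.25).^2);  % mech. power + copper losses
end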

4 Conclusions

This paper proposed a novel minimum-energy optimization method for the trajectory planning of IMs. A Computer-Aided tool has been designed to easily implement the presented analytical process, consisting of the dynamic scaling of the trajectory through a non-linear scaling function. Different strategies are evaluated and defined based upon the power curve of the IM during the motion. The user can upload the desired manipulator by choosing one of the typical IMs available directly from the tool. The main parameters of the chosen IM are automatically identified, and the user can define the trajectory to be imposed and the desired law of motion. The tool returns the EC optimal values of the IM, and different optimization processes can be compared. The method has proved to be more efficient for trajectories with a smooth law of motion, such as the harmonic or the 5th-order polynomial, than for discontinuous laws, reaching EC savings ranging from about 2% to more than 13%. This novel tool allows energy-intensive motion laws to be strongly optimized, making them competitive with the trapezoidal speed law commonly used in industrial plants. The presented GUI provides a seamless connection between the state of the art and Offline Programming and Computer-Aided Robotics tools. The experimentation and validation of the results have been carried out on one industrial robot only. To extend the work to other models, parameter identification needs to be performed for each of them. Therefore, in future works, the optimization of different IMs will be implemented, and experimental validation of the obtained results will be carried out. Also, to be exploited in standard multi-robot industrial cells, the presented algorithm needs to be integrated into the controller logic, thus offering interesting advantages for automated robotic systems. The user-friendliness of the tool may be improved by connecting it to other Offline Programming tools for


robotic cells, and experimentation may be extended to robots that allow free motions and lower payloads, e.g., cobots.

References

1. Sustainable Development, United Nations: The 17 Goals. https://sdgs.un.org/goals. Accessed 14 Jan 2020
2. Palomares, I., et al.: A panoramic view and SWOT analysis of artificial intelligence for achieving the sustainable development goals by 2030: progress and prospects. Appl. Intell. 51(9), 6497–6527 (2021). https://doi.org/10.1007/s10489-021-02264-y
3. Meike, D., Pellicciari, M., Berselli, G., Vergnano, A., Ribickis, L.: Increasing the energy efficiency of multi-robot production lines in the automotive industry. In: IEEE International Conference on Automation Science and Engineering, pp. 700–705 (2012)
4. Trianni, A., Cagno, E., Accordini, D.: Energy efficiency measures in electric motors systems: a novel classification highlighting specific implications in their adoption. Appl. Energy 252, 113481 (2019)
5. Visinka, R.: Chapter 2 - Energy efficient three-phase AC motor drives for appliance and industrial applications. In: Goldberg, L.H., Middleton, W. (eds.) Green Electronics/Green Bottom Line, pp. 29–42. Butterworth-Heinemann, Woburn (2000)
6. De Almeida, A., Fong, J., Brunner, C., Werle, R., Van Werkhoven, M.: New technology trends and policy needs in energy efficient motor systems - a major opportunity for energy and carbon savings. Renew. Sustain. Energy Rev. 115, 109384 (2019)
7. Maimon, O., Profeta, E., Singer, S.: Energy analysis of robot task motions. J. Intell. Robot. Syst. 4, 175–198 (1991)
8. Gadaleta, M., Berselli, G., Pellicciari, M.: Energy-optimal layout design of robotic work cells: potential assessment on an industrial case study. Robot. Comput.-Integr. Manuf. 47, 102–111 (2017)
9. Gleeson, D.: Generating optimized trajectories for robotic spray painting. IEEE Trans. Autom. Sci. Eng., 1–12 (2022)
10. Park, J.: Motion profile planning of repetitive point-to-point control for maximum energy conversion under acceleration conditions. Mechatronics 6(6), 649–663 (1996)
11. Sergaki, E., Stavrakakis, G., Pouliezos, A.: Optimal robot speed trajectory by minimization of the actuator motor electromechanical losses. J. Intell. Robot. Syst. 33, 187–207 (2002)
12. Huang, M.S., Hsu, Y.L., Fung, R.F.: Minimum-energy point-to-point trajectory planning for a motor-toggle servomechanism. IEEE/ASME Trans. Mechatron. 17(2), 337–344 (2012)
13. Field, G., Stepanenko, Y.: Iterative dynamic programming: an approach to minimum energy trajectory planning for robotic manipulators. In: IEEE International Conference on Robotics and Automation, vol. 3, pp. 2755–2760 (1996)
14. Pellicciari, M., Berselli, G., Leali, F., Vergnano, A.: A minimal touch approach for optimizing energy efficiency in pick-and-place manipulators. In: IEEE International Conference on Advanced Robotics, pp. 100–105 (2011)
15. Hovgard, M., Lennartson, B., Bengtsson, K.: Applied energy optimization of multi-robot systems through motion parameter tuning. CIRP J. Manuf. Sci. Technol. 35, 422–430 (2021)


16. Berselli, G., Balugani, F., Pellicciari, M., Gadaleta, M.: Energy-optimal motions for servo-systems: a comparison of spline interpolants and performance indexes using a CAD-based approach. Robot. Comput.-Integr. Manuf. 40, 55–65 (2016)
17. Gadaleta, M., Pellicciari, M., Berselli, G.: Optimization of the energy consumption of industrial robots for automatic code generation. Robot. Comput.-Integr. Manuf. 57, 452–464 (2019)
18. Siciliano, B., Sciavicco, L., Villani, L., Oriolo, G.: Robotics: Modelling, Planning and Control. Springer Science & Business Media (2010). https://doi.org/10.1007/978-1-84628-642-1
19. Paes, K., Dewulf, W., Elst, K.V., Kellens, K., Slaets, P.: Energy efficient trajectories for an industrial ABB robot. Procedia CIRP 15, 105–110 (2014)
20. Gasparetto, A., Boscariol, P., Lanzutti, A., Vidoni, R.: Trajectory planning in robotics. Math. Comput. Sci. 6(3), 269–279 (2012)
21. Hollerbach, J.: Dynamic scaling of manipulator trajectories. In: IEEE American Control Conference, San Francisco, CA, USA (1983)
22. KUKA System Software 8.3: Operating and Programming Instructions for System Integrators. https://www.kuka.com. Accessed 14 Jan 2020
23. Meike, D.: Increasing Energy Efficiency of Robotized Production Systems in Automobile Manufacturing. PhD thesis, Riga Technical University (2013)
24. Oliva, E., Berselli, G., Pini, F.: Dynamic identification of industrial robots from low-sampled data. Appl. Mech. Mater. 328, 644–650 (2013)
25. Grotjahn, M., Daemi, M., Heimann, B.: Friction and rigid body identification of robot dynamics. Int. J. Solids Struct. 38(10), 1889–1902 (2001)
26. Corke, P.: Robotics, Vision and Control: Fundamental Algorithms in MATLAB, 2nd edn. Springer (2017)
27. Pellicciari, M., Berselli, G., Balugani, F.: On designing optimal trajectories for servo-actuated mechanisms: detailed virtual prototyping and experimental evaluation. IEEE/ASME Trans. Mechatron. 20(5), 2039–2052 (2015)

Experimentation and Characterization of Mycelium-Based Biocomposites for Ephemeral Product Applications

Ignacio López-Forniés1(B), Rosa Aurensanz Mallada1, Aroa Ejarque-Ortiz2, and María A. Madre Sediles1

1 Zaragoza University, 50018 Zaragoza, Spain

[email protected] 2 Laboratorios de Innovación Abierta LIA-CESAR, 50003 Zaragoza, Spain

Abstract. Single-use plastics, due to their ephemeral nature, are a problem for sustainability. To overcome this difficulty, biomaterials are being created. A biocomposite based on mycelium and six different substrates has been developed to study its characteristics and possible applications in the substitution of ephemeral plastic-based products. The use of six leftovers or biological wastes allows results to be compared between samples, to determine which leftovers or wastes are revalued instead of being eliminated, in accordance with the principles of the circular economy (CE). Samples and specimens were developed for laboratory tests to characterize density, water absorption and compression. All tests were carried out according to the norms and standards that correspond to the plastics they can replace. Results indicate that the six types have very low densities, even lower than some polymeric foams. They present good compressive strength, with values similar to expanded polystyrene (EPS). In addition, all the samples are biodegradable since they do not require any type of coating, they can take a wide variety of shapes, and the molds can be produced using various manufacturing techniques. Potential applications are found in the packaging industry, since ephemeral containers require a certain resistance and low weight, being able to replace EPS or alveolar (bubble-wrap) film. Being an inert material, it could be used in food packaging and even replace some applications of cardboard or paper pulp.

Keywords: Biocomposites · Mycelium · Ephemeral products · Sustainability · Eco-design

1 Introduction

Currently, one of the main obstacles to sustainability is the indiscriminate use of single-use plastics, which are ephemeral due to their limited use and very short life cycle [1]. A high percentage of garbage is plastic, and most of it comes from single-use containers and all kinds of wrapping and packaging [2]. In July 2021 the European Union banned the use of some single-use plastic items [3]; this law includes the prohibition


of containers and cups for food and beverages made of EPS, including their lids and plugs. Another alternative to the prohibition of the use of this type of plastic is recycling, although it has various disadvantages. According to Ecoembes [4], one in four items of waste placed in the yellow container was not correctly sorted. Also, plastic cannot be recycled an infinite number of times, as the molecular chains degrade. For these reasons, some types of plastics cause a serious environmental problem that must be solved through the exploration of other alternatives. One of the main plastics in landfills is EPS, which has good qualities such as versatility and ease of shaping, shock absorption and lightness. In addition, EPS has very good mechanical resistance and is a good insulator against cold and heat [5]. There are already studies on the substitution of these polluting materials in containers of various types and on the characteristics of substitute materials of an organic nature [6–9]. The increased demand for green materials has given rise to a large number of studies on biocomposites [10, 11]. Biocomposites are defined as composite materials where biopolymers form the matrix and natural fibers, usually of organic origin, act as the reinforcement. A very important factor of biocomposites is the possibility of taking advantage of biological remains or residues such as shells, fibers or residual stems. In this way, the remains or waste are recovered instead of being discarded, in accordance with the principles of the CE [12]. The circular economy is defined as a production and consumption process that involves sharing, renting, reusing, repairing, renewing and recycling existing materials and products as many times as possible to create added value, so that the life cycle is extended. It implies reducing waste to a minimum, and for this reason the promotion of this type of material favors this process [13]. This study presents the production and characterization process of a biocomposite, as well as the tests performed in the biology laboratory of the LIA-CESAR laboratories of the University of Zaragoza at the Etopia Center for Art and Technology. The aim of the study is to design and experiment with a new biocomposite that can replace, in some applications, plastic materials with a strong environmental impact. Mycelium-based biocomposites contribute to the principles of the CE since hardly any new raw materials are used, the production process requires a low energy input and they are biodegradable. To achieve this, the material is characterized by means of mechanical tests. The expected result is a new material tested with six different fibers, and the definition of its characteristics and potential applications in the design of environmentally friendly products. Another objective is to determine which organic waste shows the best behavior. After analyzing the results, their characteristics are found to be similar to those of single-use plastics, thus being suitable for ephemeral product applications that can be replaced in order to approach the CE.

2 Materials and Methods

New materials that respect the environment are emerging, some of them being biocomposites made from mycelium, the vegetative body of a fungus. Mycelium-based biocomposites use a natural organism, the fungus, for the matrix and organic recycled materials for the filler or reinforcement; the production process requires low energy


input and they are biodegradable, being an ideal compound for the CE. In the scientific literature, some works have studied characteristics such as density, thermal conductivity, Young's modulus, compressive strength, flexural strength, tensile strength, response to exposure to moisture or response to immersion in water, among others [10, 14, 15]. The culture of the mycelium has also been studied to refine the process and achieve the best results [16, 17], including engineering, architecture and design applications [16, 18], articles already marketed for home cultivation [19], consumer products [20] and registered materials such as Mycocomposite [19], Mylo [21] or Fungicel [22]. The methodology followed during this process starts with the production of six biocomposites resulting from the growth of the Ganoderma Lucidum fungus on a filler material, which in all cases is an organic substrate: (a) wood chips, (b) straw, (c) pellets, (d) cotton from old T-shirts, (e) peanut shell and (f) cardboard. The objective is to submit each of the composite materials to three physical tests: compressive strength, hardness and water absorption; the density is also determined. With the data obtained, their properties are known and comparisons can be made between them and the plastic materials to be replaced.

2.1 Regulations, Tests and Specimens

Prior to the growth process of the material, the tests to which it will be subjected are defined. Since there is no specific regulation for this material, the same tests to which polymeric foams such as EPS are subjected are carried out. These tests are compressive strength, hardness and water absorption, including the calculation of density. The specimens of the material used in the three tests are created with a shape conditioned by the dimensions of the laboratory machines for the tests. To define a specific geometry, the standard ISO 844:2021 "Rigid cellular plastics - Determination of compression properties" and its Spanish version UNE-EN ISO 844 are consulted. According to the standard, the specimens must have a thickness of (50 ± 1) mm and a base that is either square or circular, with a minimum area of 25 mm2 and a maximum of 230 mm2. The plate of the compression machine is 45 mm in diameter. It is therefore determined that the specimens should be cylindrical, with a base of 45 mm in diameter and 50 mm in height. Before the start of the project, a first version of the mold is made in the fabrication laboratory of LIA-CESAR in Etopia. In this mold, made by 3D printing, the biocomposite specimens are grown. These first samples are subjected to compressive strength and hardness tests to check if the selected dimensions are correct. Once the dimensions of the specimens are verified, a final version of the mold is made.


2.2 Culture and Growth of the Specimens

For the growth phase, the biocomposite mixture must contain 80% cellulosic sugars and 20% proteins, and approximately twice their weight of water is added to the mixture (Table 1). Sugars are provided by organic lignocellulosic residues, and proteins are added in the form of bran, brown rice flour or coffee waste. Three samples for each substrate are grown as experimental replicates to obtain more accurate results in the physical tests. It should be noted that in several substrates the protein source is a wholemeal flour (called BRF, brown rice flour). Table 1 shows the proportions of matter for each substrate. Figure 1 and Table 2 show the cultivation and growth process.

Table 1. Proportion of water in the substrates

Sugars (80%)                          Proteins (20%)               Water
10 g wood chips and 10 g wood dust    2.5 g bran and 2.5 g BRF     50 g
10 g straw and 10 g wood dust         2.5 g bran and 2.5 g BRF     50 g
20 g cotton fibers                    2.5 g bran and 2.5 g BRF     50 g
20 g cardboard                        5 g ground coffee            60 g
40 g pellet                           10 g ground coffee           150 g
25 g peanut shell                                                  25 g
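As a quick sanity check of the recipe, the short sketch below verifies, for the wood-chip substrate of Table 1, that the dry mix respects the 80/20 sugar-to-protein ratio and that roughly twice its weight of water is added; the values are taken directly from the table.

% Recipe check for the wood-chip substrate (values from Table 1)
sugars   = 10 + 10;            % g: wood chips + wood dust
proteins = 2.5 + 2.5;          % g: bran + BRF
water    = 50;                 % g
dry      = sugars + proteins;
fprintf('Sugars: %.0f%%, proteins: %.0f%%, water/dry-matter ratio: %.1f\n', ...
        100*sugars/dry, 100*proteins/dry, water/dry);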

Fig. 1. Cultivating and growing process of mycelium-based composite. Based on [10]


Table 2. Cultivating and growing process of mycelium-based composite

Stage                                        Actions
1. Preparation of plates with Agar Fungi     Substrate: 2.3 g agar; 4 g malt extract; 0.4 g yeast extract; 0.4 g peptone. Boil the mixture. Pour into a Petri dish
2. Reproduction of the fungus in Agar Fungi  Selection of Ganoderma Lucidum. Place mycelium of the fungus in the Petri dish. Introduce in the incubator at 22–25 ºC
3. Preparation of the wheat grain            Wash and hydrate the wheat grains for 24 h. Boil the grains for 15 min. Sterilize the grains in the autoclave
4. Colonization of the wheat grain           Cut 1 cm2 squares of Ganoderma Lucidum agar plate culture. Put 3 or 4 squares in the grain container and shake. Incubate for 4 to 6 days at 22 ºC for the mycelium to colonize the grain
5. Preparation of the substrates             Cut the substrates in a glass blender. Mix the cut substrates with water. Pour the mixture into a container and sterilize by autoclaving at 121 ºC for 25 min
6. Mix substrate with fungus                 Introduce the colonized grain (5%) into the jars with the substrate. Incubate for 10 to 14 days (depending on the substrate) at 22 ºC
7. Shape the material                        Crumble the colonized substrate in a tray. Introduce the colonized substrate into the mold, pressing it with a piece of wood. Incubate for 6 to 8 days (depending on the substrate) at 22 ºC. Put the mold in a plastic zip-lock bag, slightly open at one end to keep the humidity while letting air flow. Incubate outside the mold for about 5 days
8. Bake and dry                              Bake for 1 h at 80–90 ºC (taking care not to burn). Finish drying outside for a further 7 days

2.3 Description of the Tests

Water Absorption
To carry out this test, the standards ISO 15148:2002 "Hygrothermal performance of building materials and products - determination of water absorption coefficient by partial immersion" and ISO 15148:2002/AMD 1:2016 "Hygrothermal performance of building materials and products - determination of water absorption coefficient by partial immersion - Amendment 1" have been followed. As the norm indicates, a metal grid is placed in a plastic container; the grid is always parallel to the base and at the same distance from the bottom. The water level must be kept 5 mm above the highest point of the grid. The container is filled with water up to the indicated limit and the 6 samples are placed on the grid. A strip of wood is placed over the samples to prevent movement, as the samples could float on the water.


Eight measurements are taken over 24 h (including the initial weighing): after 5 and 20 min and after 1, 2, 5, 8 and 24 h from the moment the samples are placed on the grid. The test gives rise to graphs (Fig. 2) where the amount of water absorbed by the material is observed as the test progresses, obtained by plotting the following parameters against each other: ΔMt (kg/m2), the difference in mass per unit area between successive weighings, and √t (s0.5), the square root of the time at which each weighing is performed.

Compressive Strength
For the compression test the Instron 5565 machine is used, following the standard ISO 844:2021 "Rigid cellular plastics - Determination of compression properties". Based on the experience of the prior tests, the machine is programmed to interrupt the test when it reaches a load of 4000 N; although the previous test was carried out only with chip samples, it was observed that around this value the samples stopped reducing their height. The test speed is 5 mm/min, following the UNE-EN ISO 844 standard, which states that "the sample is compressed at a speed as close as possible to 10% of its original thickness per minute". Since the sample is 50 mm high, the speed is 5 mm/min; at this speed the samples showed optimal behavior throughout the test. According to the standard, the material can behave in two ways. Mode A: the specimen reaches a maximum force and breaks. Mode B: there is no maximum point and the specimen is compressed without breaking. Some specimens show behavior A and others B. The standard establishes the parameters to be calculated depending on which behavior occurs. In order to compare the test results for each material and follow a single criterion, it is established that the parameters corresponding to Mode B will be calculated. According to Mode B, the parameters to be calculated are the compressive stress at 10% relative deformation, σ (compressive strength, MPa), and the modulus of elasticity or Young's modulus, E (MPa).

Hardness
To carry out the hardness test, an analog Shore A scale hardness tester is used, since this is the scale used for soft polymeric materials. Six to nine measurements are taken, depending on the material of the sample. The measurements are taken on the flat faces; depending on the substrate, the surface is more or less uniform, so it is convenient to take many measurements if it is irregular. The final hardness result is the average of the measurements obtained. The hardness test is carried out again after the compression test to check how the hardness changes when the material is subjected to a load, in this case 4000 N. In this way, possible properties and applications can be identified if the material is subjected to compression processes.
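For clarity, the evaluation of the two main tests can be sketched as follows; the mass-gain and force-displacement vectors are placeholders (not measured data), while the specimen geometry matches the 45 mm × 50 mm cylinders described above and the linear region assumed for the modulus is an illustrative choice.

% Sketch: post-processing of the water absorption and compression tests
A = pi*(0.045/2)^2;                            % specimen base area [m^2]
% --- Water absorption: mass gain per unit area vs sqrt(time) ---
t_s = [5 20 60 120 300 480 1440]*60;           % weighing times [s]
dm  = [1.2 2.0 3.1 4.0 5.8 6.9 9.5]*1e-3;      % mass gain [kg] (placeholder)
Aw  = polyfit(sqrt(t_s), dm/A, 1);             % slope = absorption coefficient [kg/(m^2 s^0.5)]
% --- Compression (Mode B): stress at 10% strain and Young's modulus ---
disp_mm = (0:0.5:10)';                         % crosshead displacement [mm] (placeholder)
F_N     = 55*disp_mm.^1.2;                     % force [N] (placeholder curve)
strain  = disp_mm/50;                          % relative deformation (50 mm height)
stress  = F_N/(A*1e6);                         % stress [MPa]
sigma10 = interp1(strain, stress, 0.10);       % compressive stress at 10% strain [MPa]
p       = polyfit(strain(strain<=0.04), stress(strain<=0.04), 1);  % assumed linear region
E       = p(1);                                % modulus of elasticity [MPa]
fprintf('Aw = %.3g kg/(m^2 s^0.5), sigma10 = %.2f MPa, E = %.2f MPa\n', Aw(1), sigma10, E);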

3 Results

The 6 biocomposites present average density values between 0.19 and 0.37 g/cm3. EPS can have values between 0.02 and 0.31 g/cm3, or even higher depending on the application, which places the biocomposite within this range, so it can replace EPS in applications where low density is required (Table 3).


Table 3. Mean values for density

Sample            (a) Chips  (b) Straw  (c) Pellet  (d) Cotton  (e) Peanut  (f) Cardboard  EPS 50  EPS 250
Density (g/cm3)   0.23       0.19       0.30        0.37        0.27        0.37           0.02    0.31

The first test is the water absorption test, as it is not destructive. The density of each sample is calculated at the same time as the last two tests, once the weight has stabilized, to ensure that the dimensions of the sample are not altered further. For water absorption, the results show that all substrates behave differently, as seen in Fig. 2. Both absorption coefficients are compared with the density of the samples to check if there is a relationship between the two characteristics. It can be said that denser substrates tend to absorb more water. In the compression test (Tables 4 and 5), the samples show two different behaviors. Pellet, peanut and cardboard based samples disaggregate or break, and the test is interrupted before reaching the 4000 N load. Both pellets and peanuts disaggregate, whereas the cardboard substrate compresses without disaggregating but cracks inside. The other samples are compressed without breaking: wood chips, straw and cotton support the 4000 N load (Fig. 3). The cotton substrate is the one with the highest compressive stress at 10% strain (0.38 MPa). The hardness test was carried out on the specimens both before and after submitting them to the compression test. This allows checking how this characteristic varies. The Shore A value ranges from 43 for straw to 68 for cardboard. Relating hardness and density, it can be seen that the higher the density, the greater the hardness of the samples.

Fig. 2. Average water absorption (over 1 day)

The graphical and numerical results of the compressive strength test are shown in Figs. 4 and 5. The results correspond to the average of the samples for each filler material.


Fig. 3. Images of the tests of the different samples

Table 4. Compression resistance test

Filler material   Behavior                                                                                                                                          Pass
(a) Chips         Compressed sample, but without presenting any internal rupture. Only the outer faces are slightly cracked                                        Yes
(b) Straw         Compressed sample, but without presenting any internal rupture. Slightly cracked outer faces. The final result is very similar to that of the chips, as the substrates are very similar   Yes
(c) Pellet        The test is interrupted before reaching 4000 N; the sample breaks and disaggregates                                                              No
(d) Cotton        Very good behavior. It decreases its height but without suffering breaks, neither internal nor on the external faces                             Yes
(e) Peanut        Good compression behavior in the first moments of the test. It eventually cracks and breaks                                                      Yes
(f) Cardboard     Internal break before the end of the test, but the sample is hard and solid and does not show breaks on the external faces                       No

Table 5. Values resulting from the compression test

                              (a) Chips  (b) Straw  (c) Pellet  (d) Cotton  (e) Peanut  (f) Cardboard
Compressive strength (MPa)    0.11       0.08       0.13        0.38        0.17        0.13
Modulus of elasticity (MPa)   1.76       1.16       2.09        2.71        2.17        1.81

4 Discussion

The low density of this mycelium-based biomaterial is one of its most attractive aspects. However, this characteristic cannot be finely controlled in the growth process of the material and varies significantly in the phase in which it is introduced into the mold. Being a manual process, it varies from one test to another, and if it were taken to industrialization it would be necessary to carry out an ad hoc quality control. This characteristic is very favorable for the manufacture of packaging products, which must be as light as possible. Taking into account that the lower the density, the lower the


compressive strength, these products must balance both properties: packaging must be light but resistant to shocks.

Fig. 4. Extension in the compression test

From the Stress (MPa) - Average Strain graph (Fig. 4) the following conclusions can be drawn. The samples present two different behaviors, as stated in the standard. Some disaggregate, in which case the test must be interrupted (pellets and peanuts), while other samples are compressed without breaking and the graph does not reach a maximum. The cotton substrate is the one that presents the highest compressive stress at 10% strain (0.38 MPa), with no breaks. The chips and straw substrates do not break either. The cardboard substrate is compressed without disaggregating, but it cracks inside. Therefore, when choosing a substrate, it is also possible to consider whether the part made with the biocomposite must be sacrificed or its integrity must be maintained. Cotton also has the highest modulus of elasticity, followed by peanuts and pellets. Thus, a higher stress than in the rest of the substrates is needed to produce the same deformation. When comparing density and compressive strength, the graph (Fig. 5) shows that the substrates follow a slight trend: the lower the density, the lower the compressive stress, and vice versa. Regarding the relationship between density and elasticity modulus, they show a similar behavior, with the exception of the cardboard and pellet. The lower the density, the lower the modulus of elasticity, and vice versa. The cotton substrate appears to be a good material to replace some EPS parts due to its high compressive strength relative to its low density. This could be because the fibers are small pieces of fabric that, despite being mixed with water, do not break and, by joining the mycelium, create a resistant mesh. On the other hand, chips and straw are also compressed without breaking, becoming a more compact material because the fibers of these substrates are small but elongated. In the material preparation phase, in which they are mixed with water, they do not fall apart and maintain their integrity so that,


Fig. 5. Deformation in the compression test

together with the mycelium, they create a material that is compressed but does not break. On the other hand, the cardboard samples have larger fibers than the rest, but when they get wet they dissolve, which can influence their compressive strength. Finally, the peanut and pellet samples break up and disaggregate. The fibers of the latter are short and do not facilitate the union with the mycelium, making them less resistant and durable. For the manufacture of containers with mycelium, the ideal properties are high resistance to compression and low density. The density slightly affects the compressive strength: the higher the density, the greater the resistance (Fig. 6).

Fig. 6. Relationship between density and compressive strength


The type of fibers of each substrate also has an influence. The best option is a fabric such as cotton, since its intertwined fibers provide great resistance, although it is the densest sample. This property can be controlled during the growth phase of the material being molded. By reducing the pressure when the material is fed into the mold, the density can be reduced. It would be necessary to verify whether this variation in the manufacture of the material worsens the properties in terms of compressive strength. On the other hand, although chips and straw have lower compressive strength values, their low density makes the balance between these two properties ideal for the manufacture of some products. This shows that elongated and fine fibers provide good properties to this compound. Some properties of the developed samples should be tested in future works, since mycelial materials possess certain fire-retardant properties and could be used as an economical, sustainable and fire-safer alternative to synthetic polymers [23]. Electron microscope images will help to understand the growth and to select the best substrates.

5 Conclusions

The 6 types of the mycelium biomaterial have a very low density, even lower than some polymeric foams. They show good resistance to compression, with some substrates reaching values similar to EPS. All samples are biodegradable since no coating is applied, they can take a wide variety of shapes, and the molds can be produced using various manufacturing techniques. The behavior in the tests is conditioned by the type of fibers of each substrate. In terms of resistance to compression, the cotton substrate is the one with the best behavior, since its fibers come from a previously woven material, which makes it very resistant. Those with the smallest fibers break and therefore disaggregate. The density of the samples influences their hardness: the denser samples have higher hardness values, and vice versa. This is because porosity causes the durometer to encounter less resistance in the test. It can be seen how, by compressing the material and reducing the air inside it, the hardness of the samples increases. The density of the samples is not significantly related to the water absorption coefficients; it seems to have more to do with the water absorption capacity of the substrate itself. For samples to have higher hardness values, the density must increase. So that the products made with this material do not suffer plastic deformations on their surface, a balance must be sought between density, compressive strength and hardness. On the other hand, the material should have low water absorption coefficients so that it remains stable in high-humidity working conditions. For all these reasons, we can conclude that mycelium-based biocomposites have the potential to be applied in containers and packaging that require a certain resistance and low weight, to replace EPS or other plastic foams. In addition, since these biocomposites are inert materials, they could be used in food packaging and even replace some applications of paper or cardboard pulp used in electrical or electronic devices.


References

1. López-Forniés, I., Sierra-Pérez, J.: Ephemeral products: opportunities for circularity based on ideation for reuse. An experience. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 365–372. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_37
2. Hopewell, J., Dvorak, R., Kosior, E.: Plastics recycling: challenges and opportunities. Philos. Trans. R. Soc. Lond. B Biol. Sci. 364(1526), 2115–2126 (2009)
3. European Community: Los plásticos de un solo uso, prohibidos a partir de 2021 (2019)
4. Ecoembes: Ecoembes (2022). https://www.ecoembes.com/es. Accessed 24 Jan 2022
5. Anape: Poliestireno Expandido EPS. Producto (2022). http://www.anape.es/index.php?accion=producto. Accessed 24 Jan 2022
6. Landi, D., Cicconi, P.: An approach for the responsible design of the packaging employed in household appliances. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 381–389. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_39
7. Landi, D., Postacchini, L., Cicconi, P., Ciarapica, F.E., Germani, M.: Study and design of sustainable packaging for household hoods. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 51791, p. V004T05A037 (2018)
8. Miralbes, R., Ranz, D., Zouzias, D.: Study of the use of sawdust and mycelium composite as a substitute of EPS. In: Roucoules, L., Paredes, M., Eynard, B., Morer Camo, P., Rizzi, C. (eds.) JCM 2020. LNME, pp. 67–72. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-70566-4_12
9. Plumed, A., Ranz, D., Miralbes, R., Vargas, G.: Enhanced material-driven design methodology: Luffa Cylindrica's case. In: Roucoules, L., Paredes, M., Eynard, B., Morer Camo, P., Rizzi, C. (eds.) JCM 2020. LNME, pp. 182–187. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-70566-4_29
10. Elsacker, E., Vandelook, S., Brancart, J., Peeters, E., De Laet, L.: Mechanical, physical and chemical characterisation of mycelium-based composites with different types of lignocellulosic substrates. PLoS ONE 14(7), e0213954 (2019)
11. Castillo Ordoñez, K.H., Ceron Meneses, Y.P.: Elaboración de un material compuesto a partir de residuos lignocelulósicos por colonización fúngica de Pleurotus Ostreatus. Universidad del Cauca (2019)
12. Girometta, C., et al.: Physico-mechanical and thermodynamic properties of mycelium-based biocomposites: a review. Sustainability 11(1), 281 (2019)
13. European Community: Economía circular: definición, importancia y beneficios (2015)
14. Appels, F.V.W., et al.: Fabrication factors influencing mechanical, moisture- and water-related properties of mycelium-based composites. Mater. Des. 161, 64–71 (2019)
15. Islam, M.R., Tudryn, G., Bucinell, R., Schadler, L., Picu, R.C.: Morphology and mechanics of fungal mycelium. Sci. Rep. 7(1), 13070 (2017)
16. Jones, M., Huynh, T., Dekiwadia, C., Daver, F., John, S.: Mycelium composites: a review of engineering characteristics and growth kinetics. J. Bionanosci. 11(4), 241–257 (2017)
17. Antinori, M.E., Ceseracciu, L., Mancini, G., Heredia-Guerrero, J.A., Athanassiou, A.: Fine-tuning of physicochemical properties and growth dynamics of mycelium-based materials. ACS Appl. Bio Mater. 3(2), 1044–1051 (2020)
18. Attias, N., et al.: Mycelium bio-composites in industrial design and architecture: comparative review and experimental analysis. J. Clean. Prod. 246, 119037 (2020)
19. Ecovative LLC: ECOVATIVE (2022). https://ecovative.com/. Accessed 27 Jan 2022
20. MOGU S.r.l.: Mogu (2022). https://mogu.bio/. Accessed 27 Jan 2022


21. Bolt Threads Inc.: Mylo (2022). https://www.mylo-unleather.com/. Accessed 27 Jan 2022
22. PolyBion: Fungicel (2022). https://polybion.mx/. Accessed 27 Jan 2022
23. Jones, M., et al.: Thermal degradation and fire properties of fungal mycelium and mycelium-biomass composite materials. Sci. Rep. 8(1), 17583 (2018)

Life Cycle Assessment of a Mobile Tiny House Made with Sustainable Materials and Design Implications

Laura Ruiz-Pastor(B), Stefania Altavilla, Chiara Nezzi, Yuri Borgianni, and Guido Orzes

Free University of Bozen-Bolzano, Piazza Università, 1, 39100 Bolzano, BZ, Italy [email protected]

Abstract. Nowadays, relevant design challenges include the need to use sustainable materials that allow designing products with a lower environmental impact. The construction sector is currently undergoing a slow but continuous change towards the use of sustainable materials. One of the most generalized methods for assessing sustainability is the Life Cycle Assessment (LCA), which aims to analyze and compare product alternatives to minimize the environmental impact of a product or a process. In this work, the LCA method has been applied to a mobile tiny house prototype built with sustainable materials, such as hemp bricks or wood. The ISO 14040 and the EN 15804 standards were followed. The life stages calculated are hemp cultivation and processing, production of the hemp brick, construction of the tiny house and transportation. The results show that the most significant impact comes from the production of titanium sheet metal, wood, bricks, and the transport of raw materials. The results suggest that hemp bricks are a sustainable alternative, but they need to be combined with the right manufacturing and transportation processes. This research offers insights into how to introduce sustainability in the building sector through early design decisions, such as the selection of materials. Keywords: Life Cycle Assessment · Tiny house · Sustainable design · Biomaterials

1 Introduction

Sustainability promotes the optimization of resources to make the best possible use of them [1], so that human needs are satisfied without compromising the resources provided by ecosystems [2]. To pursue sustainability goals, especially in the fields of product and service design, design engineering is one of the most important drivers [3]. This applies to the construction sector and to the design of architectural spaces too. Design decisions determine how products will be used, what their end of life will be, and the actions to be taken regarding the handling of resources. These intertwined requirements can be managed by considering sustainability and user behavior from the beginning of the design process through strategies, such as the ones proposed in [4] or [5].


Introducing sustainable features in products and buildings gives rise to new challenges. Designers have to identify the most suitable mix of materials that allows an optimal and sustainable performance while, at the same time, ensuring competitive prices for the purchaser. Assessment procedures are therefore crucial in this context; several metrics and indicators for sustainability assessment have been developed accordingly [6–10]. One of the most acknowledged methods in both the product and construction sectors is the Life Cycle Assessment (LCA). LCA is a quantitative assessment approach whose main objective is to analyze technical systems, such as products and processes, in terms of their environmental impact. As the name suggests, a lifecycle approach is considered in LCA. To assess sustainability in buildings, all the lifecycle stages should be taken into consideration, from the extraction and manufacturing of raw materials to the end of life and dismantling [11]. The main objective of the assessment of buildings' sustainability is to obtain information for decision-making during all the stages of a building [12]. However, many contributions in the building sector consider a restricted number of lifecycle stages and operations, e.g., [13, 14]; this paper is no exception. One of the reasons is the fact that the expected life of buildings is several decades, so that simulations are needed. In addition, the study of a limited set of phases is sometimes sufficient to make decisions on the materials to be chosen, despite the fact that operational phases are typically attributed the major environmental impacts [15–17]. In other words, reliable analyses can be carried out by using primary data if a prototype is built. The same approach is followed in this work, where LCA is applied to a prototype of a 25-square-meter mobile tiny house; details are provided in Sect. 2. To run the LCA, the stages regarding the production and manufacturing of the house and the construction of the prototype are analyzed in the paper (see Sect. 3). Section 4 is devoted to presenting the data inventory. The LCA outcomes are presented in Sect. 5, along with a comparison of the results with other case studies, which was enabled by choosing one square meter of floor as the functional unit of the analysis. As highlighted in the concluding Sect. 6, the outcomes are considered sufficiently informative to steer early design stages, where the most important features of the product are defined, including materials [18]. This is very useful, as decisions and changes can be easily made with limited costs and consequences in the front end of the design process [19].

2 Context of the Work

The case study for the LCA analysis was a prototype of a mobile house that has been designed and constructed to represent a sustainable example of building, due to the materials used and their local origin. The construction of the prototype is part of the project Tiny FOP MOB, as detailed in the acknowledgements. The name of the project is used hereinafter to indicate the prototype too. The aim was to create a Real-world Laboratory that could be moved to different locations in the Vintschgau Valley, Italy, for evaluation purposes. This explains the mobility of the tiny house, which was achieved by means of a trailer, which, as a consequence, limited its size. Figure 1 shows the exterior (a) and interior (b) of the tiny house. The prototype has a total weight of approximately 12 tons. Its structure is made primarily of hemp bricks, used to construct load-bearing walls, and wood for the frame,


floor, roof and external coating. The bricks have been assembled using a natural mortar. The interior surface has been finished using plaster made of hemp fiber and natural hydraulic lime. Spruce wood has been used for the frame, beams and screed, while larch has been employed for the floor, the false ceiling and the external cladding. Other materials used are a galvanized titanium sheet for the roof and a vapor barrier made of wood fiber.

Fig. 1. External (a) and internal (b) view of the mobile tiny house.

3 Methodology, Goal and Scope of the Study

The LCA has been conducted following the ISO 14040 and the EN 15804 standards. Specifically, the EN 15804 was developed for the LCA of building materials and provides a basis for Environmental Product Declarations. The goal of the research is to calculate the life cycle impact of the Tiny FOP MOB. The defined functional unit is a square meter of floor area of the prototype. EN 15978 describes several stages of a building life cycle. Stage A includes all the activities from raw material extraction to building construction. Stage B addresses the useful life of the building and its maintenance and repair processes. Stage C includes the end of life, dismantling and eventual recycling of materials. Stage D accounts for the potential positive impact of reusing materials and components after the end of life. Based on EN 15978, this paper identifies the system boundaries in all the activities leading to the construction of the Tiny FOP MOB, from the production of its raw materials. The life cycle phases analyzed are reported below:

• A1a: Hemp cultivation, harvesting and transportation to the transformation site;
• A1b: Processing of hemp shives for the production of the brick;
• A2: Transport of the hemp shives to the brick production site;
• A3: Production of the hemp brick;
• A4: Transport of all raw materials to the construction site;
• A5: Construction of the tiny house.

The considered stages of the prototype lifecycle are presented in detail in the subsections that follow. Figure 2 reports them graphically, along with the data used as input for the LCA analysis.


As mentioned, the decision to focus on the early stages of the prototype's life cycle is primarily motivated by the aim of examining how these initial design choices can affect the environment. Moreover, since the hemp brick used in the construction of the Tiny FOP MOB is a new material on the market, there are no actual end-of-life data, which prevents the full consideration of the phases indicated in EN 15978.

Fig. 2. LCA phases and inputs/outputs considered in the process of creation of the tiny house prototype for the environmental assessment of 1 square meter of floor surface

4 Data Collection and Life Cycle Inventory

The data has been collected through semi-structured interviews with the main project partners, i.e. the producer of the hemp bricks and the Tiny FOP MOB's constructor.


Further information has been obtained through the analysis of the literature. Data on means of transport, energy consumption and the production of some raw materials (e.g. wood and lime) have been retrieved from the Ecoinvent version 3.8 database, which includes information on the environmental impacts of various industrial and agricultural processes.

4.1 Hemp Cultivation-Stage A1a

For the cultivation of hemp shives, which are the essential element in the mixture for the bricks, a typical Central European cultivation context has been considered. In particular, reference has been made to a study that analyses hemp cultivation in Austria [20], in line with the location of the supplier of the Tiny FOP MOB's bricks. The output of the stage is 1 kg of dry matter. The processes taken into account are soil preparation and sowing, fertilizing, harvesting, baling and transport of the hemp to the transformation site, as reported in Fig. 2. The sowing rate has been set to 48 kg per hectare, while the fertilizing consisted of 80 kg/ha of nitrate, 58 kg/ha of phosphate and 115 kg/ha of potassium. A herbicide has also been employed in the cultivation, corresponding to approximately 3 kg of glyphosate per hectare. The harvested hemp is finally transported by a lorry, whose weight is between 16 and 32 t, for approximately 100 km to the processing site.

4.2 Hemp Shives Production-Stage A1b

Once the hemp is fed into the manufacturing line, the decortication process mechanically removes the fibers from the straw and separates the material into fibers, shives and dust. Considering that the production of hemp per hectare is around 7500 kg, the decortication process resulted in 30% fiber, 5% waste and the remaining 65% shives. The environmental assessment of the fiber and the dust is outside the system boundary. However, those derivatives can be reused for other industrial purposes, such as ropes, as a filler in plastics or lime renders, or compressed for fuel logs. The production line consumes approximately 0.07 kWh to process 1 kg of hemp and has been powered by electricity from the grid. The line can process 4 tons of hemp straw per hour.

4.3 Hemp Blocks Production-Stage A3

In the production of the hemp bricks, the two materials required are hemp shives and hydraulic lime, which have been delivered in bulk to the production site. Specifically, the hemp shives have been sent from the processing site in Austria to the brick production site in the Vintschgau Valley (Stage A2 in Fig. 2). The distance is approximately 600 km, and a less-than-16-tons truck has been used. The hydraulic lime, instead, has been shipped from Germany in a truck weighing less than 16 tons, for about 800 km. The raw materials have been sent from the storage place to a mixing machine, where they are finally combined with water. In total, 0.357 kg of hemp shives, 0.471 kg of hydraulic lime and 0.514 L of tap water are necessary to produce 1 kg of hemp bricks. The mixture has then been poured into a cement block mold.


4.4 Construction of the Tiny Prototype-Stage A5

For the final Tiny FOP MOB production, all materials have been assembled at the constructor site in the Vintschgau Valley. In this case, the main operations have been cutting, drilling and screwing. The estimated total energy consumption is 32 kWh. Raw materials have been sent from local suppliers in the case of hemp bricks (30 km away), wood (14 km away) and titanium sheet (5 km away). The cement and the insulating wall were supplied by companies from Germany, particularly from the Stuttgart area, about 400 km from the construction site. A truck weighing between 3.5 and 7.5 tons has been employed for all transport operations (stage A4 in Fig. 2). The handling of raw materials within the construction site has again been neglected. In total, about 3 tons of hemp bricks have been used to construct the Tiny FOP MOB’s perimeter. The total volume of wood is about 8 cubic meters, considering the two different types of timber. The zinc plate weighed 125 kg, and 0.0015 m3 have been allocated to the wood fiber vapor brake. In assembling the wood and other materials, about 250 screws and brackets have been used.
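Since the functional unit of the study is 1 m2 of floor surface and the prototype has a floor area of 25 m2 (see Table 1), the construction-stage totals can be scaled per functional unit as in the hedged sketch below; the quantities are taken from the text, while the scaling itself is our illustration rather than the authors' model.

```python
# Minimal sketch of scaling the construction-stage inventory (Sect. 4.4)
# to the functional unit of 1 m2 of floor surface. The 25 m2 floor area is
# taken from Table 1; all other figures come from the text above.

FLOOR_AREA_M2 = 25.0

construction_inventory = {            # totals for the whole prototype
    "hemp bricks [kg]": 3000.0,
    "timber [m3]": 8.0,
    "roof sheet metal [kg]": 125.0,
    "wood-fibre vapor brake [m3]": 0.0015,
    "assembly electricity [kWh]": 32.0,
    "screws and brackets [pcs]": 250.0,
}

per_functional_unit = {item: total / FLOOR_AREA_M2
                       for item, total in construction_inventory.items()}

for item, amount in per_functional_unit.items():
    print(f"{item}: {amount:.4f} per m2 of floor")
```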

5 Results and Discussion

5.1 Life Cycle Impact Assessment

The life cycle impact assessment has been carried out by using the acknowledged characterization method CML-IA Baseline, developed by the Institute of Environmental Studies of the University of Leiden [21]. The CML-IA Baseline has been chosen since it is one of the most widely accepted methods in the construction sector [22], which makes it largely adopted also in recent studies [23]. The choice of the method allows us to compare the results obtained in the present study with others available in the literature. CML-IA Baseline allows the determination of the impact that the functional unit has on soil, air and water by analyzing eleven impact categories:
• Abiotic Depletion and Abiotic Depletion fossil, which relate to the extraction of minerals and fossil fuels, calculated in kg Sb equivalent and MJ, respectively;
• Global Warming (GWP 100a), due to the emission of greenhouse gases over a 100-year time interval, calculated in kg CO2 equivalent;
• Ozone Layer Depletion, due to the emission of ozone-depleting gases, measured in kg CFC-11 equivalent;
• Human Toxicity, Freshwater Aquatic Ecotoxicity, Marine Aquatic Ecotoxicity, and Terrestrial Ecotoxicity, which measure the effects of toxic substances on human health, freshwater and groundwater, marine and terrestrial systems, respectively, indicated in kg 1,4-DB equivalent;


• Photochemical Oxidation, which measures the formation of reactive substances (mainly ozone) that can be harmful to human health and ecosystems, calculated in kg C2H4 equivalent;
• Acidification, which measures the production of acidifying substances that damage terrestrial ecosystems, expressed in kg SO2 equivalent;
• Eutrophication, which includes all impacts due to excessive levels of macronutrients in the environment, measured in kg PO4 equivalent.
The analyses have been carried out with the SimaPro software, version 8.0.2, considering a lifetime of 100 years.

5.2 CML-IA Baseline Results for the Mobile Tiny House Prototype

Figure 3 shows the normalized results for the different environmental impact categories according to the CML-IA Baseline method for the functional unit of the whole prototype, i.e. one square meter of floor. For each indicator, the normalization factors are based on the average yearly environmental load worldwide, divided by the number of inhabitants, for the reference years 1990 and 1995. The calculation is provided by the SimaPro software.
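A minimal sketch of this normalization step, in our own notation rather than the paper's:

```latex
% Hedged sketch of the normalization applied by SimaPro (notation ours).
\[
  N_c = \frac{I_c}{\mathrm{NF}_c},
  \qquad
  \mathrm{NF}_c = \frac{L_c^{\mathrm{world}}}{P^{\mathrm{world}}}
\]
```

Here I_c is the characterized impact of the functional unit in category c, L_c^world the average yearly world load in that category, and P^world the world population of the reference year (1990 or 1995), so each normalized score expresses the impact as a fraction of the yearly per-capita world load.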

Fig. 3. CML-IA results for each impact category.

Based on Fig. 3, the processes with the most significant impact on total emissions are the production of titanium sheet metal, wood and bricks, and the transport of raw materials. The titanium sheet used for the Tiny FOP MOB roof accounts for almost the entire Abiotic Depletion category (98%), as it is an alloy whose production involves extracting minerals from the ground. This material also significantly affects the toxicity caused to water and groundwater (37%) and to the marine ecosystem (47%).


Wood has a homogeneous impact on all categories. Considering wood production as a single item, without distinguishing between different typologies, an effect of more than 51% has emerged in the Human Toxicity category. Out of this percentage, 20% is mainly due to the chemicals emitted during the production of laminated wood for outdoor use. An impact of 45% is also observed in the Photochemical Oxidation class, mainly due to a reaction that occurs when the wood is exposed to UV radiation.

Concerning the transport of raw materials to the production site, the most affected categories are Abiotic Depletion fossil (45%), due to the extraction of fuel, and Global Warming (35%), due to the emission of greenhouse gases, which also impacts Ozone Depletion (25%). These values are mostly related to raw materials supplied from other countries, such as Austria and Germany.

Although they combine different materials, the hemp bricks do not significantly affect the prototype as a whole, despite their considerable weight in its construction. The highest effects occur in the Eutrophication (52%) and Terrestrial Ecotoxicity (52%) categories. This is most likely due to the need to use fertilizers and pesticides in hemp cultivation. Another category with a 30% impact is Global Warming, due to the CO2 emissions in the processing of hemp and in the production of the bricks. However, the effect of the biogenic uptake of hemp and of the carbonation of the lime binder has been neglected in this analysis, as only CO2-emitting processes have been accounted for. By considering these two aspects, more CO2 is actually retained in the brick production than is emitted. According to examples and studies found in the literature, the amount of CO2 sequestered is around 18 to 34% of the CO2 emitted during the bricks' production and throughout the lifetime of the bricks [24]. Hence, considering the CO2 sequestered, it is possible to achieve a negative impact. This indicates that the use of hemp is a sustainable choice compared to other, currently more widespread materials.

In Table 1, the results achieved for the project prototype are compared with two other studies in terms of: building characteristics; materials used in their production; characterization method for the LCA calculation; total CO2 emissions into the atmosphere (for the same functional unit). Since the CO2 emitted is the most commonly assessed category in the literature, it has been decided to compare the case study with a wooden [25] and a traditional [26] building in terms of this value. The two cases were chosen to allow a direct comparison with other constructions made of sustainable materials and with traditional buildings. Unfortunately, to the authors' best knowledge and based on a literature review, there are no LCA analyses examining systems with characteristics (volume, non-residential use, mobility) closer to those of the Tiny FOP MOB.

From Table 1, it can be seen that the two examples have very different sizes. The authors have therefore decided to compare the structures according to the functional unit of one square meter of floor surface to overcome this problem. The CO2 value of the current study is significantly higher than in the two comparison cases. The difference is primarily ascribable to the design choices made on the prototype. Indeed, the Tiny FOP MOB was designed and built primarily for scientific, dissemination and educational purposes.
Some aspects of raw material transport and manufacturing would not have been considered for tiny houses of a standard size. For example, the fact that the walls are built primarily with hemp bricks entails a higher weight of the house.

Table 1. Comparison of case studies with wooden and traditional buildings

| | Tiny FOP MOB | [25] | [26] |
| Functional unit (FU) | 1 m2 of floor surface | 1 m2 of floor surface | 1 m2 of floor surface |
| Building characteristics | 25 m2 of floor surface; energy consumption being calculated | 5-storey building; net floor area 726 m2; operating energy consumption 63 kWh/m2/a | 5 floors; net floor area 16746 m2; annual heating requirement 56 kWh/m2 |
| Materials | Spruce, larch, hemp bricks | Larch; wooden window frames; concrete; cement; mortar; rubber products; rock wool | Concrete; cement; mortar; rubber products; rock wool |
| Characterization methods | CML-IA Baseline | CML 2001 | CML v4.1 |
| GWP (kgCO2e/FU) | 300 | 42 | 6 |

However, this design choice was made in order to have a stable structure that could be transported by trailer across different towns. Hence, more material is required in its construction than in prefabricated units designed for residential buildings. There is also a gap between the wooden and the traditional construction. The reason is mainly related to the characteristics of the two buildings. The net floor area of the traditional one is much larger, which distributes the CO2 emission values better over the functional unit. The difference, however, is again a matter of design choices; the project requirements imposed the fabrication of a house transportable on standard truck trailers. So far, in the construction sector, sustainable materials such as hemp and wood are used in small and medium-sized buildings. The structural obstacles of sustainable materials have to be overcome in order to start thinking about large traditional sustainable buildings.

6 Conclusions

LCA has become a fundamental tool to evaluate choices in terms of resources for the design and construction of buildings. In this study, the use of sustainable materials, such as hemp and wood, has been evaluated through LCA on a prototype of a tiny house built for the Tiny FOP MOB project. The environmental advantages of using sustainable materials, compared to traditional ones, have been demonstrated. The results obtained suggest that, on a large scale, this type of building can be a good sustainable choice in combination with optimal design, manufacturing and transportation processes. The example reported in this paper underlines how having so many constraints on the design and construction of the prototype creates disadvantages in terms of sustainability that cannot be compensated by merely using sustainable materials.


In this case, the design requirements were so constraining that they compromised the design efforts towards sustainability. This brings us back to the widely discussed conflict in the literature between performance and sustainability. Looking at the future development of the tiny house, the LCA analysis would constitute the first step towards obtaining the energy certification. Hence, there will be a real possibility of moving into the actual production of the tiny house, rethinking its design towards a more sustainable construction. A further objective would be to combine the LCA analysis with a life cycle cost analysis. In this case, the balance between costs and sustainability will be evaluated, while also contributing to the improvement and evolution of design for sustainability in the early stages. It has been shown that taking costs into account at an early stage of development may help make informed design choices [27, 28]. In this case, the trade-off between sustainability and costs can be effectively considered.

Acknowledgments. The work is fully supported by the project “Tiny FOP MOB - A Real World Laboratory made of wood and hemp travelling through the Vintschgau Valley”, funded by the European Regional Development Fund (ERDF) Investment for Growth and Jobs Program 2014–2020, 5th call in the Axis 1 “Research and Innovation” of the Autonomous Province of Bolzano-South Tyrol, grant FESR1161.

References

1. Meadows, D.H., Meadows, D.L., Randers, J., Behrens, W.W.: The Limits to Growth. Universe Books, New York (1972)
2. Morelli, J.: Environmental sustainability: a definition for environmental professionals. J. Environ. Sustain. 1(1), 2 (2011)
3. Golinska, P., Kosacka, M., Mierzwiak, R., Werner-Lewandowska, K.: Grey Decision Making as a tool for the classification of the sustainability level of remanufacturing companies. J. Clean. Prod. 105, 28–40 (2015)
4. IDEO: Design Kit. https://www.designkit.org/methods. Accessed 29 Sept 2020
5. Bocken, N.M.P., de Pauw, I., Bakker, C., van der Grinten, B.: Product design and business model strategies for a circular economy. J. Ind. Prod. Eng. 33(5), 308–320 (2016)
6. de Aguiar, J., de Oliveira, L., da Silva, J.O., Bond, D., Scalice, R.K., Becker, D.: A design tool to diagnose product recyclability during product design phase. J. Clean. Prod. 141, 219–229 (2017)
7. Mesa, J., Esparragoza, I., Maury, H.: Developing a set of sustainability indicators for product families based on the circular economy model. J. Clean. Prod. 196, 1429–1442 (2018)
8. Kravchenko, M., Pigosso, D., McAloone, T.C.: A trade-off navigation framework as a decision support for conflicting sustainability indicators within circular economy implementation in the manufacturing. Sustainability 13(1), 314 (2021)
9. Vinante, C., Sacco, P., Orzes, G., Borgianni, Y.: Circular economy metrics: literature review and company-level classification framework. J. Clean. Prod. 288, 125090 (2021)
10. Ruiz-Pastor, L., Chulvi, V., Mulet, E., Royo, M.: A metric for evaluating novelty and circularity as a whole in conceptual design proposals. J. Clean. Prod. 337, 130495 (2022)
11. ISO 14040-44 Environmental Management - Lifecycle Assessment: Principles and Framework. ISO 14040-44 (2006)


12. Bragança, L., Mateus, R., Koukkari, H.: Building sustainability assessment. Sustainability 2(7), 2010–2023 (2010)
13. Arrigoni, A., Pelosato, R., Melià, P., Ruggieri, G., Sabbadini, S., Dotelli, G.: Life cycle assessment of natural building materials: the role of carbonation, mixture components and transport in the environmental impacts of hempcrete blocks. J. Clean. Prod. 149, 1051–1061 (2017)
14. Abd Rashid, A.F., Yusoff, S.: A review of life cycle assessment method for building industry. Renew. Sustain. Energy Rev. 45, 244–248 (2015)
15. Cabeza, L.F., Rincón, L., Vilariño, V., Pérez, G., Castell, A.: Life cycle assessment (LCA) and life cycle energy analysis (LCEA) of buildings and the building sector: a review. Renew. Sustain. Energy Rev. 29, 394–416 (2014)
16. Junnila, S.: The environmental impact of an office building throughout its life cycle. Helsinki University of Technology (2004)
17. UNEP: Buildings and Climate Change: Summary for Decision-Makers. Sustainable Buildings & Climate Initiative, Paris (2009)
18. Cross, N.: Design research: a disciplined conversation. Des. Issues 15(2), 5–10 (1999)
19. Borgianni, Y., Cascini, G., Rotini, F.: Investigating the future of the fuzzy front end: towards a change of paradigm in the very early design phases? J. Eng. Des. 29(11), 644–664 (2018)
20. Schulte, M., Lewandowski, I., Pude, R., Wagner, M.: Comparative life cycle assessment of bio-based insulation materials: environmental and economic performances. GCB Bioenergy 13(6), 979–998 (2021)
21. Institute of Environmental Sciences (CML): CML-IA Characterisation Factors. Leiden University (2016)
22. Alhazmi, H., Alduwais, A.K., Tabbakh, T., Aljamlani, S., Alkahlan, B., Kurdi, A.: Environmental performance of residential buildings: a life cycle assessment study in Saudi Arabia. Sustainability 13(6), 3542 (2021)
23. Günkaya, Z., Özkan, A., Banar, M.: The effect of energy-saving options on environmental performance of a building: a combination of energy audit–life cycle assessment for a university building. Environ. Sci. Pollut. Res. 28(7), 8822–8832 (2020). https://doi.org/10.1007/s11356-020-11141-z
24. Arehart, J.H., Nelson, W.S., Srubar, W.V.: On the theoretical carbon storage and carbon sequestration potential of hempcrete. J. Clean. Prod. 266, 121846 (2020)
25. Takano, A., Pal, S.K., Kuittinen, M., Alanne, K., Hughes, M., Winter, S.: The effect of material selection on life cycle energy balance: a case study on a hypothetical building model in Finland. Build. Environ. 89, 192–202 (2015)
26. Häfliger, I.F., et al.: Buildings environmental impacts’ sensitivity related to LCA modelling choices of construction materials. J. Clean. Prod. 156, 805–816 (2017)
27. Altavilla, S., Montagna, F.: When lifecycle cost analysis enables strategic design considerations. In: Proceedings of the DESIGN 2018 15th International Design Conference, vol. 92, pp. 953–964 (2018)
28. Altavilla, S., Montagna, F.: When costs from being a constraint become a driver for concept generation. In: Proceedings of the 20th International Conference on Engineering Design (ICED 2015), Milan, Italy, vol. 5, pp. 27–30 (2015)

Industrial Scraps Valorization: Designing Products to Make Detached Value Chains Circular

Cappelletti Federica(B), Rossi Marta, Ciccarelli Marianna, and Germani Michele

Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
[email protected]

Abstract. Intersecting value chains for the sake of sustainability is a very challenging target; however, eco-design can enable it. The high-performing products required by customers have boosted the research, development and employment of composite materials, which often cannot be handled at their End of Life. The literature lacks hints and methodologies that support this goal. The paper presents a methodology whose core is the redesign of components so that their material can be substituted with scraps deriving from other products. It aims at a symbiosis between enterprises that are active in different sectors; it is applicable to composites, as shown in the case study, and offers an alternative to the cannibalization of and low demand for remanufactured products because, unlike previous works, it investigates how materials can be employed in different products. The case study quantitatively evaluates the environmental benefits derived from the symbiosis of two companies active in different sectors. Process scraps and a product are analyzed; a re-design follows; the scraps are remanufactured and used as new material. The results highlight the need for strong cooperation between companies to take advantage of the value hidden in their products. Future studies should focus on the economic impacts, considering not only the technical sphere but also the benefits for the companies’ image.

Keywords: Circular economy · Design for de-manufacturing · Composite materials · Design for disassembly · Industrial symbiosis

1 Introduction

Circular Economy (CE) is strictly bound to the concept of industrial symbiosis (IS), which aims to use resources in a more sustainable way than a traditional, linear economy. An IS is a form of brokering that brings companies together in innovative collaborations in which the waste or by-products of an industry or industrial process become the raw materials for another. IS requires an enterprise not only to harmoniously encompass all its internal departments but also to cooperate with downstream and upstream actors of the supply chain. The field of research in IS is immature and more research is necessary to demonstrate the economic, environmental, and social benefits and to get practical design insights related to IS [1].


CE and IS have the End of Life (EoL) product phase as their starting point: it must generate suggestions and feedback to be implemented in the design phase [2]. EoL is a very delicate lifecycle phase and must be planned to make it efficient; when it comes to EoL strategies such as reuse, remanufacturing and recycling, each product and material needs special consideration, especially when composite materials, which are the focus of this paper, are employed.

The use of composite materials is growing, and the global business sector for composite products reached about £73 billion by 2020 [3]. Consequently, enterprises must face the big challenge of developing approaches and technologies to optimize the waste generated from their manufacturing operations and EoL [4]. There are three main options currently implemented for treating composite waste: landfill, incineration and recycling. Oliveux et al. [5] calculated the related environmental impact of these strategies, obtaining significant negative burdens on the environment for landfilling and incineration treatments and lower values for the recycling process. Landfill of composite waste has been banned in Germany since 2009 and other EU countries are expected to follow this route; furthermore, when composites are incinerated, around 50% of the waste remains as ash and must be landfilled [6]. The need to further investigate EoL treatment scenarios for composites is therefore evident, as is the need to establish design protocols that positively affect the EoL opportunities of this kind of material already in the first phases of the design process.

Reuse is the best option of the European Union’s (EU) Waste Framework Directive, second only to reduction. Design strategies must focus on the implementation of the re-use of scraps, through the application of design for de-/re-manufacturing and disassembly rules. This could happen by creating circular supply chains, where scraps become input materials, reducing waste treatment costs and increasing the environmental benefits. The literature, however, does not provide enough hints and methodologies aimed at making the supply chains of composite materials circular.

The present paper presents a method of analysis in the context of eco-design whose core is to transform scraps and off-specification products into primary materials for the manufacturing of different goods, also thanks to their redesign. The developed approach has three key strengths. First, it faithfully implements two of the principles of CE, according to which the EoL and the design phases must be strictly bound, and circularity concerns a system, not a single organization. Secondly, the method aims at a symbiosis among enterprises that are active in different sectors and work with several materials; this offers a way out of a critical aspect that in certain sectors endangers CE: the cannibalization of and low demand for remanufactured products. Finally, the method explores innovative uses for composite materials, reducing the amount sent to EoL treatment. Differently from the existing literature, the proposed approach aims to employ the scraps in functions different from those they were initially conceived for (Sect. 2).
The case study in Sect. 3 implements the proposed methodology and quantitatively evaluates the environmental benefits of the IS of companies active in different sectors (kitchen equipment/home furnishing and professional appliances): the process scraps of the first are analyzed simultaneously with the product of the second, and the re-design enables the use of the scraps to produce coffee grinder components.


Results highlight the need for strong cooperation between companies to take advantage of the products’ hidden value. Design strategies that increase the applicability of circular strategies, e.g. simplicity in the shapes and features of products, are key factors for IS. Results are further discussed in Sect. 4, prior to the conclusion (Sect. 5), where future studies are outlined; these should focus on the economic impacts, considering not only the technical sphere but also the benefits for the companies’ image.

2 Industrial Symbiosis Design Approach

The proposed approach, shown in Fig. 1, aims at supporting enterprises in identifying and evaluating circular strategies for IS with a structured process, enabling them to quantify the environmental benefits of reusing scraps as raw materials. In order to make detached value chains circular and establish a link in their actions, an assessment of the present state (As-Is) is needed.

Fig. 1. Scrap characterization phase and potential employment in industrial symbiosis

This consists of a full or partial Life Cycle Assessment (LCA) for one or more product families of the companies.


The left side of the graph contains the lifecycle stages of the enterprises willing to find an innovative use for their scraps; the right side contains the potential users of those scraps. The environmental evaluation provides a picture of the current state and identifies the most critical phases. However, it is important to rationalize the current process and product status, so that it is easier to find the optimal strategies to optimize composite material waste; this is the scrap characterization phase and consists in:
• Quantifying the scraps produced in a set timeframe (e.g. yearly); this is useful to monitor the amount, the variability level and consequently the availability of a specific scrap over a reference target period;
• Clustering material typology and composition; this is useful to identify potential treatments/machining and performances; in particular:
– Geometrical properties, surface quality, features and shapes for components, to identify parts with a higher/lower degree of freedom (e.g. esthetical vs. functional components); it may be hard to achieve high surface quality by using scraps and/or by re-manufacturing composite waste;
– Material properties for components and their technical, mechanical, thermal, and physical performances; i.e. the potential new (composite) materials must be able to withstand the requirements of the components they will be employed in;
– Distances (between the enterprises of the potential IS and the dismantling centres) and frequencies of disposal;
– Environmental impacts and criticalities of the detached value chains.
This step is important to obtain a clear picture of the potential flow of parts and materials to employ in innovative applications.
The second step is the component characterization phase. It identifies potential target components to manufacture with alternative materials and processes; this is important because several components must comply with multiple constraints that prevent the use of alternative materials or production processes. The potential scenario investigation, i.e. the scenario evaluation phase, should focus mostly on linking the material performance (both technical and environmental) and the components’ function with constraints and technical feasibility. In fact, the components are unlikely to be ready to be made of scraps in their current state of design; nevertheless, small changes open up big solutions and make the components compliant with the constraints set by the scraps (i.e. the re-manufacturing process cannot employ the same technologies expected to be used in the traditional manufacturing process). The re-design is useful to overcome the challenges posed by the scraps’ features and the characteristics to be reached by the target component. A To-Be scenario foresees the identification of one or more circular strategies to apply to the specific situation. By quantifying their environmental impacts/benefits, it is possible to choose the best solution.
By focusing on de-/re-manufacturing, the method easily finds application for products and scraps made of composite material, whose disposal is very challenging and for which recycling is most of the time an unattainable standard. Secondly, it is compliant with the base principles of circular economy: it strictly links the end of life and the first stages of product lifecycles. In addition, it expects multiple manufacturers from various sectors to work together to find sustainable solutions and improvements, and to lower the scraps and waste to dismantle.


Herein lies a great strength of the methodology: the symbiosis prevents the risk of cannibalization and low demand for remanufactured products that often distress enterprises and hamper circular initiatives. By identifying an application in a completely different sector, the market of the original product is neither endangered nor jeopardized; this should be a strong incentive for manufacturers to put effort into looking for innovative implementations.
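As an informal illustration of the characterization data listed above, the information gathered for each scrap stream could be organized as in the following sketch; the field names and the example values are placeholders and are not taken from the paper, except for the composition percentages that also appear in Table 1 of the case study.

```python
from dataclasses import dataclass

# Illustrative data model for the scrap characterization phase described above.
# Field names and example values are placeholders, not data from the paper.

@dataclass
class ScrapStream:
    source_company: str
    scrap_type: str               # e.g. dust, slime, off-spec pieces
    yearly_quantity_kg: float     # availability over the reference period
    material: str
    composition: dict             # constituent -> mass fraction
    transport_distance_km: float  # to the potential user of the scrap
    current_eol: str              # present end-of-life treatment
    eol_impact_kgco2eq: float     # As-Is impact of the current treatment

def worth_investigating(scrap: ScrapStream, demand_kg: float) -> bool:
    """Simple screening rule: enough material available and a burdensome EoL."""
    return scrap.yearly_quantity_kg >= demand_kg and scrap.eol_impact_kgco2eq > 0

example = ScrapStream(
    source_company="Company 2",
    scrap_type="off-spec pieces",
    yearly_quantity_kg=12_000,
    material="mineral-filled PMMA composite",
    composition={"mineral": 0.70, "PMMA": 0.25, "additives": 0.05},
    transport_distance_km=30,
    current_eol="incineration",
    eol_impact_kgco2eq=8_500,
)
print(worth_investigating(example, demand_kg=5_000))  # True
```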

3 Case Study: Include Scraps in Components Manufacturing

3.1 Scraps and Components Characterization

The proposed method was applied by three Italian companies from the Macerata district, whose relative distance is not greater than 30 km. Two of them made their scraps available to be processed and employed as secondary raw materials in the production of a professional coffee grinder. The scraps derive from the manufacturing of marble and porcelain stoneware products (Company 1) and of sinks for domestic kitchens (Company 2). They are either process scraps (i.e. dust or pieces from Computerized Numerical Control - CNC - machining) or pieces discarded because they do not meet internal quality standards. Their dimensions are variable and unpredictable because they are related to process inaccuracy.

The As-Is analysis evaluates, through the LCA methodology and tool, the environmental burdens deriving from the extraction, production, and disposal of the annual quantity of scraps produced by Company 1 and Company 2. Colored icons in Fig. 1 highlight which lifecycle stages were considered. Table 1 summarizes the Life Cycle Inventory (LCI) data and the results of the environmental assessment of the scraps deriving from the processes of Company 1 and 2. For each waste type, the percentage in weight of the annual scraps is indicated, together with the material and the respective composition.

Table 1. LCI scraps and environmental impacts distribution. Company 1: slime, mixed dust and discarded pieces of marble, porcelain stoneware (silica sand, clay, feldspar), granite and quartz agglomerate (quartz dust, polyester); transport by road truck; EoL: quarry re-development. Company 2: sink scraps (87%) and sink dust (13%), composed of mineral (70%), PMMA (25%) and additives (5%); transport by road truck; EoL: mechanical pre-treatment and incineration.

Methods & Data collection strategies matrix (9 × 9, factors V1–V9): cells above the diagonal report the suggested data collection strategy (DB, S, P or Q); cells below the diagonal report the suggested analysis typology (Sc, Si, SA or F).

To take the greatest advantage of the presented approach, whoever intends to draft an environmental analysis should first go through the graph of Fig. 1 and answer its questions systematically. Subsequently, by entering the Methods & Data collection strategies matrix, it is possible to identify the cells that correspond to the status of the organization and the boundaries of the analysis.


This must be done for each of the 9 factors (lines). Although the matrix is symmetrical (9 × 9), the content of the cells is not identical above and below the diagonal. Above it, the green cells contain the outcome of the relation between factors concerning the data collection strategies, while below it the yellow cells suggest the type of analysis that best fits the combination. The content of all cells relevant to the current case must be highlighted. Once each line contains the answer to the previous questions, the resulting acronyms must be counted. Those with the largest counts reveal the type of analysis and the collection strategy recommended for the current situation. In fact, the method allows the identification of the best (i.e. most recurrent) typologies of analysis. Both Fig. 1 and Table 3 refer to the same acronyms and meaning of the colored cells.
The potential analysis typologies are:
• Scouting (Sc): this analysis is less detailed and aimed at discovering alternatives to the current process or product;
• Simplified analysis (Si): this refers to a specific process or product but does not follow any well-stated standard;
• Spot analysis (SA): with this, the level of detail rises; nevertheless, it focuses on a single or few process or product aspects;
• Full LCA (F): this is the most detailed analysis, also normalized and standardized by UNI EN ISO 14040/44.
Concerning how data can be collected, the following alternatives can be chosen:
• Database (DB): this consists in choosing statistical averages of real processes contained in databases (free or commercial); although this ensures short analysis times, the results are quite general and may not describe the specific situation accurately;
• Secondary data collection (S): all data available in the literature belong to this category, together with all public results of previous studies (i.e. Environmental Product Declarations, EPDs, etc.);
• Primary data collection (P): this is undoubtedly the most detailed way to represent a process or a product, but at the same time it requires high effort, both in terms of time and resources; it consists in measuring data of what is analyzed;
• Questionnaires (Q): this is an alternative to the previous data collection strategy; by submitting questionnaires to suppliers (both upstream and downstream in the SC), data peculiar to their realities can be collected. Nevertheless, this requires time and presupposes a good relationship between the suppliers and the company that carries out the analysis.
As previously stated, the reasons why the analysis is approached may be very different and influence what comes after the analysis. Table 4 summarizes which capitalization action can be taken, according to why and how the analysis is carried out (RQ2). The first and second columns cluster, for each objective, the available analysis typologies and data collection strategies; the third column contains the actions that can originate from the analysis; it is then shown how the analysis results can be spread inside or outside the organization, together with possible constraints to consider and observe.
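Returning to the selection procedure described at the beginning of this section, the following is a minimal sketch of the tallying logic. The cell contents below are invented placeholders (the real 9 × 9 matrix is not reproduced here); only the counting procedure reflects the text: collect the acronyms of the cells matching the organization's status and pick the most recurrent ones.

```python
from collections import Counter

# Hypothetical excerpt of the matrix: for each pair of factor levels, the
# suggested analysis typology (below-diagonal cells) and data collection
# strategy (above-diagonal cells). Values here are placeholders.
MATRIX = {
    ("V1:not_urgent", "V2:low_budget"):  ("Si", "DB"),
    ("V3:low_detail", "V4:gen_assess"):  ("F",  "S"),
    ("V5:single_site", "V6:leading"):    ("F",  "S"),
    ("V8:non_strategic", "V9:in_site"):  ("SA", "S"),
}

def recommend(status_pairs):
    """Tally analysis typologies and collection strategies over the matched cells."""
    analyses, strategies = Counter(), Counter()
    for pair in status_pairs:
        analysis, strategy = MATRIX[pair]
        analyses[analysis] += 1
        strategies[strategy] += 1
    return analyses.most_common(1)[0][0], strategies.most_common(1)[0][0]

if __name__ == "__main__":
    status = list(MATRIX.keys())   # roughly the situation of the case study in Sect. 3
    print(recommend(status))       # -> ('F', 'S'): full LCA based on secondary data
```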


Table 4. Capitalization actions and possible constraints.

| Objective | Method | Coll. strategy | Capitalization action | Dissemination | Constraints |
| Marketing | Si, SA, F | DB, Q, S | Marketing strategy; advertising project | External; internal | Budget; timing; competitors |
| General assessment | Sc, Si, F | DB, Q, P, S | Redesign (process, product, material); professional figures selection; training; workshops; standards/guidelines; more detailed analyses; sustainability/circularity strategy | External; internal | Lack of prof. roles dedicated to sustainability; lack of partners for implementation of strategies; functional constraints; budget; timing; norms and regulation |
| Certification | SA, F | Q, P, S | Certification | External; internal | Quality of the analysis |
| Re-design comparison | SA, F | Q, P, S | Marketing strategy; redesign (process, product); material selection; training; workshops; standards definition; guidelines definition; more detailed analyses | External; internal | Lack of professional figures dedicated to sustainability; lack of partners to implement possible strategies; functional constraints; budget; timing; norms and regulation; competitors |


In particular, two typologies of dissemination can occur: external and internal. The first represents the strategies used to communicate the analyses realized on the product to evaluate/improve its environmental performance. The second represents how the company can internally capitalize on the results obtained through the studies. All the results need to be communicated to the people in charge of product design, from the definition of materials and shapes to the identification of production processes, connection strategies and assembly procedures. Several forms can be considered, according to the specific context in which they will be applied:
• Training sessions, to increase designers’ competence level on environmental sustainability, stimulate the consideration of environmental sustainability inside the design process and introduce the environmental profile of the analysed company products.
• Workshops, to present and communicate the results obtained in the environmental analysis; particularly useful when analyses (simplified or detailed) are conducted by external experts, to disseminate the main conclusions of the studies to internal company staff; brainstorming sessions can follow the workshop to identify possible solution strategies;
• Checklists and procedures, to guide in a simple way the choices made during the design phase. According to the results obtained in the analysis, the company can define a list of materials, processes, assembly strategies and procedures to avoid or to recommend; particularly useful to guide the design phase when no or minimal competences on environmental sustainability are held by the internal staff in charge of the design phase.
• Guidelines, to summarize the results by creating a direct correlation with product/process characteristics; particularly useful when the analyses are conducted internally; the competences and the awareness of product characteristics allow the company to produce specific material able to translate the results into suggestions for product design. Guidelines can be derived for all the products which present similar characteristics and support their environmentally sustainable design.

3 Case Study

To validate the proposed method, it was implemented in an Italian company, a leader in the design and production of leather armchairs and sofas. The company is facing the environmental question for the first time, and the main objective is to obtain an overview of the environmental performances of the different models of its iconic product: the leather armchair. The variables of the proposed method were defined as follows for the specific case:
• V1 & V2: no urgency in terms of duration and a limited budget for the analysis;
• V3 & V4: low detail level for the outcomes, because it is the first time the company faces the environmental issue;
• V5, V6 & V7: production located in a single site and a leading role in a restricted supply chain (both in terms of suppliers’ number and location, mostly in the same district);
• V8 & V9: presence of non-strategic suppliers and in-site production.


According to the method, the suggestion for the company is to realize a full attributional LCA (A-LCA), based on secondary data. The full A-LCA allows the company to derive quantitative results for all the product life cycle phases, responding to the main objective; since a high level of detail is not needed, the use of secondary data is indicated. The company received the support of researchers both to apply the method and to realize the LCA, due to the absence of internal competencies in terms of environmental sustainability. The A-LCA was therefore performed according to the reference standards.

Objective. Compare the environmental impact of three different models of leather armchairs to derive first considerations on their environmental performances.

Functional Unit and Reference Flows. According to the objective, the functional unit is to “ensure the comfortable seat of one person for 15 years”. The lifetime of 15 years was fixed according to the Product Category Rules [19] for this type of product. The reference flow referring to the functional unit considers three different leather armchairs, realized with three different design principles:
• A1 - Traditional design: the structure is realized in hardwood, with a traditional springing system (steel springs bound together with jute string) and classic-shape padding in vegetable hair; it has about 8 m2 of leather, for a total weight of about 70 kg.
• A2 - Modern design: the structure is realized with hardwood, plywood and steel, with a traditional springing system (steel springs bound together with jute string) and modern-shape padding in polyurethane (PU) foam; it has about 5.5 m2 of leather, for a total weight of about 44 kg.
• A3 - Innovative design: a steel structure (optimized in mass), with an innovative springing system based on textile belts connected to the structure by a simple locking system; the cover and the seatback are padded with polyurethane foam. It has about 4.4 m2 of leather, for a total weight of about 40 kg.

System Boundaries, Databases and Tool. All the product life cycle phases were included in the analysis, from raw material extraction to dismantling processes. The SimaPro v9 software was chosen, with the Ecoinvent 3.6 database. Concerning the life cycle phase related to leather production, due to the absence of secondary data in the Ecoinvent database, the results of an EPD realized by the company’s leather supplier were used. The leather supplier has developed several EPDs in recent years to certify the environmental performances of its leathers; the specific typology of leather used in the analyzed armchairs is also certified. All the other materials used were modeled using secondary data, as were the pre-production processes. Concerning the internal processes, data on resource consumption were derived directly from the company site through a data flow management system implemented in the plant, and then modeled with secondary data.
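As a hedged illustration of how the supplier's EPD enters the model, the per-square-meter EPD result can simply be scaled by the leather area of each reference armchair; the EPD figure used below is a placeholder, not a value taken from the paper or from the EPD.

```python
# Minimal sketch (with a hypothetical EPD figure) of plugging the supplier's
# EPD result in for the leather, scaled by the leather area of each model.
EPD_GWP_PER_M2 = 25.0  # placeholder kg CO2 eq. per m2 of finished leather

leather_area_m2 = {"A1": 8.0, "A2": 5.5, "A3": 4.4}  # from the reference flows

leather_gwp = {model: area * EPD_GWP_PER_M2
               for model, area in leather_area_m2.items()}
for model, gwp in leather_gwp.items():
    print(f"{model}: {gwp:.1f} kg CO2 eq. from leather")
```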


The transport of materials and semi-finished products was modeled based on the location of the suppliers and on the means of transport used. The transport of finished products to consumers was modeled according to the related PCR: national transport (with a 16–32 ton lorry) over a road distance of 1000 km. The use phase was modeled assuming a cleaning of the product every six months using a very mild soap diluted with water and a reusable cloth, which is not included in the analysis. The EoL phase was modeled according to the following scenario: 50% in mass of the armchairs directed to landfill as municipal waste; 50% in mass of the armchairs directed to incineration; 90% in mass of the carton box packaging directed to the recycling centre; 10% in mass of the carton box packaging directed to landfill (to account for the inefficiency of the recycling process). A cut-off was applied in the input data for all components with a mass lower than 20 g, which corresponds to 0.05% of the lowest armchair mass (e.g. labels).

Impact Categories and Related Methods. The use of an EPD for the impact of leather production forces the selection of the same impact categories and methods. Consequently, the method (EPD 2018 v1.01) and the indicators considered are those used in the EPD: acidification (AP) [kg SO2 eq.], eutrophication (EP) [kg PO4 eq.], global warming (GWP) [kg CO2 eq.], photochemical oxidation (POP) [kg NMVOC], abiotic depletion, elements (ADP) [kg Sb eq.], abiotic depletion, fossil fuels (ADPF) [MJ], water scarcity (WS) [m3 water eq.].

Results. At first, the trend of the LCA performed for the three reference products is obtained, as shown in Fig. 2. The use phase was excluded from the results because it presents a negligible impact (less than 0.01% in all the impact categories). The results show a predominance of the material phase for all the products, and the leather covers over 70% of the total impact in all the impact categories except ADP. This is the reason why product A1, which has the highest weight of leather, presents the highest impact, followed by A2 and A3. The results differ for the ADP category, where the presence of steel in the springs and structure determines the highest impact for A2. The optimized geometry (and consequently the reduced mass) of the structure makes the impact of A3 very low in this category, close to A1, where only steel springs are present.

Other interesting results were derived for the structures and padding systems, which are the second most impactful elements after the leather. A hardwood structure and a natural padding material determine (Table 5) the lowest environmental impacts (A1), in comparison with plywood, steel and PU foam (in A2 and A3). On the contrary, based on its design, A3 has a higher potential to be disassembled, due to the use of simple and easy-to-disassemble connections, with consequently better performances at the EoL if reuse/recycling strategies could be applied. However, in the present analysis, alternative EoL strategies are not considered.
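The end-of-life scenario and the cut-off rule stated above can be summarized with the following sketch; the split percentages, the 20 g threshold and the 40 kg reference mass come from the text, while the packaging mass is a placeholder.

```python
# Hedged sketch of the EoL scenario and cut-off rule described above.

def eol_split(armchair_kg: float, packaging_kg: float) -> dict:
    """Mass routed to each EoL treatment under the scenario of the study."""
    return {
        "armchair to landfill [kg]":     0.50 * armchair_kg,
        "armchair to incineration [kg]": 0.50 * armchair_kg,
        "carton box to recycling [kg]":  0.90 * packaging_kg,
        "carton box to landfill [kg]":   0.10 * packaging_kg,
    }

CUTOFF_KG = 0.020  # components below 20 g are cut off from the inventory

print(eol_split(armchair_kg=40.0, packaging_kg=5.0))  # packaging mass is a placeholder
print(f"Cut-off share of the lightest armchair: {CUTOFF_KG / 40.0:.2%}")  # -> 0.05%
```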


Fig. 2. Life cycle impact results for the armchairs (one panel per impact category: AP, EP, POP, GWP, WS, ADPF and ADP; contributions of Material, Leather, Manufacturing, Transport and EoL for models A1, A2 and A3).

Table 5. Impact comparison for structures

| Product | AP | EP | POP | GWP | WF | ADF | ADP |
| A1 | 2.1E−2 | 8.7E−2 | 1.8E−1 | 1.9E−1 | 3.7E+1 | 3.6E+2 | 0.1E−2 |
| A2 | 8.8E−1 | 3.2E−1 | 6.6E−1 | 6.6E−1 | 7.7E+1 | 2.1E+3 | 3.0E−2 |
| A3 | 8.5E−1 | 3.4E−1 | 7.0E−1 | 7.0E−1 | 9.4E+1 | 2.5E+3 | 1.4E−2 |

4 Discussion

Through the application to the industrial case study, the proposed structured and modular method proves successful in supporting managers in the selection of methods and tools for the effective implementation of environmentally sustainable strategies (RQ1) and in using them to transfer the acquired knowledge in a way that is convenient for the organization (RQ2).


In this case, the suggestion to perform a full A-LCA guides the company to derive a complete overview of the product’s environmental burden. At the same time, the method answers the low level of detail required by proposing the use of secondary data. The specific context of leather production forces the use of the EPD results, simplifying the entire analysis. The application of the method and the related implementation of the analysis involved one designer and one manager, directly involved in the LCA (from the objective definition to the inventory phase) and in the interpretation and capitalization phase, with the support of one environmental expert. The method allows the constraints to be fulfilled in terms of budget, internal resources and outputs obtained.

The results of the environmental analysis are used to train designers and architects in two workshops, to increase their competencies on the matter. The training material was structured in guidelines, which show and compare:
• The actual solution and its environmental impact on several impact categories;
• The reasons (design solutions) behind the obtained results;
• Alternative design strategies (already implemented in other models/products or new) and their impacts, focusing on improvement strategies.
The company has also planned the next steps: a detailed analysis of EoL strategies for the leather and of design strategies to favor them (e.g. avoiding permanent connections), and a comparison of impacts for innovative leathers, produced by applying more sustainable and efficient tannery processes.

5 Conclusion

The present work presented a structured procedure that first guides managers and designers in the implementation of the right steps toward sustainability and in the selection of the most effective tools and methods for their scope and context. Subsequently, it suggests alternatives to capitalize on the gained knowledge. Its implementation in the case study of a company that produces armchairs and was approaching the environmental analysis topic for the first time allowed the identification of the best type of environmental assessment and the most appropriate data collection strategy. The results showed that the highest environmental impacts are related to the leather. Other critical aspects are the PU foam used in the armchairs and the steel structure, compared to the lower impacts of frames made of hardwood and natural padding. The identified follow-up actions regard guidelines for both designers and architects and a more detailed LCA that can include alternative EoL strategies.

References

1. Brambila-Macias, S.A., Sakao, T.: Effective ecodesign implementation with the support of a lifecycle engineer. J. Clean. Prod. 279, 123520 (2021)
2. Lame, G., Leroy, Y., Yannou, B.: Ecodesign tools in the construction sector: analyzing usage inadequacies with designers’ needs. J. Clean. Prod. 148, 60–72 (2017)


3. Dekoninck, E.A., Domingo, L., O’Hare, J.A., Pigosso, D.C., Reyes, T., Troussier, N.: Defining the challenges for ecodesign implementation in companies: development and consolidation of a framework. J. Clean. Prod. 135, 410–425 (2016)
4. Albers, A., Behrendt, M., Klingler, S., Reiß, N., Bursac, N.: Agile product engineering through continuous validation in PGE - Product Generation Engineering. Des. Sci. 3 (2017)
5. Buchert, T., Halstenberg, F.A., Bonvoisin, J., Lindow, K., Stark, R.: Target-driven selection and scheduling of methods for sustainable product development. J. Clean. Prod. 161, 403–421 (2017)
6. Al-Haddad, S., Kotnour, T.: Integrating the organizational change literature: a model for successful change. J. Organ. Chang. Manag. 28, 234–262 (2015)
7. Pereira, G.M., Jabbour, C.J.C., Finne, M., Borchardt, M., Santos, M.A.: Expert insights on successful multinational ecodesign projects: a guide for middle managers. J. Clean. Prod. 248, 119211 (2020)
8. Stone, L.J.: Limitations of cleaner production programmes as organisational change agents. II. Leadership, support, communication, involvement and programme design. J. Clean. Prod. 14, 15–30 (2006)
9. Andreasen, M.M.: 45 years with design methodology. J. Eng. Des. 22(5), 293–332 (2011)
10. Hjorth, P., Bagheri, A.: Navigating towards sustainable development: a system dynamics approach. Futures 38(1), 74–92 (2006)
11. Pollini, B., Rognoli, V.: Early-stage material selection based on life cycle approach: tools, obstacles and opportunities for design. Sustain. Prod. Consum. 28, 1130–1139 (2021)
12. Environmental Management Software. https://www.intelex.com/products/environment/. Accessed 01 Feb 2022
13. Greenstone. https://www.greenstoneplus.com/about-us/supply-chain-software. Accessed 01 Feb 2022
14. Sustainability tool. https://www.sustainabilitytool.com/our-clients/lmjv/. Accessed 01 Feb 2022
15. Corporate sustainability. https://sphera.com/corporate-sustainability-software/. Accessed 01 Feb 2022
16. CircoUp. https://gs1it.org/chi-siamo/eventi/20211018-corso-circol-up-lo-strumento-per-lamisurazione-della-circolarita-in-azienda/
17. GaBi Packaging Calculator. https://gabi.sphera.com/international/software/gabi-envision/gabi-packaging-calculator/. Accessed 01 Feb 2022
18. Eqopack. https://quantis-intl.com/it/metriche/soluzioni-digitali/eqopack/. Accessed 01 Feb 2022
19. Product Category Rules, Seat, 2020-04-17, CPC 1811

Workstation Design Centered to Workers’ Safe Use

Patrick Martin1(B), Bruno Daille-Lefèvre2, Xavier Godot1, and Jelena Petronijevic1

1 Laboratoire de Conception Fabrication Commande - Arts et Métiers Institute of Technology, Université de Lorraine, 4 rue Augustin Fresnel, 57078 Metz, France
[email protected]
2 Institut National de Recherche et de Sécurité (INRS), Work Equipment Engineering Department, 1 rue du Morvan, 54519 Vandœuvre-lès-Nancy Cedex, France

Abstract. Several studies have pointed out that human factors are one of the main significant challenges in the development of the Factory of the Future (FoF). We now speak of Industry 5.0: Industry 4.0 with an emphasis on ecology and human aspects. In this context, the aim of this paper is to present the new issues for workers’ health and safety which have to be taken into account as soon as possible in the workplace design process and its use. We speak of “integrated prevention”, which has to be applied from the system requirements to the functional design and the architectural design. The deployment of new technologies and organizations in the frame of the FoF introduces hazardous events and health and safety problems which have to be taken into account in the different life situations. The identification of dangerous situations therefore requires the identification of characteristic design parameters related to risk qualification for the operator’s tasks. We start from the basic idea that an accident is the interaction between a dangerous device and the worker, which can be directly linked to the generalized energy in a defined space. Each energy is characterized by parameters related to hazards for which the severity is defined by standards and regulations. The proposed methodology combines a generic design approach, based on systems engineering, to identify the uses that operators will make in order to ensure production requirements, and an energy approach for identifying potential hazards.

Keywords: Health and safety · Hazard identification · Factory of the future · Systems engineering · Integrated prevention

1 Introduction

1.1 Context and Aim of the Paper

Several studies have pointed out that human factors are one of the main significant challenges in the development of the Factory of the Future (FoF). Numerous studies have underlined this issue in terms of scientific aspects as well as industrial deployment and societal aspects.


The European Risk Observatory has carried out a literature review dedicated to the identification of new risks and challenges for workers, based on a Delphi methodology, in order to forecast physical, chemical, biological and psycho-social risks; multifactorial risks, the complexity of new technologies and the poor ergonomic design of workplaces and human-machine interfaces are the main emerging ones [1]. The EFFRA (European Factory of the Future Research Association), in its last report [2], highlights this point in terms of new job profiles, well-being in the factory, safety, competence and qualification growth, and social and environmental responsibility, but this survey stays at a general level; its implementation needs in-depth studies. In this direction, the European project humAn CEntred Factories (ACE) [3] has developed the implementation of new technologies in industrial workshops in order to adapt individual skills, to encourage a symbiotic relationship between operators and equipment and to develop working practices for increasing well-being and safety.

Industry 4.0 leads to an increased amount of information and to new ways of using automation through new forms of collaboration between humans and robots. A classification of collaboration levels has been proposed in the frame of another European project, Manuwork [4]. The increasing complexity and variety of manufacturing operations call for the development of tools allowing the interaction between humans and equipment. Urgo et al. [5] proposed an approach using artificial intelligence to identify the workers’ actions. This work also exploits knowledge related to the processes, through Hidden Markov models, to identify possible deviations from the planned execution or dangerous situations. Newman et al. [6] present a systematic framework for considering human factors throughout the several phases of implementation of Industry 4.0 (conceptualization, design, implementation), based on a large literature review. This framework gives a global view of the challenges, stakeholders and key parameters to be taken into account. Reiman et al. [7] present a scoping review dedicated to ergonomics problems. In these cases, the problems are observed in real workplaces and only curative ways of prevention can be proposed. Several works [8–10] are more dedicated to human-robot collaboration and are based on technical solutions (interfaces, sensors, ICT technologies, control algorithms).

The human in the factory of the future can be considered through several angles of analysis:

social assessment, wellbeing; skills, competences, knowledge, communication skills, open-mindedness [11]; human machine interface, cognitive assistance; health, ergonomics, physical assistance, fatigue, stress; dangerous environments, repetitiveness of gestures, prolonged static postures, dust, pollution, vibrations, ….; – safety (fast acting event, accident). These aspects can be taken into account and expanded at workplace, manufacturing system or factory levels. Furthermore, this point can be studied following design of equipment, space organization, design of lay-out, process plan manufacturing operation, tasks scheduling (lean manufacturing), time allowances. In the frame of design for safety, during the design phase of industrial equipment or manufacturing systems, several concepts or methodologies are proposed. Sadeghi et al.’s paper [12] gives a large survey of theories, methodologies and tools. A design framework


A design framework using the Behaviour Energy Structure (BES) model has been proposed by Gomez Echeverri et al. [13]. The deployment of new technologies and organizations introduces hazardous events and health and safety problems which have to be taken into account in the different manufacturing system life situations. New paradigms are emerging: uncertainty of demand in terms of products as well as production rate, product customization, product/service integration, variability in manufacturing processes and times, and reconfiguration of manufacturing equipment. New technologies are also being implemented: robots, virtual/augmented reality, sensors and connected objects. Several tasks are performed by the workers, by the robots, or by both in collaboration. We now speak of Industry 5.0: Industry 4.0 with an added emphasis on ecology and human aspects [14]. The human is integrated in the deployment of cyber production systems in addition to the technical aspects [15, 16]. So the question is: how to ensure simultaneous responses to performance objectives (productivity, flexibility, quality) and to health and safety objectives [17]? In this context, the aim of this paper is to present new issues for workers' health and safety which have to be taken into account as early as possible in the workplace or manufacturing system design or re-design process. We speak of "integrated prevention", which has to be applied from the system requirements through the functional design to the architectural design. Usually, health and safety problems are studied at the end of the design phase or on existing equipment, so the solutions proposed increase complexity, the number of available solutions is limited and the overall equipment efficiency is not optimal. The identification of dangerous situations, and thus prevention, therefore requires identifying the characteristic design parameters related to risk qualification for the operator tasks. A structured methodology that can be used by manufacturing system designers, designers of new production line equipment and production managers is therefore proposed in order to obtain safe working situations. With a similar objective, Pačaiová et al. [18] have proposed a methodology, based on the analysis of industrial cases and 19 requirements for safety, which evaluates the level and effectiveness of current safety measures. This methodology is based on a list of relevant parameters (list of hazards, hazardous situations, risk assessment, …) and on the quantification of probability, consequence and risk. All these data are gathered in a risk matrix for each piece of equipment in order to propose safety measures for risk reduction. Inside the factory of the future, the principal risks for health and safety are those identified in the standards [19]. Nevertheless, their occurrence, causes and severity depend directly on the new context and the new work conditions and situations, particularly in terms of equipment design for re-configurability, human-robot interactions (transport, assembly) and the organization of production or workstations (production control, lean manufacturing, self-organization, time allowances). The literature review shows the significance of the interactions between humans and their environment, the number of factors involved in risk assessment, the need to identify generic qualitative and quantitative parameters, and the need for a methodology to help the designer.


1.2 Our Approach and Hypothesis

To promote research on occupational health and safety and integrated prevention, our two labs have decided to join their strengths in a common laboratory named LC2S. Previous works conducted in this laboratory are based on three hypotheses:

– dangerous phenomena are directly linked to energy flows;
– accidents or diseases occur when a human stands in a dangerous area;
– the knowledge and definition of the operator's real work activities is a good way to design work equipment appropriate to this usage and to integrate safety by helping and protecting the operator in this activity.

The first hypothesis, that dangerous phenomena are directly linked to energy flows, was developed in a thesis work [20]. The result is a method, EZID (Energy Analysis For Systematic Hazard Identification During Design), used to identify hazards on work equipment. To do this, the EZID method asks the designer to describe the future work equipment using a schematic language, the functional-structural model. Each element of the future work equipment (a conceptual element in a system view, or a small and precise part, depending on the level of detail known at the current design phase) is represented by a circle (Fig. 1). The designer must identify the energies entering or leaving each element through its functional surfaces (Table 1). For each element, the designer has to describe whether the energy is transformed, divided or transmitted. Between two elements, the designer has three symbols to describe whether the energy flow link is conductive, semi-conductive or insulating. For the interaction between the operator (defined by a biomechanical energy) and another component, the designer thus obtains a schematic view of the potential hazards and can propose generic preventive solutions (barriers, link removal, topological modification, …).

Fig. 1. Functional-structural model: interaction between a worker and an electric motor [20]
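The functional-structural model lends itself to a simple machine-readable representation. The following Python sketch is only an illustration of this idea under assumed names and energy labels, not the implementation developed in [20]: elements carry generalized energies (Table 1), links are tagged as conductive, semi-conductive or insulating, and a potential hazard is flagged whenever the worker (biomechanical energy) is connected to an element carrying another energy through a non-insulating link.

from dataclasses import dataclass, field

@dataclass
class Element:
    name: str
    energies: set = field(default_factory=set)   # e.g. {"electrical", "thermal"}

@dataclass
class Link:
    a: Element
    b: Element
    behaviour: str                                # "conductive", "semi-conductive" or "insulating"

def potential_hazards(links):
    """List (worker, element, energies) triplets where energy can reach the worker."""
    found = []
    for link in links:
        worker = next((e for e in (link.a, link.b) if "biomechanical" in e.energies), None)
        other = link.b if worker is link.a else link.a
        if worker is None or other is worker:
            continue
        hazardous = other.energies - {"biomechanical"}
        if link.behaviour != "insulating" and hazardous:
            found.append((worker.name, other.name, sorted(hazardous)))
    return found

# Example inspired by Fig. 1: a worker interacting with an electric motor.
worker = Element("operator", {"biomechanical"})
motor = Element("electric motor", {"electrical", "mechanical", "thermal"})
print(potential_hazards([Link(worker, motor, "conductive")]))
# -> [('operator', 'electric motor', ['electrical', 'mechanical', 'thermal'])]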


This EZID method can be used in each phase of a design process: knowing the requirements of the future work equipment, the energy flows can be described using the EZID approach and, consequently, the dangerous phenomena can be identified. The energy flow is observed from a general perspective in order to take into account the various physical phenomena involved (mechanical, electrical, thermal, …) (Table 1). The second hypothesis, that accidents or diseases occur when a human stands in a dangerous area, was also developed in a thesis work. Its aim is to help the work equipment designer or user (in practice, the production manager) take occupational health and safety into account in the different phases of the work equipment lifecycle, such as reconfiguration. On a CAD design, the designer has to [20]:

– first, identify the hazards, using the EZID method;
– then, draw on the CAD file the areas where these hazards have an effect and where they can cause an injury to a human.

The shape of the drawn area may vary over time, during the different phases of the process or of the work equipment usage, and according to the production schedule.

Table 1. Generalized energy and its key parameters

Mechanical hazards: geometry (position and shape); geometry (surface roughness); material; dynamics; potential energy (hydraulic or pneumatic); potential energy (elastic force); potential energy (gravity); vibrations; noise
Electrical hazards: electrical contact
Thermal hazards: conduction; radiation; convection
Radiation hazards: optical radiation; laser; electromagnetic; ionizing
Material / substances: chemical; dust; biological
Ergonomic hazards: biomechanical


An example of the dangerous area shown during the CAD design of a reconfigurable manufacturing line is given in Fig. 2. To address the third hypothesis, the knowledge of the real usage, it is necessary to obtain a good description of the stakeholders (internal and external), of the technical functions carried out by the system and of their interactions. This point is developed in Sect. 2.2.

In green: any interaction between human and AGV. In red: the dangerous area caused by a dangerous device (robot, machine tool, …). In blue: working space of the operator. In yellow: potential dangerous area: simultaneous presence of worker and dangerous component (intersection of blue and red areas). Fig. 2. CAD modeling of a flexible manufacturing line showing the dangerous area and the worker’s reachable space
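The yellow zone of Fig. 2 can also be computed directly from the drawn geometry. As a purely illustrative sketch (not the authors' CAD tool), the following Python code uses the shapely library to intersect a hazard area with the worker's reachable space; the coordinates are arbitrary example values.

from shapely.geometry import Point, Polygon

hazard_area = Point(3.0, 2.0).buffer(1.0)      # red area: 1 m radius around a dangerous device
worker_space = Polygon([(2.5, 0.0), (6.0, 0.0), (6.0, 3.0), (2.5, 3.0)])   # blue area: reachable space (m)

danger_zone = hazard_area.intersection(worker_space)   # yellow area of Fig. 2
if danger_zone.is_empty:
    print("No simultaneous presence of worker and dangerous component is possible")
else:
    print(f"Potentially dangerous area of {danger_zone.area:.2f} m^2 to be treated by the designer")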

2 Design Methodology

2.1 System Engineering Approach

In order to fulfil our objective, we propose a design methodology based on a systems engineering approach for the first steps of the design process, from functional requirements to system structure design (beginning of the embodiment design) (Fig. 3). The left column describes the different phases (collection of functional requirements and key parameters, items involved, precedence requirements, technical choices). In particular, the data relevant to health and safety objectives (generalized energy flows, presence of the operator or of any other person in a potentially dangerous area) have to be collected and gathered in the right column (arrows from the left to the right column). As soon as assumptions on the technical solutions and system layout are known, it is assumed that there is enough information to apply health and safety indexes and rules. If the health and safety requirements (last activity of the right column: data analysis) are not fulfilled, the designer has to choose another technical solution (layout, actuator, …) (arrow from the right to the left column), to add housing, or to change the functional scenarios.


After that, it is necessary to perform an in-depth analysis of the relevant data and their computation in order to provide useful indicators that help the designer in his technical choices.

Fig. 3. Taking health and safety requirements into account during the first phases of the design process
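The back-and-forth between the two columns of Fig. 3 can be summarised as an iteration over candidate technical solutions until the health and safety indicators are satisfied. The toy Python sketch below only illustrates this loop; the data, the acceptance rule and the candidate solutions are illustrative placeholders, not the indexes and rules of the actual methodology.

def fulfils_hs_requirements(solution):
    # Toy rule: every potentially dangerous area must be guarded or out of reach of the operator.
    return all(zone["guarded"] or not zone["reachable"] for zone in solution["danger_zones"])

def select_solution(candidates):
    for solution in candidates:            # alternative layouts, actuators, housings, scenarios...
        if fulfils_hs_requirements(solution):
            return solution                # architectural design can proceed with this choice
    return None                            # otherwise: revise requirements or functional scenarios

candidates = [
    {"name": "layout A",           "danger_zones": [{"reachable": True, "guarded": False}]},
    {"name": "layout A + housing", "danger_zones": [{"reachable": True, "guarded": True}]},
]
print(select_solution(candidates)["name"])   # -> layout A + housing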

2.2 Workstation Layout Design for Workers' Safety Use

A deeper analysis, aimed at ensuring that the workstation design covers any kind of safe use by the worker (the real use, which differs from the prescribed use), has been developed based on the previous methodology. Each function is defined along three lines: to act, to control (analyzing, deciding and ordering), and to assess (measuring, processing and informing). The interactions between requirements and technical constraints, described by the links between the physical or immaterial items of the system and the tasks to be fulfilled, have to be identified (Fig. 4). These interactions are described according to three criteria: goal (purpose, precedence), energy (nature, shape, intensity), and area (geometry, topology, external stakeholders).


The methodology was developed following the framework presented in Table 2. The relevant parameters for the description of resources, stakeholders and functions have to be identified in terms of qualitative and quantitative data in order to obtain the technical solutions (Table 3) of the architectural design. Each task has to be described in detail through the relevant parameters of the resources, stakeholders and items, and by taking their interactions into account. Data collection makes it possible to gather the relevant data linked with health and safety, limited to potential interactions with a human (worker or any other person). During the first phases, the technically relevant information (type of energy, system topology and structure, accessibility, …) linked with the potential uses is not known, but the interactions and the scheduling already give a first idea of the dangerous circumstances.

Fig. 4. Interactions identification

As soon as the energy (kind, flow, gradient), the system structure (topology, potentially dangerous areas) and the worker's interaction (parts of the body involved) are known and defined, the designer can estimate the severity according to the standards [21–24]. At this level, although the designer has only a partial view of the system and of the working situation, he can already make modifications (structure, energy, separators, sensors, controllers, scheduling) in order to reduce the hazard impact. Note that at this level the probability or frequency is not known. This methodology has been applied and validated in the design stage of an experimental flexible system located at INRS, intended for testing health and safety problems in the frame of the Industry 4.0 paradigm. In this case, electrical and mechanical energies, topology, technical functions and scheduling are the main items involved (Fig. 5). Validations in the frame of industrial working situations are foreseen.


Table 2. Methodology framework

Table 3. Quantitative evaluation of each technical solution and examples. Performances, maturity (technical, development, setting-up) and costs (equipment, competences) are scored on a scale from 0 to 3.

Function N° 40: to set up the device in the test area

Solution based on the human operator:
– to act: operator arm (performances 1, maturity 3, costs 3, total 7)
– to control: operator eye (performances 2, maturity 3, costs 1, total 6)
– to assess: operator brain (performances 2, maturity 3, costs 2, total 7)

Solution based on pneumatic and electrical devices:
– to act: pneumatic actuator (performances 3, maturity 2, costs 3, total 8)
– to control: camera (performances 3, maturity 0, costs 3, total 6)
– to assess: PLC and electronic device (performances 3, maturity 0, costs 3, total 6)
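The totals of Table 3 are simply the sum of the three partial scores (performances, maturity and costs, each on a 0–3 scale). The short snippet below, given only as an illustration of how the table is built, reproduces these totals for both candidate solutions.

scores = {  # (line, technical solution): (performances, maturity, costs)
    ("to act",     "operator arm"):           (1, 3, 3),
    ("to control", "operator eye"):           (2, 3, 1),
    ("to assess",  "operator brain"):         (2, 3, 2),
    ("to act",     "pneumatic actuator"):     (3, 2, 3),
    ("to control", "camera"):                 (3, 0, 3),
    ("to assess",  "PLC, electronic device"): (3, 0, 3),
}
for (line, solution), partial in scores.items():
    print(f"{line:10s} | {solution:22s} | total = {sum(partial)}")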

3 Ergonomic Analysis

The use of the previous methodology to carry out an ergonomic analysis at the architectural design level has also been studied. The first step was the choice of the most convenient software for an ergonomic analysis based on a reduced amount of information. It is necessary to break down each task into gestures in order to build the digital model of the interaction between human and machine and then use ergonomic simulation software.


Fig. 5. Graph of interactions associated with our example

Several potential solutions can be used to build this model: virtual reality, allowing quick use; augmented reality, which makes it possible to take loads into account; and motion capture, in order to simulate several operator gestures. Then, several well-known methods and software tools are analysed: OREGE (Outil de Repérage et d'Évaluation des GEstes) [25], RULA (Rapid Upper Limb Assessment) [26], REBA (Rapid Entire Body Assessment) [27], NIOSH (National Institute for Occupational Safety and Health) [28] and OSHA (Occupational Safety and Health Administration) [29]. The ergonomic parameters taken into account are not the same in each assessment tool. Usually a score is defined for a specific body part linked to the occupational risk. These tools need a complete view of the real usage of the manufacturing equipment, but this information is not available during the first phases of the design process, so a complete ergonomic study cannot be undertaken at this level. Nevertheless, in order to help the designer in his choices, a tool using the available information can be useful. It is also possible to carry out some simulations for several usage scenarios based on a workplace sketch. Taking these limitations into account, the tools are compared using a quantitative scale from 1 to 5 for four explanatory parameters linked to occupational safety (frequency, body part involved, load, environment) (Table 4). The frequency, which is given by the production management, is not known, so it is not relevant for our purpose. REBA gives a quick analysis of the whole body postures (static or dynamic), and the load handled is also a significant parameter of its score. The REBA assessment therefore seems to be a relevant solution for a first step in the prevention of occupational risks.


Table 4. Comparison of some ergonomic tools following our objective

                                             OREGE   RULA   REBA   NIOSH   OSHA
Frequency                                      5      2      2      3       4
Environment (temperature, light, noise, …)    1      1      1      1       3
Load handled                                   1      3      3      5       1
Parts of the body (exhaustivity)               3      4      5      1       3
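Since the task frequency is not available at this stage, the comparison effectively reduces to the three remaining criteria of Table 4. Assuming that a higher score indicates better coverage of a criterion, the short snippet below sums those criteria and confirms that REBA obtains the highest total, consistent with the choice made in the text.

table4 = {   # (frequency, environment, load handled, parts of the body)
    "OREGE": (5, 1, 1, 3),
    "RULA":  (2, 1, 3, 4),
    "REBA":  (2, 1, 3, 5),
    "NIOSH": (3, 1, 5, 1),
    "OSHA":  (4, 3, 1, 3),
}
totals = {tool: sum(values[1:]) for tool, values in table4.items()}   # frequency excluded
print(max(totals, key=totals.get), totals)
# -> REBA {'OREGE': 5, 'RULA': 8, 'REBA': 9, 'NIOSH': 7, 'OSHA': 7}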

4 Conclusion, Future Works

This paper presents a structured methodology which has been developed and tested on a test bed, located at INRS, that is representative of human factors in the factory of the future context. This methodology is based on a systems engineering approach and on the identification of characteristic parameters related to risk qualification for the operator tasks, so that they can be taken into account as early as possible in the workplace design. Hence, the methodology is called "integrated prevention". During the first design phases, information is gathered in order to obtain the architectural design and the data relevant to health and safety objectives. Although the designer has only a partial view of the system and of the working situation, he can already make modifications (structure, energy, separators, sensors, controllers, scheduling) in order to reduce the hazard impact. Moreover, in order to take the ergonomic requirements into account during this first stage of the design process and to help the designer in his choices, potential technical solutions (VR, AR, motion capture) can be used to build the digital ergonomic simulation model. Among the large number of available software tools, REBA (Rapid Entire Body Assessment) seems to be a relevant solution for designer decision-making at this level of product development. In-depth studies and validations of the methodology in the frame of industrial working situations or in other life situations (set-up, integration of equipment modifications, demanufacturing, …) are foreseen. Uncertainties and variabilities during the use of the equipment by the worker will increasingly have to be taken into account, so further studies have to be proposed; for this purpose, the most suitable tools among the existing ones [30] are to be chosen.

Acknowledgments. This project is part of the LC2S laboratory (Joint laboratory for the safe design of work situations) supported by INRS and LCFC/ENSAM, whose objective is to provide designers of work equipment with a structured and equipped approach to move from the design of technical systems to the design of work situations, in order to reconcile performance and prevention of occupational risks.

References

1. Literature Review: The Human Machine Interface as an Emerging Risk - European Agency for Safety and Health at Work, 40 (2009). ISBN-13: 978-92-9191-300-8
2. EFFRA 2019: EFFRA vision for a manufacturing partnership in Horizon Europe. www.effra.eu. Accessed 20 Jan 2022


3. Casla, P., Fletcher, S., Johnson, T., Adlon, T.L.: Human-centred factories from theory to industrial practice. Lessons learned and recommendations. ACE Factories White Paper (2019)
4. Kolbeinsson, A., Lagerstedt, E., Lindblom, J.: Classification of collaboration levels for human-robot cooperation in manufacturing. In: Advances in Manufacturing Technology XXXII, pp. 151–156 (2018). https://doi.org/10.3233/978-1-61499-902-7-151
5. Urgo, M., Tarabini, M., Tolio, T.: A human modelling and monitoring approach to support the execution of manufacturing operations. CIRP Ann. Manuf. Technol. 68(1), 5–8 (2019). https://doi.org/10.1016/j.cirp.2019.04.052
6. Neumann, W.P., Winkelhaus, S., Grosse, E.H., Glock, C.H.: Industry 4.0 and the human factor – a systems framework and analysis methodology for successful development. Int. J. Prod. Econ. (2020). https://doi.org/10.1016/j.ijpe.2020.107992
7. Reiman, A., Kaivo-oja, J., Parviainen, E., Takala, E.P., Lauraeus, T.: Human factors and ergonomics in manufacturing in the Industry 4.0 context – a scoping review. Technol. Soc. 65, 101572 (2021). https://doi.org/10.1016/j.techsoc.2021.101572
8. Kaivo-oja, J., Roth, S., Westerlund, L.: Futures of robotics. Human work in digital transformation. Int. J. Technol. Manag. 73(4), 176–205 (2016). https://doi.org/10.1504/IJTM.2017.083074. ISSN 1741-5276
9. Kruger, J., Lien, T.K., Verl, A.: Cooperation of human and machines in assembly lines. CIRP Ann. Manuf. Technol. 58, 628–646 (2009)
10. Murashov, V., Hearl, F., Howard, J.: Working safely with robot workers: recommendations for the new workplace. J. Occup. Environ. Hyg. 13(3), 61–71 (2016). https://doi.org/10.1080/15459624.2015.1116700
11. WMF: Report of the world manufacturing forum, skills for the future of manufacturing (2019). https://worldmanufacturing.org/wp-content/uploads/WorldManufacturingFoundation2019-Report.pdf
12. Sadeghi, L., Dantan, J.Y., Siadat, A., Marsot, J.: Design for human safety in manufacturing systems: applications of design theories, methodologies, tools and techniques. J. Eng. Des. 27(12), 844–877 (2016). https://doi.org/10.1080/09544828.2016.1235262
13. Gomez Echeverri, J.C., Dantan, J.-Y., Godot, X.: Design framework for production systems based on BES model – application in design for safety. Procedia CIRP 99, 484–489 (2021). https://doi.org/10.1016/j.procir.2021.03.068. ISSN 2212-8271
14. Breque, M., De Nul, L., Petridis, A.: Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry R&I. Publications Office of the European Union (2021). ISBN 978-92-76-25308-2
15. Nickel, P., et al.: Human-system interaction design requirements to improve machinery and systems safety. In: Arezes, P.M. (ed.) AHFE 2019. AISC, vol. 969, pp. 3–13. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-20497-6_1
16. Martin, P., Nouiri, M., Siadat, A.: Transition to a sustainable model: societal, economical and environment. In: Cardin, O., Derigent, W., Trentesaux, D. (eds.) Digitalization and Control of Industrial Cyber-Physical Systems, Chap. 3. ISTE Ltd, London; Wiley, New York (2022). https://doi.org/10.1002/9781119987420. ISBN 9-781-78945-085-9
17. Martin, P., et al.: New issues for workers safety in the factory of the future. In: Cavas-Martínez, F., Eynard, B., Fernández Cañavate, F., Fernández-Pacheco, D., Morer, P., Nigrelli, V. (eds.)
Advances on Mechanics, Design Engineering and Manufacturing II. LNME, pp. 402–411. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-12346-8_39
18. Pačaiová, H., et al.: Methodology for complex efficiency evaluation of machinery safety measures in a production organization. Appl. Sci. 11, 453 (2021). https://doi.org/10.3390/app11010453


19. NF EN ISO 12100: Safety of machinery: general principles for design, risk assessment and risk reduction, 93 p. CEN, Brussels (2010)
20. De Galvez, N., Marsot, J., Martin, P., Siadat, A., Etienne, A.: A new approach to hazard identification during the design process by analyzing energy transfers. Saf. Sci. 95, 1–14 (2017)
21. EN ISO 12100 (2010): Sécurité des machines - principes généraux de conception. AFNOR
22. EN ISO 13732-1: méthodes d'évaluation de la réponse humaine au contact avec des surfaces
23. Standard NF C 18-510: opérations sur les ouvrages et installations électriques et dans un environnement électrique - prévention du risque électrique
24. Technical specification ISO/TS 15066:2016(E): Robots and collaborative robots
25. Apfel, M., Lafaurie, S., Tronchet, L., Atain-Kouadio, J.J.: OREGE: un outil simple d'évaluation des facteurs de risque biomécaniques de TMS du membre supérieur. Notes scientifiques et techniques NS 196, Institut National de Recherche et de Sécurité (INRS), p. 122 (2000)
26. McAtamney, L., Corlett, E.N.: RULA: a survey method for the investigation of work-related upper limb disorders. Appl. Ergon. 24(2), 91–99 (1993)
27. Hignett, S., McAtamney, L.: Rapid entire body assessment (REBA). Appl. Ergon. 31(2), 201–205 (2000)
28. Waters, T.R., Putz-Anderson, V., Garg, A., Fine, L.J.: Revised NIOSH equation for the design and evaluation of manual lifting tasks. Ergonomics 36(7), 749–776 (1993)
29. Nelson, N.A., Kaufman, J., Kalat, J., Silverstein, B.: Falls in construction: injury rates for OSHA-inspected employers before and after citation for violating the Washington State Fall Protection Standard. Am. J. Ind. Med. 31(3), 296–330 (1997)
30. Petronijevic, J., Etienne, A., Dantan, J.-Y.: Human factors under uncertainty: a manufacturing systems design using simulation-optimisation approach. Comput. Ind. Eng. 127, 665–676 (2019). https://doi.org/10.1016/j.cie.2018.11.001. ISSN 0360-8352

Design and Manufacture of a Sustainable Recycled Plastic Shredder

José M. Tejero-Olalla, José Macías-García, M. Carmen Ladrón-de-Guevara-Muñoz(B), Óscar D. de-Cózar-Macías, Francisca J. Castillo-Rueda, and Manuel D. Marín-Granados

Universidad de Málaga, Escuela de Ingenierías Industriales, C/ Doctor Ortiz Ramos s/n, 29071 Málaga, Spain
[email protected]

Abstract. The design of this shredder is part of an ambitious project called "MAREA Plastic", which aims to be a meeting point between the university community and society through science and technology by developing its own environment based on the circular economy. The project's challenge is the creation of a portable lab that allows manufacturing new products from recycled plastic waste. To date, this has only materialized in private or temporary environments. A new way of looking at recycling and reuse arises, transferring the whole process to an urban laboratory equipped with the technology necessary to carry out the transformation process in a dynamic and illustrative way. In this context, the redesign and optimization of specific machinery is needed to make the urban lab possible; the extruder and the injector machines are examples of this. However, prior to using these two machines in the converting process, plastic containers need to be shredded into small shavings. The purpose of employing affordable and sustainably obtained materials is none other than to promote social awareness of ecosystem problems and to help the MAREA Plastic project turn the University of Malaga into a benchmark for cultural change.

Keywords: Circular economy · Recycling · Reusing · Plastic waste · Shredder

1 Introduction

1.1 Object

This paper presents a physical prototype of a plastic shredder, a solid machine design to be used in an urban laboratory as part of the 'MAREA Plastic' project. The development of this machine plays a fundamental role in the continuation and correct performance of this research and social awareness project. The prototype itself can be considered sustainable to a great extent, since it is composed of different elements and pieces that had at some point been disposed of. These discarded objects were mostly provided by the University of Malaga.


By reusing them, their useful life is extended, giving them a new purpose for which they were not originally intended. In addition, the machine must also transmit the values inherent to the project, such as raising awareness and education in environmental sustainability, or a deep understanding of the plastic recycling process presented in an illustrative way. The communication between the shredder and the user must be dynamic, without the need for previous knowledge or experience in working with this type of machinery, also considering that it will be shown to a non-specialist audience. Therefore, even if it is a prototype, it is a priority that the visitor can visualize and understand perfectly what the used plastic goes through in the shredder as part of the work chain. All in all, it is of vital importance to ensure the safety of the public and the user at all times, both in the prototype and in the future shredder.

1.2 State of Art and Justification

Drastic changes are required in order to protect the planet and its biodiversity and to ensure a proper ecosystem for future generations. Plastics, microplastics and their accumulation are one of the main sources of pollution. To fight this, more and more projects are being created at an international level so that citizens become involved in improving this situation. According to the World Wide Fund for Nature (WWF), 95% of all the waste floating in the Mediterranean Sea is plastic in its many different forms. Most of it comes from Turkey and Spain, followed by Italy, Egypt and France. Moreover, each year tourism in these countries increases marine litter by 40% in summer [1]. These figures are devastating, and the situation does not improve at a global level. According to Greenpeace, only 9% of all plastic ever manufactured has been recycled, while 12% was incinerated and 79% ended up in landfills or spread over the ecosystem. In addition, 21–54% of all plastic particles in the world are found in the Mediterranean basin [2]. Plastic pollution in the Mediterranean Sea has been widely documented and studied; however, the impact on biodiversity has not been fully explored. Recent studies contribute to the identification of sensitive and understudied areas where marine species may be threatened, in order to find hotspots of special interest [3]. Although science and technology are researching and developing new biodegradable plastics and substitutes, it may be more consistent, for now, to address the sustainability of the material itself, that is to say, plastic waste recycling. This is where the MAREA Plastic project comes in, together with the need for a shredding machine to complete the circular economy chain proposed here. Many people have already tried to provide solutions to this call, for instance by redesigning small manual shredders [4, 5]. Among these solutions, the proposal by Dave Hakkens stands out. He is the founder of 'Precious Plastic', the project from which MAREA Plastic draws its inspiration. Hakkens and his team compiled the information necessary to build machines able to reuse plastic waste and give shape to new products, and released it to the community.


They actually managed to do it in a very simplified way, making the whole recycling process much simpler, but also more manual than what industry allows [6].

2 Methodology

Before describing the methodology applied to the project, there are two important factors to be considered. The first is that the design presented here starts from a basic shredder model, freely released to the community by Precious Plastic, which includes specification data for each piece according to their experience. Secondly, it must be noted that the project has been in constant evolution while the prototype was being designed and built. Regarding the first factor, the Failure Mode and Effects Analysis (FMEA) methodology was adopted, whereby the quality, reliability and safety of a product or process are improved during the development process itself. It analyses where potential failures can occur, their causes and their probability of occurrence. Some of the features born from this methodology are the following: the use of a Variable Frequency Drive (VFD) to control the shredding speed, a closure for the hopper in order to avoid the projection of plastic particles, and the incorporation of a current sensor to indicate whether the motor is working properly. The FMEA methodology makes it possible to focus on the machine's and the user's safety, foreseeing the dangers the user will face and whether the machine is not completely safe, due either to mechanical or to electronic parts. Regarding the second factor, a Design for Manufacture and Assembly (DFMA) oriented methodology was followed, which reduces the number of components to the minimum possible. This method poses four fundamental questions:

2 Methodology Before referring to the methodology applied to the project, there are two important factors to be considered. The first is that the design presented here starts from a basic shredder model, freely released to the community by Precious Plastic which includes specification data for each piece according to their experience. Secondly, it must be known that the project has been in constant evolution while the prototype design and construction was taking place. Regarding the first factor, the methodology of Failure Mode and Effects Analysis (FMEA) could be under the spotlight where the quality, reliability and safety of a product or process would be improved during the developing process itself. It analyses where potential failures can occur, their cause and their probability of occurrence. Some of the features born from this methodology are the following: the use of a Variable Frequency Drive (VFD) to control the shredding speed, a closure to the Hopper in order to avoid the projection of plastic particles, or the incorporation of an intensity sensor to indicate the engine is working properly. The FMEA methodology allows to focus on the machine and user’s safety, being able to foresee the dangers the user will face or whether the machine is not completely safe, either due to mechanical or electronic parts. Related to the second factor, it should be noted that it is a Design for Manufacture and Assembly (DFMA) oriented methodology that would reduce the number of components to the minimum possible. This method poses four fundamental questions: • • • •

• The need for a piece.
• The use of a specific material.
• Movement between components.
• Whether the assembly is hidden between parts.

Thanks to these guidelines, rigid or elastic joints have been used depending on the area and the movement to be carried out. In addition, both standardized elements and pieces of our own design have been employed to ease the machine's assembly and maintenance. It has not always been possible to progress along the same path, due to the lack of manufacturing availability or to the waiting times for the reception of the required components. However, in any case, another aspect was developed in parallel in the meantime, such as the mechanical and structural part, or the safety-related operations and the electronic part. Besides, as this is part of a larger project, sharing information and experience between the different machines has played a key role in its development. There is a certain similarity with Concurrent Engineering (CE) in seeking this convergence in every process development whenever possible. Not only has it been a matter of saving time, but also of making a greater effort in the design to reduce costs and improve quality.


CE allows working integrally and simultaneously with knowledge, resources and experience, employing computerised design methods such as CAD, CAM and CAE.

3 Background

3.1 Market Study

Currently, there are diverse large shredder models on the market designed for the industrial environment. These are large machines meant to be operated by trained technicians and thus do not correspond to the shredder concept MAREA Plastic intends. Furthermore, this kind of shredder does not allow easy transportation due to its high weight of around 250 kg. They are definitely not aimed at promoting recycling awareness among the public, since their industrial character makes them opaque, meaning that the user does not have the chance to see or understand what happens inside them, therefore preventing user participation in the shredding process. Despite starting from the sustainability and circular economy concept, there are some redesigns similar to this project that do not take the opportunity to instruct and illustrate the consumer [7]. Since Precious Plastic released the basic model for the shredder machine, many individuals have taken it upon themselves to build it for various reasons, whether to contribute to the community or to make a profit by selling a product or the machine itself. Not only is the whole machine set considered, but many people and small businesses have also taken the plunge to sell different parts of the shredder. On the Precious Plastic official website there is a 'bazaar' where users can offer products for the machine construction, and even the complete set [8].

3.2 Study of the Basic Model

The shredder is the backbone of the recycling process (see Fig. 1), since it allows the plastic to be shredded into small flakes that can later be washed and stored, or moved into other workspaces to become new products of interest for their users. The components of the basic shredder model are:

3 Background 3.1 Market Study Currently, there are a diverse large shredder models in the market designed for the industrial environment. These are large machines equipped to be operated by trained technicians thus, not corresponding to the shredder concept MAREA Plastic intends. Furthermore, this kind of shredders do not allow easy transportation due to their high weight of around 250 kg. They are definitely not aimed at promoting recycling awareness among the public since their industrial character makes them opaque meaning this that the user does not have the chance to see or understand what happens inside them, therefore, preventing the user participation along the shredding process. Despite starting from the sustainability and circular economy concept there are some similar redesigns to this project that do not take the opportunity to instruct and illustrate the consumer [7]. Since Precious Plastic released the basic model for the shredder machine many individuals have brought themselves to build it for various reasons, whether to contribute the community or to make a profit by selling a product or the machine itself. Not only is the whole machine set considered but many people and small businesses have taken the plunge to sell different parts of the shredder as well. At the Precious Plastic official website there is a ‘bazaar’ where other users can offer products for the machine construction, and even the complete set [8]. 3.2 Study of the Basic Model The shredder is the backbone of the recycling process (see Fig. 1) since it allows the plastic to be shredded into small flakes that can later be washed and stored, or moved into other workspaces to become new interesting products for its users. The components of the basic shredder model are: – – – –

– Geared motor assembly.
– Shredding box.
– Safety hopper.
– Support structure.

'Precious Plastic' also provides data to the community based on its experience (see Table 1). From all the above information, together with the study of the basic model, several conclusions are obtained and presented in the following table (see Table 2).


Fig. 1. Conceptual basic shredder developed by ‘Precious Plastic’. [4]

Table 1. Values according to the experience with the basic model [8]

Nominal power     Nominal torque     Output speed
1,5 kW = 2 HP     300 Nm             ±70 rpm

Table 2. Pros and cons of the previous basic model [8]

Pros                                       Cons
Small and light                            Processing load is low
Possibility to granulate small flakes      The granulation process is slow
Relatively inexpensive                     High maintenance
Portable

3.3 Innovation and New Design Requirements

Based on the study previously carried out on the simplest version of the shredder, and on the initial briefing that enabled the collection of material to be given a second life, some improvements and adaptations of the conceptual machine are proposed in order to fulfil the machine's goal in an optimal and safe way. The improvements are classified by distinguishing three essential parts according to their role in the machine:

• Structure: comprising the support, fastening and transportation of the shredder assembly.


• Mechanics: includes the elements required to transform electrical energy into mechanical energy, obtaining the required torque and output speed at the shredder box shaft.
• Electronics: automation and control of the shredder; the electrical power supply is included in this section as it is managed by a microprocessor.

Structural Function
An iron structure is proposed. It is reinforced at the top, where 30 mm thick chipboard is screwed on in order to serve as a support table for the motor. The lower part must be fitted with another reinforcement that enables the positioning and fixing of the electronic scale used for weighing the shredded plastic. The wooden board needs an inner cut-out matching the size of the lower part of the shredder box, so that the plastic shavings can leave the box and pass through the table into the collecting bucket underneath. This hole shall be protected with a rigid metal mesh that surrounds the edge of the hole and covers a depth of around 20–30 cm; this distance depends on the size of the bucket located under the table to collect the plastic shavings. The mesh hole size should allow the plastic flakes to be seen as they pass through and fall into the collection bucket, and the mesh should be rigid enough to maintain its shape over time, providing the necessary security to prevent objects or people from coming into contact with the blades. The same metal mesh shall be employed to cover the mechanical joints of the assembly: the joint between the motor and the reducer, and the joint between the reducer and the shredder box shaft. These critical transmission points rotate and rely on bolted fastenings that could put the user at risk. The structure must have wheels strong enough to support the load above and to allow the movement and transportation of the assembly. The correct mobility of this machine is very important, as it is expected not only to comply with its operating environment but also to function and be shown to the public in diverse scenarios. A safety hopper made of methacrylate is designed to enable viewing the box interior while the user interacts with the machine through its hatches. To avoid vibrations from the motor-gearbox-blade box assembly being transmitted to the whole structure, "silent-block" type elements shall be used in the base in order to absorb most of the vibrations, shocks and noise. All elements remain fixed to the chipboard table and iron structure by means of screwed joints.

Mechanic Function
It is required to design a part to transmit the movement from the reducer to the blade box shaft, as far as possible reusing discarded material.

Electronic Function and Power Supply
A start/stop button shall be located between the three-phase socket and the Variable Frequency Drive (VFD) so that there is no chance of misleading interpretation by the user. The VFD controls the output speed of the blade box shaft; consequently, it is the first element needed for the machine automation.


A microprocessor controls the whole machine assembly according to a pre-configured setup that includes all mechanical and electronic components. By means of a controller, the connection between the microprocessor and the VFD is ensured. An end-of-stroke (limit switch) sensor on the side gate of the hopper makes the blades brake and the machine stop immediately when the gate is not properly positioned and closed, removing the risk for the user. An electronic scale is required at the base of the structure so that the machine stops when the desired weight of shredded plastic is reached. To prevent further damage in the event of blade blockage, a rotation inverter is installed between the VFD and the motor, so that the shaft rotation can be reversed manually to release the load in the event of failure, and so that the connection between both elements can also be interrupted, whether for testing and setting up the machine or in hazardous situations. A non-invasive current sensor is installed next to the motor and connected to the microprocessor in order to reduce or stop the movement in case the maximum working current is exceeded. This prevents the motor from working beyond its capabilities, thus extending its useful life. Since ensuring safety is essential, serious consideration has been given to the user environment, together with optimizing the machine operation in terms of production and consumption. With all this in mind, and considering the innovative elements of the model and the adaptations to its working environment, the solutions and processes carried out for the prototype development are explained below.
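The interlocks just described (hopper door limit switch, target weight on the scale, maximum motor current) can be summarised in a single supervision rule. The Python sketch below is only a schematic illustration: the threshold values are placeholders and the function is not the firmware actually written for the prototype.

MAX_CURRENT_A = 6.0        # placeholder; the real limit comes from the motor rating
TARGET_WEIGHT_KG = 2.0     # pre-set amount of shredded plastic to collect

def shredder_command(gate_closed, weight_kg, motor_current_a):
    """Return the command sent to the VFD for one supervision cycle."""
    if not gate_closed:                      # limit switch on the hopper side door
        return "stop"                        # blades brake immediately
    if weight_kg >= TARGET_WEIGHT_KG:        # electronic scale under the collecting bucket
        return "stop"                        # desired amount of flakes reached
    if motor_current_a > MAX_CURRENT_A:      # non-invasive current sensor
        return "stop"                        # likely blockage: protect the motor
    return "run"                             # keep shredding at the configured VFD frequency

print(shredder_command(gate_closed=True, weight_kg=0.4, motor_current_a=3.2))   # -> run
print(shredder_command(gate_closed=False, weight_kg=0.4, motor_current_a=3.2))  # -> stop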

4 Solutions and Prototyping Process

A fundamental part of this project is employing discarded materials that can be reused for the shredder construction, thus making technology not only an ally of the project motto but also a transmitter of, and incentive for, the recycling philosophy. In this sense, the end product is not the only result of the circular economy process that carries the message; rather, it becomes part of a continuum where awareness-raising is constant, enhancing the impact on the consumer. After studying the basic model above, an initial briefing was performed to get an idea of what changes or improvements the prototype needed in general terms. Most materials and components available for reuse were provided by various departments of the University of Malaga (Fig. 2). Among the elements reused are the following:

• Motor: loaned by the University of Malaga. According to the identification plate, it is an ASEA electric motor with the following specifications:

– MODEL: MH-90 Lb/4
– Brand: ASEA
– Power: 2 HP (1,5 kW)
– Frequency: 50 Hz


Fig. 2. Motor, reducer and shredder box. [Source: Own elaboration]

– Voltage: 220–380 V
– Speed: 1400 rpm

• Gearbox/Reducer: a lower output speed (rpm) can be obtained without a significant power reduction, increasing the torque in a safe and efficient way. It has a worm gear configuration and was acquired from a gearbox assembly at a scrapyard. Thanks to its modular design, it could be separated from the burnt-out motor and reused without problems, being an entirely mechanical element. Its characteristics are:

– MODEL: SK 1SI63H-IEC90-90L/4
– Reduction ratio (i): 7,5
– Maximum input speed (n1): 1800 rpm
– Output speed (n2): 186 rpm

• VFD: a second-hand component. Improving efficiency and making responsible use of electricity are inherent to the project logic, which is why only the required amount of electricity should be consumed. This is done by matching the motor speed to that required to perform the process. Thanks to the VFD, the motor's energy consumption can be reduced by 20–70%, since the VFD is able to adjust the motor's input speed to the real demand.
• Safety hopper (Fig. 3): this element is similar to a large funnel designed for channelling the plastic material into the shredder box, and it is essential in the prototype design. It is a key piece in direct interaction with the user. Its purpose is to allow feeding material into the shredder in a safe and illustrative way.


In order to achieve that, it was decided that the material should not be opaque, which is why methacrylate was selected. The side door is the most important element, since it allows the feeding process to be controlled by means of the microprocessor, ensuring a safe shredding process. A limit switch device enables the shredder operation only when the side door is closed; that is to say, if the side door is opened during operation, the shredder stops immediately.

Fig. 3. Safety hopper assembly on top of the blades box [Source: own elaboration]

• Electronic scale (Fig. 4): connected to the microprocessor through a converter that links it to a load cell in charge of weighing the plastic stored in the container. This simple configuration enables the control module to stop the shredder when the pre-set weight is reached. Thanks to this exhaustive control, the amount of shredded plastic can be quantified at any moment.
• Non-invasive current sensor: the most important device in terms of machine safety. To simplify the design, manufacture and start-up, this device is non-invasive, as connecting it directly to the wiring would imply adding several control elements, which would increase the prototype's final cost without a substantial improvement in current measurement.
• Microprocessor: the VFD comprises the first automation step in the machine. Its functions are controlled by a microprocessor that manages the configuration, the orders transmitted by an operator or by a process unit, and the data provided by various measurements such as speed, current, amount of shredded plastic, etc.


Fig. 4. Electronic scale assembly. [Source: Own elaboration]

5 Prototype Analysis

After performing the corresponding calculations, the following data is obtained according to some of the aforementioned characteristics (see Table 3):

Table 3. Values according to the prototype theoretical approach [Own elaboration]

Nominal power     Nominal torque     Output speed
1,5 kW = 2 HP     76,6 Nm            ±186 rpm
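The values in Table 3 follow directly from the motor nameplate and the reduction ratio: the output speed is the motor speed divided by the ratio, and the torque follows from the usual relation T [Nm] = 9550 · P [kW] / n [rpm]. The short check below, which also anticipates the i = 40 gearbox discussed in Sect. 7, reproduces both tables up to small rounding differences.

def output_speed_and_torque(power_kw, motor_speed_rpm, ratio):
    n_out = motor_speed_rpm / ratio          # output speed [rpm]
    torque = 9550 * power_kw / n_out         # T [Nm] = 9550 * P [kW] / n [rpm]
    return round(n_out, 1), round(torque, 1)

print(output_speed_and_torque(1.5, 1400, 7.5))   # -> (186.7, 76.7)  cf. Table 3
print(output_speed_and_torque(1.5, 1400, 40))    # -> (35.0, 409.3)  cf. Table 4 in Sect. 7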

The resulting torque is low and the output speed is high when compared to the reference values shown above. However, these results are useful for building the final design of the shredder machine.

5.1 Prototype Experimental Analysis

Two common types of plastic were used to test the prototype: PET, which is present in water bottles as the container itself (but not the cap or the ring at the top), and PLA, the most common plastic for 3D printing nowadays. The procedure consisted in inserting a bottle (without the cap or the ring, since they are made of a different material, HDPE) into the hopper while the blades were in operation, changing the frequency, that is, adjusting the output speed.


Up to a frequency of 30 Hz the blades do not produce enough impact force to tear the plastic. Although at 35 Hz the shredding process is able to strip complete bottles, the blades still tend to block. From 45 Hz the process begins to be performed properly, with less likelihood of jamming or blocking. Taking the frequency to its maximum, 50 Hz, the output speed is so high that the blades become blocked due to plastic accumulating between the blade strips. One problem detected during this testing is that, in general, the blade speed is too high and throws the bottle out of the box, making the task more difficult. The next test was done employing PLA coming from 3D printed pieces. PLA has good mechanical properties compared to standard thermoplastic materials; its hardness, impact resistance and elasticity are similar to those of PET. After these tests, it was found that the density and thickness of the element were key aspects to consider in the process. Shredding at 40 Hz gave the best results for pieces with a low infill percentage. The pieces employed for this test were not very thick (2–3 mm, plate shaped); it was possible to shred higher volume pieces, but still with a low infill percentage. Besides, elements that were 5 mm thick with solid infill would block the blades.

6 Conclusions

On the one hand, the FMEA methodology has been fundamental to propose improvements and innovations with respect to the basic model. On the other hand, the DFMA methodology has further contributed in terms of physically developing the machine, as well as its assembly and the replacement of different components. In such a collaborative project as MAREA Plastic, where diverse machines must work together for a common purpose, correct information transmission becomes essential, as it occurs in CE. After studying the analyses carried out and verifying that it is a functional model, significant conclusions can be drawn about the current prototype, as well as about the development of the electronics, improvements and other adaptations, so that the shredder performs correctly within the project. Firstly, it is noted that the speed reduction and the torque increase are not sufficient, so in the next developments a higher reduction ratio is needed, around 20 or 40 if a similar 2 HP motor is used. The ideal option would be to start from a complete motor-gearbox assembly: since the gearbox is designed to fit the motor, there would be no problem of vibration or internal imbalance. In addition, a 3 HP motor could be used with a lower reduction ratio; that is to say, an output speed greater than 70 rpm may be obtained and then reduced by means of the VFD, keeping the torque around 300 Nm. This is the reason why the VFD is the best option to initiate the machine's automation, as it opens a wide range of possibilities, making it possible to play with these parameters without needing a specific motor or gearbox. It becomes essential to increase the torque to avoid blockages due to accumulation, so that the speed does not need to be increased to have enough force to tear the plastic.


This prototype turns out to be very important in the project development: despite its limited capacity, its operation allows developing the electronics needed for the future machine. The automation required to control the machine starts with this prototype, by creating the individual code for each element proposed for the shredder: limit switch, electronic scale and non-invasive current sensor. The prototype is not especially bulky or excessively heavy, so it can be transported quite easily, complying with one of the main goals of the project: to instruct, disseminate, encourage and engage the public at MAREA Plastic promotional events. Furthermore, it should be noted that no significant amount of the proposed budget needed to be spent on this prototype, which allows the knowledge gained to be applied to the following improvements without having reduced the available funds.

7 Future Prototype Improvements

Recently, a new gearbox was acquired with a speed ratio i = 40. Applying the same theoretical calculation mentioned in the previous section, the following data is obtained (Table 4):

Table 4. Future values required according to the prototype theoretical analysis [Own elaboration]

Nominal power     Nominal torque     Output speed
1,5 kW = 2 HP     407 Nm             ±35 rpm

Considering the above data, a new experimentation stage needs to be carried out employing different materials to verify that the shredder indeed works better under these conditions. However, going beyond the purely mechanical aspects, there are several features that could pose major advantages over the current prototype. For instance, the user interface could be improved by making the configuration data and important notifications accessible to the user directly through a touch screen. Depending on the shredded material, the machine could also be set to work efficiently, without unnecessary energy waste, just by pressing a button to switch from one configuration to another. Thanks to this automation, a record of the shredded plastic could be stored. This option was kept in mind while designing the code and selecting the microprocessor, which is why the latter is able to connect via Bluetooth or WiFi. Nevertheless, for the first versions, a local server connected to a computer over a private network is recommended.

References

1. Alessi, E., Di Carlo, G.: Una trampa de plástico: liberando de plástico el Mediterráneo. Wwf.org, June 2018. Retrieved in July 2021, from https://wwfes.awsassets.panda.org/downloads/trampa_plastico_wwf_definitivo.pdf?47062/Segn-WWF-el-Mediterrneo-est-enpeligro-de-convertirse-en-una-trampa-de-plstico


2. Greenpeace: ¿Cómo llega el plástico a los océanos y qué sucede entonces? Greenpeace.org (2020). Retrieved in July 2021, from https://es.greenpeace.org/es/trabajamos-en/consumismo/plasticos/como-llega-elplastico-a-los-oceanos-y-que-sucede-entonces/
3. Galli, M., et al.: Microplastic abundance and biodiversity richness overlap: identification of sensitive areas in the Western Ionian Sea. Mar. Pollut. Bull. 177, xx-yy (2022). https://doi.org/10.1016/j.marpolbul.2022.113550
4. Aswin, A.K., Maanav Charan, K.S., Mirthul, E.S., Nandhini, C., Sivarajan, S.: Additive manufacturing of a shredder. Paper presented at the Journal of Physics: Conference Series, vol. 2115, no. 1 (2021). https://doi.org/10.1088/1742-6596/2115/1/012049
5. Reddy, S., Raju, T.: Design and development of mini plastic shredder machine. Paper presented at the IOP Conference Series: Materials Science and Engineering, vol. 455, no. 1 (2018). https://doi.org/10.1088/1757-899X/455/1/012119
6. Precious Plastic Global Community: Preciousplastic.com (2016). Retrieved in July 2021, from https://preciousplastic.com/people/global-community.html
7. Rahim, N.H.A., Khatib, A.N.H.M.: Development of pet bottle shredder reverse vending machine. Int. J. Adv. Technol. Eng. Explor. 8(74), 24–33 (2021). https://doi.org/10.19101/IJATEE.2020.S2762167
8. Precious Plastic Marketplace: Preciousplastic.com (2021). Retrieved in July 2021, from https://preciousplastic.com/solutions/bazar.html

Critical Analysis on the Limitations of the Diffusion of Life Cycle Assessment

Christian Spreafico(B), Davide Russo, and Daniele Landi

University of Bergamo, Via Marconi 5, 24044 Dalmine BG, Italy
[email protected]

Abstract. Obtaining a quantification of the environmental impacts of its products is now necessary for a company for several reasons, e.g. planning eco-design interventions, enhancing marketing aspects, or obtaining a certification for trade. Life Cycle Assessment (LCA) is one of the best known and most appreciated methodologies to support this; however, its application in the way that is most useful for the many purposes that may exist at an environmental level struggles to establish itself in companies. To understand how to improve the application of LCA, this paper systematically analyzes and classifies the many problems identified by the authors during research activities on the subject in various companies from 2010 to today. Compared to other approaches in the literature, in this case the database is much larger and more heterogeneous, and the authors have full knowledge of it, having personally contributed to creating it. The result is a set of problems on the application of LCA, divided into different classes: motivations, inventory, impact calculation, and interpretation of the results. Finally, a set of intervention strategies is proposed by the same authors to limit these problems.

Keywords: Life cycle assessment · LCA · Eco-design

1 Introduction

Life Cycle Assessment (LCA) is an internationally structured and standardized method for estimating the impact of a product or service on the environment and on human health. Although the topic is very current, the first LCA studies date back to the late 1960s. The application and diffusion of this methodology is, and has always been, strongly influenced by geopolitical factors, first of all the oil shocks of 1973 and 1979, which prompted many companies to invest in methods to reduce the energy consumption of their internal production processes. The first definition of LCA dates back to 1990 and is attributed to SETAC (Society of Environmental Toxicology and Chemistry); it defines LCA as the evaluation of the environmental loads associated with a product, a process or an activity, through the identification and quantification of the energy and materials used and of the waste released into the environment. In 2006, the ISO 14040 [1] and ISO 14044 [2] standards modified the definition by introducing the concept of "potential impacts on the environment of a production system".


The LCA methodology has been selected, instead of the more qualitative approaches adopted in other studies, because of its proven usefulness and reliability in quantitatively evaluating the sustainability of current technologies, in critically discussing the choices to implement during eco-design and in evaluating the environmental performance of newly developed technologies [3]. In extreme synthesis, according to ISO 14040 [1] and ISO 14044 [2], it consists of the following steps: goal, scope and operative scenario definition, to identify the technical system to be measured and the motivation for performing the analysis; data collection about the sources of impacts; environmental impact assessment according to standard indicators; and interpretation of the results. In an LCA analysis, the potential impacts of the entire life cycle of the product, process or activity are considered, from the extraction and treatment of raw materials, passing through manufacture, transport and distribution, use, reuse and maintenance, up to final disposal.

In the following years, various sectors developed their own certifications, derived in some way from the LCA. For example, in Italy between the end of the 1990s and the first years of this century the Leed (1998) and Casaclima (2002) certifications began to spread to push sustainable construction. The LCA methodology also allows us to quantitatively evaluate the commitments and responsibilities that the international community has produced to combat climate change, in accordance with the objectives of the various international protocols, e.g. [4–6]. The first famous climate agreement was the drafting of the Kyoto Protocol of 1997, which required a 5% decrease in greenhouse gas emissions compared to 1990 levels, to be achieved by 2012. Another milestone was COP21, the Paris Agreement of 2015, in which the whole international community committed to keeping the total temperature increase well below 2 °C, and possibly within 1.5 °C. The agreement also establishes the obligation to drastically reduce emissions in order to reach net zero by 2050. The common goal is to reach as soon as possible a situation in which few greenhouse gases are produced and these can be completely reabsorbed by forests, oceans and carbon capture and sequestration technologies. The paradox is that proven methodologies and in-depth studies on the impacts of our actions on the environment are available but poorly applied, despite ever increasing local and European funding.

In this paper, the authors systematically analyze and classify the problems encountered in the introduction and systematic use of LCA in companies for eco-design in the context of product development. The considered knowledge base is the experience of the Bergamo research group in this area, matured since 2010 through collaborations with public entities and private companies in Italy in the drafting of green documents, EPDs and LCAs. These problems in the strategic adoption of LCA have been considered both at a practical level, e.g. data collection, impact calculation and results interpretation, and at a contextual level, in relation to a cultural approach aimed more at obtaining a certification than at a real change of the design paradigm. Alongside the various problems collected, suggestions were made to limit them and encourage greater application of LCA.


This study differs from previous studies in the literature that investigated these issues (see Sect. 2) thanks to a broader survey perspective, based on a much longer data collection time interval and on a much larger and more heterogeneous set of case studies.

This paper is structured as follows. Section 2 presents the studies from the literature that have already collected problems about LCA application in industrial practice. Section 3 presents the methodology followed to determine the LCA problems in several real industrial case studies. The obtained problems and the proposed suggestions to limit them are presented and discussed in Sect. 4. Finally, Sect. 5 draws the conclusions.

2 Literature Background

In the literature, several authors have collected the problems encountered in compiling an LCA and the limits to its actual implementation for eco-design. Many of them believe that the limited practical application of LCA is due to methodological problems that distort the real quantification of the impacts, thus failing to provide a reliable assessment framework for the implementation of corrective actions [7]. Among the various problems identified in this sense, there is the temporal dependence of the analysis and the lack of universal standards that make it possible to set a reference time for data collection, e.g. [8]. Another limitation concerns the way in which the system boundaries are defined, e.g. [9]. Still another concerns the lack of flexibility of the analysis in dealing with geographical and system constraints, such as changes in the electricity mix and climatic parameters, e.g. [10]. Other problems concern the difficulties in using data from the scientific literature for performing LCA, especially when data is missing in traditional databases, e.g. [11, 12].

Other authors have instead focused on the limits in the practical implementation of LCA results in design (e.g. [13–15]). Chief among them are the difficulty of reading the results, due to the type of representation adopted, and the lack of results useful for a real understanding of the eco-design problems to be faced. Another cause is that both novice and expert designers may misinterpret results correctly provided by an LCA, distorting their perception due to mental inertia and preconceptions, e.g. [16, 17].

While all of these contributions have helped to highlight many typical problems that need to be worked on to ensure the targeted use of LCA in eco-design, their main limitation concerns reliability. In fact, the application domains from which the LCA problems have been extracted are almost always too narrow and specific. In this way it is not possible to understand whether these problems are general or depend on the specific field of application and on the boundary conditions.

3 Methodology

The issues related to the implementation of LCA for eco-design in industry and the proposed intervention strategies to limit them were obtained by following a systematic methodology (see Sect. 3.1) on various industrial case studies (see Sect. 3.2).


3.1 Used Approach

The systematic procedure used to identify LCA issues and propose intervention strategies was structured through the following steps:

1. Analysis of the industrial problem: the context of application and the reasons for which the company commissioned the LCA (e.g. marketing reasons, obtaining a certification for the sale, planning an eco-design activity) were analysed. In this way, the issues identified in the next steps can be properly read.
2. LCA on the considered case studies, carried out by the university team, using the data about the products and the processes collected with the company and integrating data from the literature about the operative context.
3. Identification of eco-improvement solutions to limit the main sources of impact in the product/process life cycle that emerged from the LCA results, by using selected design and eco-design methodologies.
4. Interviews with professionals in the companies to collect their perceptions about the LCA results and to learn about their interpretation and use, just before the analysis and after a certain time period during which the product is improved on an environmental level.
5. Comparison of the eco-design interventions carried out on the products/processes by the experts in the company and by the Bergamo research group (during step 3) to solve/limit the main environmental problems. In this way it is possible to evaluate the different ways of acting on an environmental problem according to different procedures and objectives.

3.2 Considered Case Studies

The considered case studies belong to different application sectors. They can be roughly classified as follows:

• Class A/A++ appliances spread on the Italian and European market: electric and gas oven, induction and gas stove, coffee maker, dish washer.
• Small-medium sized industrial machinery for the production of bitumen for asphalts, for the production of pellets for domestic heating, for washing dishes in restaurants, and two different types of low voltage industrial switches.
• Large industrial machinery for the oil & gas industry: compressor, adsorption dryer for compressed air, and four different types of medium voltage industrial circuit breakers with SF6 insulation.

4 Results and Discussion

In the following, the main limitations to the application of LCA for eco-design, collected from the company case studies, are reported, classified and discussed (see Sect. 4.1), and some intervention strategies are proposed (see Sect. 4.2).


4.1 Identified Problems

The identified problems were classified according to the main points of the used method (presented in Sect. 3.1).

Motivations. The primary purpose of an LCA should be to introduce improvements in environmental performance so as to minimize the impacts of production on the environment. However, in the considered case studies, this was almost never the primary driver of the investment. LCA was rather seen as a communication tool, especially in those cases where a potential "green consumer" market is developing. Another driver was keeping up with competitors who had chosen to obtain environmental certificates for their products. Other reasons were: accessing public funding or grants from banking foundations dedicated to the green economy; obtaining economic savings through energy optimization; coping with regulations that restrict the use of some materials; comparing alternative products or systems that perform the same function from the point of view of environmental performance.

Inventory. The Life Cycle Inventory has a considerable business cost and should be faced through a proper management plan. However, the companies of the considered case studies did not have a rigorous process mapping with full control of the flows, even though they had already started the transition towards Industry 4.0, which should help in this sense. The main consequences of this lack were very high costs in terms of time, required skills (e.g. knowledge of the normative aspects) and tooling (e.g. specialized software). In addition, these costs increased for companies with many products in their catalogue.

Impacts Calculation. During impact assessment, a proper level of data aggregation should be selected in order not to compromise the level of design detail needed to plan the most suitable eco-design interventions. Otherwise the designer cannot access all the necessary information, e.g. the environmental impact of a certain critical component. This problem is even more evident when the analysed product is complex and the BOMs are very large and difficult to manage. To avoid this, it is therefore necessary to set the calculation of the impacts with the right level of detail according to future interventions. Otherwise, the risk is to produce many EPDs without being able to read the data in a proactive perspective, as happened in some of the analysed cases.

Interpretation of the Results. In the considered companies, the skills to correctly interpret the results were usually not available. However, the interpretation of the results is the most important strategic step in the LCA, because it allows identifying the items on which to intervene to improve the current situation through eco-design. Another shortcoming that was consistently noted was the lack of a proper visualization of the obtained results. As a result, several people involved in decision-making processes struggle to access and understand this data.

Table 1 provides details about the considered case studies in terms of sectors, products and observed issues.

Table 1. Details about the considered case studies and the observed issues.

• Class A/A++ appliances
  – Electric and gas oven, induction and gas stove: uncertainties during the use phase due to users' misuse
  – Coffee maker: use phase difficult to map due to the variety of compatible components made by other companies (e.g. types of pods and capsules)
  – Dish washer: lack of representative standards to model the use phase
• Small-medium sized industrial machinery
  – Production of bitumen for asphalts, production of pellets for domestic heating: lack of information relating to the use phase (location and modalities) carried out by customers
  – Washing dishes in restaurants: lack of representative standards to model the use phase
  – Low voltage industrial switches: too disaggregated data entry items in the bill of materials
• Large industrial machinery for the oil & gas industry
  – Compressor, adsorption dryer for compressed air: uncertainties during the use phase due to lack of knowledge about physical modelling
  – Medium voltage industrial circuit breakers with SF6 insulation: bill of materials too rich in components and too complex

The main limitation of this work is that the case studies have been conducted by the university team, acting as a consultant with a high level of experience in the field, for the industrial companies. The obtained results could be different if the companies were in charge of the LCA implementation themselves. To provide an idea of what these differences could be, one can consider the comparison on the interpretation of eco-misperceptions between novice and expert designers shown in [17].

4.2 Proposed Strategies of Intervention

To respond to these problems, several proposals have been introduced by the Bergamo research group. The real goal is to make LCA part of people's shared know-how, so that engineers are always aware of the impact their decisions have on the environment during design. One of the proposals is to support the interventions during the design phase by integrating the assessment of environmental impacts into the CAD environment. In this regard, good planning rules were introduced.


One focus is designing the end of life with the same attention with which the regular functional operation of the product is designed [18]. Other, more specific, integration issues are the choice of materials for the circular economy [19], privileging those of biological origin, minimizing the use of energy resources, and integrating methodologies such as TRIZ, which aim to make systems work only where and when they are needed [20]. Another proposed intervention was the improvement of the visualization of the results, associating costs with the environmental impacts in the LCA and selecting the proper visualization approach, e.g. infographics, to help R&D technicians who are less expert in environmental assessment to independently make their own evaluations. In addition, the evaluation of the impacts in real time during design and the use of bio-inspiration during eco-design (to conceive multifunctional systems, to minimize the use of resources, to make adaptive and self-healing systems) were also planned. These concepts were included in eco-guidelines to support novice eco-designers [21]. These guidelines are sufficiently general to cover every kind of problem and at the same time sufficiently specific to bring the user closer to the solution without requiring too much personal inspiration. In addition, they exploit a rigorous ontology indicating how to apply a specific problem-solving strategy to a specific part of the problem, trying to make the user aware of the environmental consequences of their design changes. These interventions are also associated with the introduction of LCA concepts in the various Engineering courses, in the course of product and process innovation and in the course of methods and tools for the product life cycle.
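To give a concrete flavour of what integrating impact assessment into the design workflow can mean, the following minimal Python sketch aggregates the climate-change impact of a bill of materials while keeping per-component detail visible to the designer. It is only an illustration, not the authors' actual tool: the impact factors are placeholder values, and a real implementation would retrieve them from an LCA database.

```python
# Hypothetical, simplified impact factors (kg CO2-eq per kg of material);
# a real tool would read them from an LCA database used by the company.
IMPACT_FACTORS = {
    "steel": 1.9,
    "aluminium": 8.2,
    "ABS": 3.1,
}

def component_impact(material: str, mass_kg: float) -> float:
    """Cradle-to-gate climate-change impact of a single component (kg CO2-eq)."""
    return IMPACT_FACTORS[material] * mass_kg

def bom_impact(bom):
    """Aggregate a bill of materials while keeping per-component detail,
    so the designer can still see which items drive the total."""
    detail = {name: component_impact(mat, mass) for name, mat, mass in bom}
    return detail, sum(detail.values())

if __name__ == "__main__":
    # Illustrative BOM: (component name, material, mass in kg)
    bom = [("frame", "steel", 12.0), ("housing", "ABS", 1.5), ("heat sink", "aluminium", 0.8)]
    detail, total = bom_impact(bom)
    for name, value in sorted(detail.items(), key=lambda kv: -kv[1]):
        print(f"{name:10s} {value:6.1f} kg CO2-eq")
    print(f"{'TOTAL':10s} {total:6.1f} kg CO2-eq")
```

Keeping the aggregation at component level, as in this sketch, is precisely what allows the impact calculation to be read in a proactive perspective instead of producing a single, opaque total.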

5 Conclusions

In this paper, several problems related to the application of LCA in companies have been determined on the basis of an analysis of case studies approached by the same authors during collaborations with companies starting from 2010. All problems have been identified through a systematic procedure: analysing the industrial problem, carrying out the LCA, proposing eco-design solutions, inquiring about the use of the LCA results in the company and comparing the indications proposed with those actually implemented. The collected problems were then classified, presented and discussed. Finally, some strategies have been presented to limit these problems.

From the analysis of the results obtained, it emerges that the LCA methodology is known and applied by companies, but more as a routine practice, to certify their products for sale and marketing, than as a basis for carrying out an eco-design activity. The reasons are various, e.g. the choice of a level of detail that is too aggregated to correctly understand the real environmental problems to be faced, the lack of planning, and the difficulty in understanding the results obtained. The strategies proposed to limit these problems, in addition to guiding the technicians towards a more strategic use of LCA in the product development process, also have the merit of showing the gap between this scenario and the current situation, providing a more concrete idea of the identified problems.


References

1. ISO 14040:2006 Environmental Management – LCA – Principles and Framework (2006a)
2. ISO 14044:2006 Environmental Management – LCA – Requirements and Guidelines (2006b)
3. Hauschild, M.Z., Rosenbaum, R.K., Olsen, S.I. (eds.): Life Cycle Assessment. Springer International Publishing, Cham (2018). https://doi.org/10.1007/978-3-319-56475-3
4. Bianco, I., Thiébat, F., Carbonaro, C., Pagliolico, S., Blengini, G.A., Comino, E.: Life Cycle Assessment (LCA)-based tools for the eco-design of wooden furniture. J. Clean. Prod. 324, 129249 (2021)
5. Albertí, J., et al.: Life cycle assessment of a solar thermal system in Spain, eco-design alternatives and derived climate change scenarios at Spanish and Chinese national levels. Sustain. Cities Soc. 47, 101467 (2019). https://doi.org/10.1016/j.scs.2019.101467
6. Rouault, A., Perrin, A., Renaud-Gentié, C., Julien, S., Jourjon, F.: Using LCA in a participatory eco-design approach in agriculture: the example of vineyard management. Int. J. Life Cycle Assess. 25(7), 1368–1383 (2019). https://doi.org/10.1007/s11367-019-01684-w
7. Cherubini, E., Franco, D., Zanghelini, G.M., Soares, S.R.: Uncertainty in LCA case study due to allocation approaches and life cycle impact assessment methods. Int. J. Life Cycle Assess. 23(10), 2055–2070 (2018). https://doi.org/10.1007/s11367-017-1432-6
8. AzariJafari, H., Yahia, A., Amor, B.: Removing shadows from consequential LCA through a time-dependent modeling approach: policy-making in the road pavement sector. Environ. Sci. Technol. 53(3), 1087–1097 (2019)
9. Gibon, T., Schaubroeck, T.: Lifting the fog on characteristics and limitations of hybrid LCA—a reply to "Does hybrid LCA with a complete system boundary yield adequate results for product promotion?" by Yi Yang (Int J Life Cycle Assess 22(3): 456–406). Int. J. Life Cycle Assess. 22(6), 1005–1008 (2017)
10. Antelava, A., et al.: Plastic Solid Waste (PSW) in the context of Life Cycle Assessment (LCA) and sustainable management. Environ. Manag. 64(2), 230–244 (2019). https://doi.org/10.1007/s00267-019-01178-3
11. Spreafico, C., Russo, D.: Assessing domestic environmental impacts through LCA using data from the scientific literature. J. Clean. Prod. 266, 121883 (2020)
12. Thonemann, N., Schulte, A., Maga, D.: How to conduct prospective life cycle assessment for emerging technologies? A systematic review and methodological guidance. Sustainability 12(3), 1192 (2020)
13. Musacchio, A., Vicarelli, M., Colantoni, S., Bartocci, P., Fantozzi, F.: Application of design-for-LCA methodology to compare architectural alternatives for the compressor rotor of an industrial gas turbine. In: Turbo Expo: Power for Land, Sea, and Air, vol. 58608, p. V003T03A015. American Society of Mechanical Engineers (2019)
14. Deng, J., Du, L., Xia, J., Li, Y., Cai, X.: Eco-design learning framework on integrating life cycle assessment into engineering courses for sustainability competency enhancement. IEEE Access 9, 160596–160606 (2021)
15. Navajas, A., Uriarte, L., Gandía, L.M.: Application of eco-design and life cycle assessment standards for environmental impact reduction of an industrial product. Sustainability 9(10), 1724 (2017)
16. Spreafico, C., Landi, D.: Investigating students' eco-misperceptions in applying eco-design methods. J. Clean. Prod. 342, 130866 (2022)
17. Spreafico, C., Landi, D.: Using product design strategies to implement circular economy: differences between students and professional designers. Sustainability 14(3), 1122 (2022)
18. Russo, D., Rizzi, C.: Structural optimization strategies to design green products. Comput. Ind. 65(3), 470–479 (2014)


19. Spreafico, C., Spreafico, M.: Using text mining to retrieve information about circular economy. Comput. Ind. 132, 103525 (2021)
20. Spreafico, C.: Quantifying the advantages of TRIZ in sustainability through life cycle assessment. J. Clean. Prod. 303, 126955 (2021)
21. Russo, D., Spreafico, C.: TRIZ-based guidelines for eco-improvement. Sustainability 12(8), 3412 (2020)

Implementation of a Sustainable Washing System Applied to Waste Recycling

Álvaro Millán-Estepa, Elena Díez-Martínez, M. Carmen Ladrón-de-Guevara-Muñoz(B), Óscar D. de-Cózar-Macías, E. Beatriz Blázquez-Parra, and Isidro Ladrón-de-Guevara-López

Universidad de Málaga, Escuela de Ingenierías Industriales, C/Doctor Ortiz Ramos S/N, 29071 Málaga, Spain
[email protected]

Abstract. The Precious Plastic project proposes a decentralized way of recycling plastic waste, giving it value and catching consumers' eyes by manufacturing beautiful objects and products from that waste. The project is spreading over the world, but it does not include a method or solution for cleaning domestic plastic products when they are collected. In this paper, an efficient and economical washing system is proposed that helps save water in the recycling process, makes it more sustainable, improves manufacturing quality and closes the circular economy loop; its application in other areas is also suggested. In addition, an open-source microfilter adapter for domestic washing machines is presented, aiming at reducing microplastic contamination at the source of the problem and suggested as a temporary solution to the microplastics contamination problem.

Keywords: Plastic waste · Recycling · Wash · Water · Circular economy · Sustainability · Contamination · Microplastics

1 Introduction

There is no doubt that more and more industries and their processes must become sustainable and environmentally friendly. Many products and business techniques are being redesigned to meet the responsibility of environmental stewardship. New ideas and alternatives to tackle different problems are constantly emerging. This is the case of the Precious Plastic [1] project, which, with a social and community perspective, facilitates the dissemination of recycling techniques by distributing tools and knowledge so that anyone can build a local recycling point. However, when looking closely at its documentation, certain weaknesses and inefficiencies that directly affect the impact generated by the recycling activity can be identified. Precious Plastic proposes a set of at least two machines: the shredder, which converts the plastic into small particles, and the extruder or injector, which shapes these particles and creates new products (Fig. 1).


Fig. 1. Precious plastic workshops around the world [Source: precious plastic website, map section]

Energy and consumables are used in the process, resources that may also come from unsustainable sources. Precious Plastic advises on this but does not provide the knowledge for the maker to take this aspect into account, especially regarding waste management: no established method for washing waste is found. This research starts from this problem, as the water used in the process may not be an accessible resource for everyone, and bad practice would increase its consumption almost to the point of making the recycling process inefficient. Poor waste management implies:

• More maintenance of machinery.
• Reduced machine life and reliability.
• Loss of quality in production.

To address this problem, the design and development of an energy- and environmentally-sustainable plastic waste washing machine is proposed. This system is also compatible with other problems, for example, domestic washing of clothes or utensils in regions where water or electricity is limited. Following the line of collective knowledge and globality proposed by Precious Plastic [Precious Plastic Website, Solutions section], and after further testing, the system will be documented and placed in the hands of the community for its development, improvement and replication. The proposed solution is a redesign of a domestic washing machine to which a water filtering system is added that conserves the water over a certain number of uses, taking full advantage of the resource. Furthermore, the system is modular and customisable so that each maker can adapt the design to their context and workload. In the design process, different existing solutions are discussed, and the idea is developed and validated through small experiments.


1.1 The Microplastic Problem

Microplastics are usually defined as particles less than 5 mm in size that come from plastic products manufactured by industry. Throughout and at the end of their life, many products have generated and continue to generate microplastics, which end up in environments such as the air or the sea. One of the most important sources of microplastics is textile products: when they are washed or rubbed, fibres are released [2]. As the washing system proposed in this paper uses a domestic washing machine adapted to make it more sustainable, solutions and improvements compatible with the problem of microplastics and microfibres are also proposed. As mentioned before, Precious Plastic's machines process plastic waste with the aim of creating new products. Implicit in this process is the use of water to prepare the material before processing, and this is when microplastics are produced that contaminate the water used. This leads to extra contamination and makes recycling slightly less sustainable.

2 The Washing System

2.1 Experimentation

To ensure an effective solution and to save budget, materials and energy, a small experiment was designed. The experiment is intended to validate pre-established hypotheses and to obtain sufficient information to design the washing system, which should be:

• Effective and efficient – cleaning as much as possible with the minimum energy and consumables.
• Sustainable – generating the minimum waste possible.

Set up
A scale model set-up was assembled using ¼" pipes, a small pump with a flow rate of 550 L/h and a fillable filter, as shown in Fig. 2. The complete assembly is shown in Fig. 3, together with the set-up scheme. The filters used in this experiment are a DIY sand filter and a commercial Activated Carbon (AC) filter, although other filters, such as a glass-particle one, were also tested. It was decided to focus on sand and AC (as optional) because these filters are easy to access. The main reason to use AC was to remove odours, as this is a requirement for the workspace where the system will be installed, and also to extend the useful life of the water and filters. Another requirement was to provide clean water in the workshop that may help other steps of the recycling process.


Fig. 2. Set-up of the experiment [Source: own elaboration]

Fig. 3. Set-up scheme [Source: own elaboration]

Dirty Water Solution
A mixture of pollutants was prepared, composed of common fouling agents in consumer plastics: 80% organic substances, 10% lipid substances and the rest a random mixture of common chemical agents. This solution (Fig. 4) represents the typical dirtiness that can be found in domestically used plastic [3].

Cycles
Each cycle was conducted from the source, the dirty water solution in the main container (Fig. 3), to the result of filtration in the output container. In each of them a sample was extracted. A turbidity sensor and a pH sensor were used to gather data about the experiment. These data are sufficiently representative to verify the cleaning effect of the filters tested. First, cycles were carried out only with the sand filter to check its effectiveness. Then both filters were employed consecutively. AC-only tests were not conducted because this filter is a special requirement of the lab. The accuracy of the turbidity sensor is ±5 NTU¹ and its effective range is from 0 to 4550 NTU. The factory datasheet was used to adjust the values according to the laboratory temperature of 27 °C.

Sand cycle – Cycle I and Cycle II
Turbidity readings show a large drop from 1954 NTU to ~800 NTU in the first cycle.

¹ Nephelometric Turbidity Unit.


Cycles were conducted without pressure, so the sand filter action was accomplished by gravity-driven flow. In the second cycle the turbidity drops to ~600 NTU. Note that an effective particle filter was not assembled, which allowed the previous sample to contaminate the cleaned water, as shown in Figs. 5 and 6; therefore this cycle did not help much in reducing turbidity, but it shows the limit of the sand's filtering capacity.

Fig. 4. Dirty solution [Source: own elaboration]

Fig. 5. Consequences of an insufficient particle filter [Source: own elaboration]

The acidity of the fluid was also measured and was not significantly affected. The measured value was pH 4.8, which is usual behaviour in sand filtering [3]. As the water is not intended for drinking or irrigation, acidity is not a critical parameter.

Consecutive Filters – Cycle III
Firstly, a gravity-only test was conducted; then, by means of a pump, extra pressure was applied to accelerate the process. A new sample was prepared for this experiment (Experiment I).

Fig. 6. Turbidity drop in both experiments in different cycles [Source: own elaboration]


For Experiment II, a less dirt-concentrated solution was used. The sand filter stages were conducted using gravity alone, while in the AC filtration the extra pressure of the pump was used. In this second attempt, the sand filter worked better than in the previous test since the particle filter was improved. Figure 7 shows the turbidity changes in each cycle:

Fig. 7. Turbidity samples. From left to right: Sample sand filter cycle alone (gravity flow), sand filter cycle (gravity flow) plus a cycle with AC filter (pump powered). [Source: own elaboration]

Observations
• Gravity filtration through the sand filter is slow but effective. The observed results show that gravity filtration is more effective than using the pump, although it takes much longer (and does not consume extra energy).
• As mentioned before, AC filtration with extra pressure proved less effective than the gravity option.
• The sand filter is sufficient to keep the water suitable for washing.
• The AC filter significantly improves water quality, thus extending the filter's lifetime, and even gives better odour results.
• Additional aspects should be considered to improve the washing procedure and to eliminate microbes and other metal/chemical substances, as recommended in [3], but the result is acceptable for using the water to clean plastic parts (and not for hygienic use).
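As a side note on the instrumentation, the NTU values reported above come from an analog turbidity sensor. The minimal Python sketch below shows a typical way of converting the sensor's output voltage into NTU; the quadratic coefficients are placeholders in the shape of a curve commonly published for hobby-grade sensors, not the calibration actually used here, which must be taken from the factory datasheet and corrected for the 27 °C laboratory temperature.

```python
def voltage_to_ntu(v_out: float) -> float:
    """Convert the analog output (V) of a low-cost turbidity sensor to NTU.

    The quadratic calibration curve below is a placeholder; the real
    coefficients must come from the factory datasheet, adjusted for the
    laboratory temperature (27 degC in this set-up).
    """
    ntu = -1120.4 * v_out**2 + 5742.3 * v_out - 4352.9
    # Clamp to the sensor's stated effective range (0 to 4550 NTU).
    return max(0.0, min(ntu, 4550.0))

if __name__ == "__main__":
    for v in (2.5, 3.0, 3.5, 4.0):
        print(f"{v:.1f} V -> {voltage_to_ntu(v):7.1f} NTU")
```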

3 Premises for a Sustainable Washing System

With the information gathered in the previous experiments, the system can now be designed with far more knowledge and confidence.


3.1 Conceptualization and Ideas

The challenge is not only to design a machine that cleans plastic, but one that is also sustainable, easy to use and maintain, cleans effectively and can be properly integrated into the workshop.

The Washing Processes (Fig. 8)
Depending on the source and condition of the plastic waste, different approaches should be considered. It is important to note that a pre-washing task is sometimes recommended before mixing the dirtier plastic with the rest, in order to extend the filters' lifetime. The service life of the filter has not been tested as it depends on a random factor, namely the degree of soiling of the plastic waste. Other facts must be considered, e.g. some plastic packaging should be shredded before washing, and labels or non-plastic/different-plastic components have to be separated.

Fig. 8. Flow charts of the washing process [Source: own elaboration]

Washing already shredded plastic is the most desirable option because it yields better results than washing containers, packaging or entire products, since the shredded particles/flakes experience more friction inside the machine drum.


Design Specifications and Solutions

Efficiency:
• Gravity must preferably be exploited before adding water pumps. As a result, the filling of the washing machine drum must be optimal, and extra water must be available to make up for the used water that has not yet been filtered.
• A water- and power-efficient domestic washing machine will be preferred over building a custom machine. This ensures the quality of the wash and avoids possible associated problems.

Universality:
• To employ standardized and commonly used parts and dimensions from global plumbing applications.
• Special/extra parts should be made with an accessible manufacturing process/machine.

Usability:
• To provide clean water to workshop operators.
• To have a pre-washing process for the dirtiest plastics and those that need pre-treatment before shredding, ensuring the washing effect.

Sustainability:
• To take care of the by-products released in the process. The sand of the filter could be reused to fill and add weight to the products made in the workshop.
• To extend the useful life of the water by employing it in as many cycles as possible.
• To avoid the use of chemical cleaning products.
• To avoid the use of extra electric energy.

3.2 Construction

A reused domestic laundry machine, a Fagor FE-810, was chosen as the starting point. This kind of machine has everything that is needed for effective cleaning:

• Water heating
• Centrifugation at variable rotational speed and drying
• Pumps to manage the fluid
• Controlled electricity consumption

To ensure high efficiency, the laundry machine's pump will be the only one to move the fluid. The only way to make the circulation of the fluid reliable is by using gravity; thus, this main idea conditions the system design. Considering that laundry machines are designed to work only when a minimum water pressure is supplied, the water feeding system was modified using servo-valves instead of electrovalves.

For the water to circulate and fill the drum of the washing machine, large diameter connectors (racors), pipes and valves (1" and Ø 25 mm respectively) were used.


In addition, the layout and assembly of the pipes must be carefully looked after to get the most out of gravity (Figs. 9 and 10). The main container is a domestic tank with a capacity of 57 L. The machine was then tested to verify that the main pump had enough power (30 W) to move the water through the filters and afterwards to the main container. As mentioned previously, some of the water used in the system will be trapped in the filter circuit, improving the quality of the outgoing water. This implies the provision of an extra tank for refilling water. This tank will also be used to renew the water in the system when it is saturated. This tank is shown in Fig. 9 but not in Fig. 10.

Fig. 9. Scheme of the washing system [Source: Own elaboration]

Fig. 10. Prototype of the washing machine [Source: own elaboration]
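A rough order-of-magnitude check, sketched below in Python, suggests why a 30 W pump is plausible for this duty. The flow rate, the head over which the water must be raised and the pump efficiency are all assumed values used only for illustration, not measurements from the prototype.

```python
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def hydraulic_power_w(flow_l_per_h: float, head_m: float) -> float:
    """Hydraulic power needed to raise a given water flow by a given head."""
    q = flow_l_per_h / 1000.0 / 3600.0   # L/h -> m^3/s
    return RHO_WATER * G * q * head_m

if __name__ == "__main__":
    # Assumptions: a flow of the same order as the experimental pump (550 L/h),
    # ~2 m of head from the filters up to the main tank, ~25% pump efficiency.
    p_hyd = hydraulic_power_w(550.0, 2.0)
    p_elec = p_hyd / 0.25
    print(f"Hydraulic power: {p_hyd:.1f} W; electrical estimate: {p_elec:.1f} W (< 30 W)")
```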

Filters
There are no commercially available small sand filters (unlike AC filters), so it was decided to build one with the help of 3D printing. The starting model comes from those used in swimming pools (Fig. 11). The first model uses only fine-grained sand, but other ideas could be tested [3], such as the use of different grain sizes of sand or stone. The filter is filled with ~1 kg of sand. The filter's lifetime is very difficult to estimate, as mentioned before. Therefore, in addition to using transparent tanks and pipes, a turbidity sensor was installed in the tank, using the same electronics and sensor as in the experiment, to monitor the water quality. The filter body was manufactured by 3D printing using recycled TPU² plastic and has a removable particle filter to prevent blocking. When the filter becomes exhausted, there are various procedures for reusing the sand inside it:

• Burying it.
• Disinfecting it and using it as a filler material or mixture for plastic products [4].
• Reusing it in another way, as long as it has no contact with living beings, after cleaning it with clean water.

² TPU: thermoplastic polyurethane, which stands out for its flexibility and strength.


Fig. 11. Sand filter design. Left: concept sketch; Right: 3D design [Source: own elaboration]

Fig. 12. 3D printed sand filter design, Isometric view.

4 Future Work

4.1 Divulgation of the Information in Open-Source Communities

The machine still needs further testing to decide the next improvements and to release to the community an open-source design, recommendations and tips to make it as accessible, functional and context-adaptive as possible. This information is intended for already established and new Precious Plastic workshops and set-ups.

4.2 Design and Testing of a 3D Printed Microplastics Filter

All domestic washing machines have a small drain where the pump is installed. This drain is accessible for the user to clean in case of clogging. In some washing machines, there is a small cleanable filter inside the drain. The problem is that these filters are designed to retain large particles and fibres, so the smaller ones escape. In the case of plastic washing at large volumes, it is necessary to provide a solution that avoids dumping microplastics, which would otherwise decrease the sustainability of the process.


Work is underway to redesign these filters so that they can trap microplastics and also retain the smaller fibres emitted when washing clothing, solving the problem as well as offering a domestic solution that can contribute to reducing emissions at the source (Fig. 12). In these terms, enough particles could be trapped so that the impact on the environment decreases. More and more, industry and companies have started launching other solutions, like the Cora Ball, particle bags or external filters [4]; however, washing machine manufacturers do not pursue the same goal. This is why we propose the creation of an accessible filter that would at least retain these particles even if the water, as explained for the washing machine water recycling system, is not reused. The disadvantage of the washing system proposed above is that it is not accessible for domestic use, but this microplastic filter could at least avoid the microplastic contamination (Fig. 13).

Fig. 13. Laundry machine pump filter (usually located in the bottom part of the machine) [6]

5 Conclusions

In this work, the filtering of water contaminated with different substances and chemical agents, as well as microplastics, was tested, resulting in an economical, easy-to-assemble and sustainable washing system that can be adapted to any domestic washing machine. In addition to proving its effectiveness with plastics, it shows that this solution can be a great alternative to save water in washing tasks at home or in small industrial applications, and the system is also scalable. The construction of a water filter for washing machines is accessible and economical, and this information is intended to be released so that other people and companies can replicate it. The system has proven to work sustainably and to adapt to different situations. As a result of the documentation work, the designs, details and schematics produced here will be provided to the Precious Plastic community to help reduce the impact and improve the recycling process to make it more sustainable.

References 1. Precious Plastic website. https://preciousplastic.com. Last Accessed 21 Jan 2021


2. Kärkkäinen, N., Sillanpää, M.: Quantification of different microplastic fibres discharged from textiles in machine wash and tumble drying. Environ. Sci. Pollut. Res. 28(13), 16253–16263 (2020). https://doi.org/10.1007/s11356-020-11988-2
3. Streit, A.F.M., et al.: Development of a pre-treatment process of polymeric wastes (HDPE, LDPE/LLDPE, PP) for application in the qualification of selectors of recyclable materials (2021)
4. Santhmayor, K.D., Shiri, N.D., Asiya, I., Krafft, M.S.: Development of water filtration unit for wastewater generated from waste plastics recycling machines. In: AIP Conference Proceedings, vol. 2236, p. 050009 (2020). https://doi.org/10.1063/5.0007039
5. UN Environment Programme: Plastic bricks in Kenya – NZAMBI MATEE – Young Champion of the Earth 2020 for Africa. In: YouTube. https://www.youtube.com/watch?v=QbZKP4UAtL8 (2020). Accessed 21 Jan 2022
6. Ecommerce Laundry Parts: Part LA0939100. https://www.relemat.es/es/repuestos-para-electrodomesticos/8917-filtro-bomba-para-lavadora-fagor-edesa-la0939100-4201683990152.html. Accessed 25 Jan 2022

Basic Design and Virtual Prototyping of a Hydrofoil Hybrid Daysailer

Domenico Speranza1, Romolo Di Bernardo1, Massimo Martorelli2, Antonio Gloria3, Claudio Pensa2, and Stefano Papa2(B)

1 University of Cassino and Southern Lazio, 03043 Cassino, Italy
2 University of Naples Federico II, 80125 Naples, Italy
[email protected]
3 National Research Council of Italy, 80125 Naples, Italy

Abstract. The paper presents a preliminary design activity and the virtual prototyping of an innovative boat equipped with hydrofoils and hybrid propulsion, with the aim of extending foil technology from the field of competition boats to recreational day-cruiser yachts and creating a craft with minimal environmental impact. Hydrofoils allow boats to rise from the water, greatly reducing resistance and increasing performance. The present work dealt with the preliminary design of a daysailer with foil technology and hybrid propulsion that combines green and comfortable navigation, both under sail and under power, and that, when required, can sail in a higher-performance mode by exploiting the foil technology and the thrust of the wind. After studying the theory and physics of sailing on foils, a MATLAB code was created to integrate the stability equations that characterize hydrofoil sailboats: by connecting the acting forces and allowing the dimensions of the geometries to be defined, the code was fundamental in speeding up the iterative preliminary design process. The next step was to model the geometry of the hull and the appendages in the CAD environment and, subsequently, the wing movement mechanism, so that it could both manage the incidence of the wings and retract the foils when the boat is moored. The hull, profiles and wings were subsequently placed in a CFD and VPP virtual environment for testing their resistance. Future developments will include the detailed design and the physical prototyping of a first boat for water testing.

Keywords: Virtual prototyping · Hydrofoil · Green · Hybrid

1 Introduction

In the last decade, in the world of boating and racing sailing, the creation of ultra-performing boats which, with the use of foils or hydrofoils, can reach very high speeds, unthinkable for any type of displacement or planing boat, has become increasingly popular. In foiling navigation, the only surfaces that remain submerged are the foil appendages (main foils and rudder foil).


Since the hull no longer interacts directly with the sea, the drag (composed of pressure and friction drag) decreases, increasing the speed of navigation and improving the comfort of the crew. The paper presents a preliminary design activity and virtual prototyping of an innovative boat equipped with hydrofoils and hybrid propulsion, with the aim of extending foil technology from the field of competition boats to recreational day-cruiser yachts and creating a craft with minimal environmental impact.

2 State of Art

There are those who claim, based on some sketches found, that even Leonardo da Vinci had already had the intuition of the multihull with hydrofoils [1]. The first examples of "flying" boats date back to the early 1900s, thanks to the studies carried out by officer and engineer Enrico Forlanini, who managed to create the first hydrofoil prototype by modifying a hydroplane and adding a structure with supporting fins. A few years later, at the Italian Air Force base in Bracciano, tests on aeronautical propulsion engines and propellers were carried out using hydroplanes with hydrofoils, such as the "Barchino Idroplano" designed and built by the engineers of the Genio Militare Italiano Arturo Crocco and Ottavio Ricaldoni and preserved in the A.M. Museum of Bracciano. The first prototype of a hydrofoil sailboat was born in the 1950s in the United States, when US Admiral Gordon Baker built the first prototype of the "Monitor". The boat, conceived as a US Navy spy vehicle, could rise on its hydrofoils with about 13 kn of wind. In some tests it reached speeds between 30 and 40 kn.

In the 1960s, the technology of hydrofoils for civilian use took hold in the world of nautical transport: many boats were built that allowed rapid movements on rivers, lakes and at sea. These boats are still used today for the transport of civilians in moderate sea conditions. Hydrofoils made their appearance in the world of sports boating in the early 2000s with the creation of the first foiling surfboards, windsurfers, multihulls and SUPs. In 2009 the Hydroptère, a trimaran equipped with surface-piercing foils, set a new speed record for a sailboat, exceeding 52 kn. The real boom in the world of sailing took place thanks to the 34th America's Cup (2013), held with AC72 catamarans equipped with hydrofoil fins that allowed, for the first time in the history of sailing, speeds of over 40 kn to be reached in a regatta. The Olympic federation includes, among the sailing classes participating in the Olympics, the foiling Nacra 17 from Tokyo 2020 and the iQFoil windsurfer from Paris 2024. The current state of the art in foiling technology is represented by the AC75 class, built for the 36th America's Cup (2021): the 21-m monohull uses two ballasted foil arms to generate both the lift needed to raise the hull from the water and the necessary righting moment.


3 Introduction to the Foil-Sailing Theory

3.1 The Lift Generated by a General Wing

For a generic foil profile placed in a single-phase fluid, the lift is calculated as follows [2]:

$$L = \frac{1}{2}\, C_L\, \rho\, w_{\infty}^{2}\, S \tag{1}$$

where ρ is the density of the fluid (kg/m³), S is the wing surface in plan (m²), w∞ is the relative fluid–profile speed (m/s) and C_L is the lift coefficient, which is a function of the angle of attack and of the characteristics of the profile.

3.2 The Behavior of a Wing in Water

A hydrofoil interacts both with the water in which it is immersed and with the air above the free surface. When a wing is submerged deeper than 1.5 chords it is not affected by the disturbance of the free surface; otherwise, the hydrodynamic resultant is reduced by an empirical multiplicative coefficient called the submergence factor [3]:

$$F_s = 1 - 0.222\left(\frac{1.5\,\mathrm{chord} - \mathrm{dive}}{\mathrm{chord}}\right)^{2} \tag{2}$$

Consequently, the lift for hydrofoils can be obtained as:

$$L = \frac{1}{2}\, C_L\, \rho\, w_{\infty}^{2}\, S\, F_s \tag{3}$$
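As a quick numerical illustration of Eqs. (1)–(3), the following Python sketch computes the lift of a submerged wing including the submergence factor. The input values used in the example (lift coefficient, area, chord, dive) are illustrative assumptions, not the actual Roma 33 figures.

```python
RHO_SEAWATER = 1025.0  # kg/m^3

def submergence_factor(chord: float, dive: float) -> float:
    """Empirical free-surface correction, Eq. (2).

    It applies only when the foil is shallower than 1.5 chords;
    deeper than that, the free surface has no effect (Fs = 1).
    """
    if dive >= 1.5 * chord:
        return 1.0
    return 1.0 - 0.222 * ((1.5 * chord - dive) / chord) ** 2

def hydrofoil_lift(cl: float, speed: float, area: float,
                   chord: float, dive: float,
                   rho: float = RHO_SEAWATER) -> float:
    """Lift of a submerged wing in newtons, Eq. (3) = Eq. (1) * Fs."""
    fs = submergence_factor(chord, dive)
    return 0.5 * cl * rho * speed ** 2 * area * fs

if __name__ == "__main__":
    # Illustrative values: CL = 0.5, cruise speed 18 kn, area 0.8 m^2,
    # chord 0.35 m, dive 0.4 m.
    v = 18 * 0.5144  # knots -> m/s
    L = hydrofoil_lift(cl=0.5, speed=v, area=0.8, chord=0.35, dive=0.4)
    print(f"Lift = {L/1000:.1f} kN")
```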

3.3 Introduction to the Stability of a Foiling Sailboat

Considering a boat equipped with foils that accelerates thanks to the force generated by the sails, three phases of navigation can be distinguished. In the beginning, the speed of the boat is not sufficient to generate enough lift to allow take-off; the stability of the boat is studied with the classical physics of displacement boats. Accelerating, the boat reaches a speed called the "take-off speed", at which the hull is still partially immersed in water. In this phase the resistance of the boat is maximum and cavitation or ventilation of the wing can occur. After the take-off of the hull, the sailing is full foiling and the only surfaces immersed in water are the foils. In this phase the stability of the boat can be treated in a way very similar to the stability of an airplane in flight. Considering a reference system x, y, z (Fig. 1), it is possible to evaluate the stability of the boat.


Fig. 1. Reference system for stability studies

Pitch Stability in Foiling. Considering the boat in foiling navigation, in the z direction of the xz plane the balance of forces is:

$$F_{S,z} + W = L_F + L_R + L_K \tag{4}$$

where the weight force W is applied at the center of gravity G, the main foils and the rudder foils generate the foil lift L_F and the rudder lift L_R respectively, and the keel generates the keel lift L_K (Fig. 2).

Fig. 2. Equilibrium on the boat in foiling


On the horizontal plane, in the x direction, the following forces acting on the boat can be identified:

$$F_{S,x} = D_F + D_R + D_K \tag{5}$$

The lift force L_F is applied at a distance a_F from the center of gravity of the boat and, for this reason, generates a nose-up moment which tends to raise the bow; in opposition there is the force L_R, applied at a distance a_R from the center of gravity, which counterbalances the nose-up moment generated by the main foils. Sails and foils, with the forces F_{S,xz}, D_F, D_R and D_K respectively applied at the distances h, d_F, d_R and d_K, produce a nose-down moment which must be counterbalanced by the lift of the main foil L_F and by the weight force W of the boat. The total equilibrium of the moments acting on the boat in the xz plane, with respect to the reference system in G, is:

$$M_{foil} + M_{sail} + M_{rudder} + M_{keel} = 0 \tag{6}$$

A fundamental function of the rudder foil is to stabilize the boat in the event of a sudden gust, exactly like the horizontal tail of an airplane. If the force F_{S,xz} increases, the boat will tend to a nose-down attitude, and the rudder profile will have to generate sufficient downforce (or lift) to stabilize the boat.

Roll and Heeling Stability in Foiling. Considering the boat in foiling navigation, in the yz plane (roll) and in the xy plane, the static and moment balance will be guaranteed by the thrust of the sails, the weight force of the boat and the lift force generated by the submerged surfaces. As for the pitch, to obtain the balance of the boat, the equations for the balance of forces and moments have been written in a similar way.
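The following Python sketch shows a strongly simplified pitch equilibrium derived from Eqs. (4) and (6): it neglects the keel lift and lumps the nose-down contributions of sails and drag into a single moment term, so the sign convention and all numbers are assumptions used only for illustration, not the paper's stability model.

```python
def foil_rudder_lift(weight_n: float, sail_fz_n: float,
                     a_f: float, a_r: float, m_down_nm: float):
    """Simplified pitch equilibrium sketch based on Eqs. (4) and (6).

    Vertical balance (keel lift neglected):      LF + LR = W + FS,z
    Moment balance about G (nose-up positive):   a_f*LF - a_r*LR - m_down = 0
    where m_down lumps the nose-down moments of sails and drag forces.

    a_f: distance of the main-foil lift ahead of G (m)
    a_r: distance of the rudder-foil lift behind G (m)
    """
    total = weight_n + sail_fz_n
    lf = (a_r * total + m_down_nm) / (a_f + a_r)  # from the moment balance
    lr = total - lf                               # from the vertical balance
    return lf, lr

if __name__ == "__main__":
    # Illustrative, assumed numbers: ~2700 kg displacement, a small vertical
    # sail-force component, lift arms of 1.0 m and 4.0 m, 5 kNm nose-down moment.
    g = 9.81
    lf, lr = foil_rudder_lift(weight_n=2700 * g, sail_fz_n=500.0,
                              a_f=1.0, a_r=4.0, m_down_nm=5000.0)
    print(f"LF = {lf/1000:.1f} kN, LR = {lr/1000:.1f} kN")
```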

4 Basic Design and Virtual Prototyping of the "ROMa 33" Project

4.1 Hull Concept

The first step was to analyze in depth projects already carried out. The key points of the boat design were the following:

• Hull length less than 10 m (33 ft) and a generous hull beam to improve stability and habitability on board.
• A voluminous bow that can reduce the risk of broaching and, potentially, of capsizing.
• Sufficient internal and external spaces for good general comfort.
• Light displacement, to achieve better performance, the lowest resistance in displacement sailing and good maneuverability of the boat.
• Moderate foiling navigation speeds, the boat being a daily cruiser managed by a control system.
• Appendages designed to generate sufficient lift at the take-off speed, approximately 10–11 kn, and to sail the boat in a stable way at the cruise speed of 18–20 kn.
• The possibility of converting part of the ballast to metal hydrides in order to store energy to be exploited in a hybrid propulsion system.

As a preliminary basis for defining the fundamental quantities of the boat and the hydrofoils, a MATLAB code was created by numerically implementing the fundamental mathematical relations that provide the quantities to be adopted in the design, thus allowing a faster iterative process of component sizing (Fig. 3); a simplified sketch of this sizing loop is given after Fig. 3.

Fig. 3. Flowchart of the iterative process of component sizing
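The snippet below gives a very reduced Python sketch in the spirit of the sizing loop of Fig. 3, limited to one relation: adjusting the foil area until the lift at take-off speed carries the boat. The actual MATLAB code also couples moments, drag and sail thrust; the displacement, lift coefficient and take-off speed used here are assumed figures for illustration only.

```python
RHO = 1025.0        # seawater density, kg/m^3
KN_TO_MS = 0.5144
G = 9.81

def lift(cl: float, v: float, s: float) -> float:
    """Eq. (1): lift of a fully submerged wing (submergence factor omitted)."""
    return 0.5 * cl * RHO * v ** 2 * s

def size_foil_area(target_lift_n: float, cl: float, v_takeoff: float,
                   s: float = 0.5, tol: float = 1.0) -> float:
    """Tiny fixed-point loop: grow/shrink the area until lift matches the target.
    In the full design code the iteration is needed because several coupled
    relations (moments, drag, sail thrust) are solved together."""
    for _ in range(100):
        err = target_lift_n - lift(cl, v_takeoff, s)
        if abs(err) < tol:
            break
        s += err / (0.5 * cl * RHO * v_takeoff ** 2)
    return s

if __name__ == "__main__":
    # Assumed inputs: ~2700 kg displacement, CL = 0.5, take-off at 10.5 kn.
    v = 10.5 * KN_TO_MS
    area = size_foil_area(target_lift_n=2700 * G, cl=0.5, v_takeoff=v)
    print(f"Required total foil area at take-off: {area:.2f} m^2")
```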

Hypothesized Sail Plan. The sail plan has been studied with reference to projects already completed and so as to generate sufficient thrust for both displacement and foiling navigation. Through simulations, the average and maximum thrust generated by the sails were obtained and then used to study the stability of the boat and to identify the best position for the thrust centers of the appendages [4, 5] (Table 1).

Table 1. Dimensions of the hypothesized sail plan

• Mainsail: mainsail height 15 m; boom length 4.2 m
• Jib: jib height 12 m; distance clew–head foil 6 m
• Sailing center (ref: zero point): x = 7.5 m; y = 4.28 m

4.2 Hull Design

Hull modeling was carried out using the NURBS surface modeling software Rhinoceros. The first step in the design of the hull was to freehand draw a rough geometry respecting the key points of the project.


The drawing was then scanned and imported into the software, where the actual virtual design began.

Hull Curves. The fundamental elements for the realization of the surfaces of the boat are the basic lines of the hull, which can be compared to the "skeleton" of the geometry. The hull lines were drawn several times, making sure that each line was faired as much as possible and did not have excessive or negative curvature (Fig. 4).

Fig. 4. Basic curves of “Roma 33” hull.

Check of the Water Lines and Surfaces. In the first phase, the lines obtained from the plane-surface intersection showed irregularities such as kinks or negative curvatures. It was necessary to modify the geometries of the initial curves of the skeleton in an iterative process until the water lines were as faired and regular as possible. The creation of clean water lines with homogeneous curvature makes it possible to optimize the efficiency of the hull and to decrease the hydrodynamic resistance. The final step was to verify that all surfaces matched perfectly, forming a closed geometry that can be converted into a solid if desired. The closed model was exported and inserted into the analysis programs (CFD, rendering, etc.) to visualize the virtual prototype in a realistic way and to perform the needed analyses [6, 7].

Materials and Weight of the Hull. The estimated weights for the hull and for the fin needed to generate a sufficient righting moment are shown in the following table (Table 2).

Table 2. Hull weights table

• Hull: mixed carbon fiber/glass sandwich structure, 1.9 g/cm³; hull area 54.048 m², thickness 1.5 cm — 1540 kg
• Deck: polyester structure with high-density foam, 0.6 g/cm³; deck area 26.23 m², thickness 2 cm — 350 kg
• Bulb: lead, 11.34 kg/dm³ — 800 kg
• Total — 2700 kg


4.3 Hydrofoil Design

Since the boat is not a competition craft, it must navigate in foiling mode in a stable way, without reaching extreme speeds, for safety reasons, while offering minimal hydrodynamic resistance. The key features of the designed wing are the following:
• T configuration: the balance of forces, even at lower speeds, is easier to manage than with a classic L-configuration foil, which for construction reasons would generate a more distant lift vector, with a consequently greater righting moment and a greater sail thrust required to balance the boat.
• Minimization of the interference drag between the stem and the wing, by creating a geometry with angles greater than 90°.
• Winglets with a NACA 63–412 profile, scaled by a factor of 2 in height, at the tips, to reduce edge effects and induced drag.
NACA 63–412. The profile chosen for the hydrofoil is the NACA 63–412, which guarantees excellent CL–CD performance in the hypothesized sailing conditions and has been extensively tested on the wings of other foiling racing sailboats (Fig. 5).

Fig. 5. CL vs CD and CL vs Alpha for NACA 63–412.

The wing was designed to operate near the minimum of the CL–CD polar. The characteristic curves of the NACA 63–412, evaluated at different Reynolds numbers and angles of incidence, were obtained with the X-Foil software and compared with the values reported in the literature [8]. NACA 63–012. The symmetrical NACA 63–012 profile was chosen for the vertical stem of the hydrofoils as well as for the horizontal wings of the rudders [9]. The horizontal wings of the rudders consist of a fixed part and a movable flap placed at 70% of the chord from the leading edge, linked to the control system to stabilize the boat in the various flight conditions [10].
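As a sketch of how such an operating point can be selected from tabulated polar data (the actual polars were computed with X-Foil), the snippet below simply picks the angle of attack with the highest lift-to-drag ratio from a small, purely hypothetical set of (alpha, CL, CD) triplets; the numbers are placeholders, not the NACA 63–412 data.

# Illustrative selection of an operating point on a lift/drag polar: pick the
# angle of attack with the highest CL/CD ratio. The (alpha, CL, CD) triplets
# below are placeholders, NOT the NACA 63-412 polars computed with X-Foil.

polar = [
    (0.0, 0.30, 0.0075),
    (2.0, 0.52, 0.0082),
    (4.0, 0.74, 0.0096),
    (6.0, 0.95, 0.0121),
    (8.0, 1.12, 0.0160),
]

def best_operating_point(polar_data):
    """Return the (alpha, CL, CD) triplet with the highest lift-to-drag ratio."""
    return max(polar_data, key=lambda p: p[1] / p[2])

alpha, cl, cd = best_operating_point(polar)
print(f"alpha = {alpha:.1f} deg, CL = {cl:.2f}, CD = {cd:.4f}, L/D = {cl / cd:.1f}")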


Foil Mechanism Sketch. The hydrofoil movement mechanism is composed of a box in which the wing stem can slide. It is contained in a structure on which it is hinged and can be moved by a hydraulic actuator to change the angle of the wing (Fig. 6). The mechanism has two basic functions:
• allowing the stem of the foil to enter and exit easily;
• changing the angle of attack of the main wing.

Fig. 6. Conceptual 3D of foil mechanism.

4.4 Positioning of the Bulkhead Containing the Hydrofoil

Since it is impossible to guarantee the complete sealing of the mechanism that allows the extension and control of the foil, a watertight bulkhead has been designed to contain the entire wing movement system. This compartment must be inspectable both from the inside and from the outside of the boat, and its characteristics are the following:
• The lower surface of the bulkhead is above sea level and has a slight slope, to let the water flow towards the outside of the bulkhead.
• In correspondence with the bulkhead, a hole with a shell-shaped sea intake is made in the hull, to let the water inside the bulkhead flow outside.
• The hull is stiffened and thickened on the surfaces close to the mechanism, in order to resist the acting forces.

5 Roma 33 Performance Simulations

Hull Displacement Drag Simulation. The performance of the hull was tested in a "virtual naval tank" using classical numerical methods, and the results were compared with those of the already built "Melges 32" racing hull, with a length of 32 ft (Fig. 7).

Fig. 7. Resistance displacement test for a classic racing hull: resistance [N] of the Melges 32 and Roma 33 hulls at speeds of 6, 8 and 10 kn.

The analyses carried out show that the hull design of "Roma 33" significantly reduces the resistance to advancement with respect to the reference boat, according to all the mathematical models.
VPP – Velocity Prediction Program (Displacement). The VPP simulation made it possible to estimate the performance of the boat at different true wind speeds and angles while in displacement navigation. The results indicate that the performance of the designed Roma 33 hull is superior to that of the reference racing hull (Tables 3 and 4).

Table 3. VPP for the reference hull.

True Wind Speed [kn]   True Wind [deg]   Hull speed [kn]
20                     30                5.6
20                     60                6.6
20                     90                7.5
20                     120               7.8
20                     150               7.8
10                     30                4.8
10                     60                6.2
10                     90                7.2
10                     120               7.5
10                     150               7.0

Table 4. VPP for the "Roma 33" hull.

True Wind Speed [kn]   True Wind [deg]   Hull speed [kn]
20                     30                7.1
20                     60                9.3
20                     90                10.6
20                     120               10.5
20                     150               8.8
10                     30                6.2
10                     60                8.1
10                     90                9.8
10                     120               9.4
10                     150               7.5
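The speed advantage can be quantified directly from the two tables; the short sketch below computes the per-condition and average hull-speed gains of Roma 33 over the reference hull, using only the VPP values reported above.

# Speed gain of the "Roma 33" hull over the reference hull, computed from the
# VPP results of Tables 3 and 4 (hull speed in kn per true wind speed/angle).
angles = [30, 60, 90, 120, 150]                       # true wind angle [deg]
reference = {20: [5.6, 6.6, 7.5, 7.8, 7.8], 10: [4.8, 6.2, 7.2, 7.5, 7.0]}
roma33 = {20: [7.1, 9.3, 10.6, 10.5, 8.8], 10: [6.2, 8.1, 9.8, 9.4, 7.5]}

gains = []
for tws in reference:
    for twa, v_ref, v_roma in zip(angles, reference[tws], roma33[tws]):
        gain = 100.0 * (v_roma - v_ref) / v_ref
        gains.append(gain)
        print(f"TWS {tws:2d} kn, TWA {twa:3d} deg: {v_ref:4.1f} -> {v_roma:4.1f} kn (+{gain:.0f}%)")

print(f"Average hull-speed gain: {sum(gains) / len(gains):.0f}%")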

5.1 Main Hydrofoil CFD Analysis

Using the Numeca Fine Marine (finite-volume) CFD software, it was possible to simulate the designed hydrofoil in a multi-fluid environment. The analyses carried out in the current work were used to assess whether the created geometry generates the thrust necessary to lift the boat from the free surface of the water in which it navigates. CFD analyses were performed at the take-off condition and at the cruise condition [11]. In both situations, the generated lift is enough to raise the boat out of the water.

5.2 Overall Appendages CFD Analysis

The CFD simulation of all the surfaces together also served to evaluate how the various wings interact [12]. A visual analysis of the flow lines shows that the rudders, the main hydrofoil and the keel are positioned so as not to affect each other (Fig. 8).

5.3 Hydrogen Propulsion and Hydrides

As for the energy that can be stored on board through the hydrides, replacing a ballast of 800 kg with a hydride having a gravimetric density of 5% allows up to 40 kg of hydrogen to be stored, for an energy content, calculated on the lower calorific value, equal to 3600 MJ. This is equivalent to about 100 kg of diesel, enough to move the boat without constant refueling problems.


Fig. 8. Interaction between main hydrofoil and rudder

6 Conclusion

A basic design and virtual prototyping activity for an innovative boat equipped with hydrofoils and hybrid propulsion has been reported, with the aim of extending foil technology from the field of competition boats to recreational day-cruiser yachts and creating a craft with minimal environmental impact. The hydrofoils, designed together with their movement mechanism, allow the Roma 33 to rise from the water, guaranteeing foiling navigation. During displacement navigation, the designed hull drastically reduces resistance, increasing the navigation range for the same energy used. The use of a ballast made of metal hydrides instead of the classic steel or lead allows the implementation of a hydrogen hybrid propulsion system.

References
1. Lodigiani, P.: Un'introduzione al Capire e Progettare Barche. Bca Demco Kit (2004)
2. Lodigiani, P.: Capire e Progettare le Barche: Aero e Idrodinamica della Barca a Vela. Manuale per Progettisti Nautici. Hoepli (2015)
3. Vellinga, R.: Design, Build, Fly. Peacock Hill Publishing (2009)
4. Calì, M., Speranza, D., Cella, U., Biancolini, M.E.: Flying shape sails analysis by radial basis functions mesh morphing. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 24–36. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4_3
5. Calì, M., Oliveri, S.M., Cella, U., Martorelli, M., Gloria, A., Speranza, D.: Mechanical characterization and modeling of downwind sailcloth in fluid-structure interaction analysis. Ocean Eng. 165, 488–504 (2018). https://doi.org/10.1016/j.oceaneng.2018.07.011
6. Papa, S., Lanzotti, A., Di Gironimo, G., Balsamo, A.: A new interactive railway virtual simulator for testing preventive safety. In: WIT Transactions on the Built Environment, vol. 181, pp. 367–378, ISSN: 1746-4498 (2018). https://doi.org/10.2495/CR18033
7. Lanzotti, A., et al.: Interactive tools for safety 4.0: virtual ergonomics and serious games in tower automotive. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds.) IEA 2018. AISC, vol. 822, pp. 270–280. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-96077-7_28
8. Abbot, J.: Theory of Wing Sections, Including a Summary of Airfoil Data (1959)
9. Larsson, L., Eliasson, R.E.: Principles of Yacht Design. Adlard Coles Nautical (2000)
10. Cella, U., Salvadore, F., Ponzini, R.: Coupled Sail and Appendage Design Method for Multihull (2016)
11. Ferdinando, M.: Analisi e Sviluppo di Imbarcazioni a Sostentamento Idrodinamico (2016)
12. Numeca Fine Marine. Documentation and Tutorials (2021)

Virtual Design for Repair: The Case of a Gearmotor Daniela Francia(B) , Davide Bove, Gian Maria Santi, Patrich Ferretti, Giampiero Donnici, Leonardo Frizziero, and Alfredo Liverani ALMA MATER STUDIORUM – University of Bologna, Risorgimento, 2, 40136 Bologna, Italy [email protected]

Abstract. Considering 'repair' in the design process of products can prolong the life cycle of parts: this is substantiated by a few examples that put this ideology into practice. Among many other products, home printers are a good example, with huge numbers of printers ending up in landfill after a relatively short life, often due to blocked print heads that are either impossible or too expensive to replace. The act of fixing things can both prolong the life of an artifact and create new value through the process of engagement for its users. However, the prohibitive cost of repair makes this practice inaccessible or unfavorable for many. In this paper a preliminary approach to Design for Repair is proposed, in order to virtually test an industrial case study and to show a comparison between a product that was intended for disposal after the failure of some components and the same product re-designed taking into account how to repair parts in an easy and effective way. The redesign of parts can in some cases be very effective, and the virtual test can easily be transferred to practice for industrial products. The advantages of repairing parts are evident in terms of sustainability and the pursuit of a circular economy. This paper suggests a sequenced method to approach Design for Repair and provides the virtual model of a re-designed solution that could replace the previous one in order to make the repair of components easy and effective. The economic analysis of the actual convenience of repair compared to the disposal of the product was not developed within this context. Keywords: Design for Repair · Design for Disassembly · Gearmotor · Sustainability · Circular economy

1 Introduction

The three 'R's of sustainability remind us to Reduce, Reuse, Recycle: 'Reuse' can be a challenge when so many products are not designed taking maintenance or repair into account. The claim that consideration for repair in the design process of products can prolong the life cycle of these objects is substantiated by a few examples that put this ideology into practice. Home printers are a good example, with huge numbers of printers ending up in landfill after a relatively short life, often due to blocked print heads that are either impossible or too expensive to replace. Heating devices like toasters, with their frail but cheap-to-produce resistive heating element, are a classic example of an appliance


designed for a short lifespan. On the other hand, the recent releases of mobile phones and laptops that come with difficult-to-replace fused retina displays and bonded batteries are a testament to the planned obsolescence built into our devices. More and more products are being designed to be "sealed boxes" that can neither be opened nor fixed, and end up in a landfill sooner rather than later. As the linear economy has brought us to a global ecological crisis, alternative systems such as a circular economy are needed to overcome these problems [1]. Designers play a significant role in mitigating today's environmental problems and in the transition towards a circular economy [2]. Future designers must be prepared for alternative scenarios and be equipped with the relevant knowledge to be able to deal with environmental issues [3]. Society's growing awareness of sustainability issues such as recycling and reuse, and therefore in general of the reduction of resource use, cannot go unnoticed by those who must think first and foremost about the products that society uses, products belonging to every market segment. For this reason, while in the case of products destined for B2C (business-to-consumer) markets companies are already starting to design more sustainable products, because they are requested and appreciated by sensitive customers, in B2B (business-to-business) markets these aspects of sustainability are not always implemented. Often, as in the case we will analyze, the choice is made to replace a component rather than to repair it. This choice is made because it is often the quickest option, so as to reduce equipment down-time as much as possible. This replacement then generates waste, which must be disposed of, and requires placing on the market a new equivalent, if not identical, product. We are talking about more or less complex components that are replaced in the event of the malfunction of a single part or of a small group of parts of the entire assembly, turning into "waste" even the components that still perform their task. This approach makes it necessary to produce from scratch, for the new assembly, parts that worked correctly on the old one, practically doubling the use of resources such as raw materials and energy. In recent years, and especially with the start of the post-pandemic recovery, the availability of the raw materials most widely used in mechanics, such as aluminum alloys and steel, has decreased in the face of a pressing demand that has caused a sharp increase in prices (Fig. 1).

Fig. 1. Aluminum and steel’s price trends in the last 5 years [4].

In addition to the growth in demand for raw materials, there has also been a strong increase in energy prices, reflected in an increase of between 8% and 10% in the list prices of companies working with these materials in the mechanical field. This situation could favor the philosophy of repair in the mechanical field, as it would


allow the reuse of many parts of the product that still perform their task very well, without having to be disposed of and reproduced to equip the new component, with growing savings of materials, energy and therefore of the associated costs. The act of fixing things can both prolong the life of an artifact and create new value through the process of engagement for its users. However, the prohibitive cost of repair makes this practice inaccessible or unfavorable for many. Designed products are mostly mass produced, which helps in offsetting the cost of production and makes it cheaper to buy new things. Repair, on the other hand, is usually a small-scale operation that needs to be competitively priced for economic feasibility. In this context, additive manufacturing via 3D printing can enable the efficient and cost-effective replacement of damaged parts, which can be produced at any manufacturing location, even when the parts are discontinued, provided their 3D CAD models are available. In addition, damaged portions of a part can be reconstructed by means of 3D printing. This combination of technologies can help to expand the repair attitude and the design for the long life of things. The new idea of repair cafés, where volunteers help visitors to repair broken items, is also gaining momentum. In this paper a preliminary approach to Design for Repair is proposed, in order to virtually test an industrial case study and to show a comparison between a product modelled by traditional design and the same product redesigned taking into account how to repair parts in an easy and effective way. The redesign of parts can in some cases be very effective, and the virtual test can easily be transferred to practice for industrial products.

2 The Method

In view of these considerations, the objective of this paper is to analyze the design of a component which, following a malfunction, was replaced in toto, by means of a reverse engineering activity, in order to observe the design choices that determined the current design. Once this is done, through the use of methods such as DfD (Design for Disassembly), it will be possible to define the sequences for disassembly and reassembly and then to evaluate them by estimating the time required to perform these operations [5]. In the specific case, the focus of this study is a reducer, shown in Figs. 2 and 3, with a case in AlSi12Cu2Fe aluminum alloy suitable for pressure die casting. As a first observation, the reducer has no bolted couplings except for the flange that adapts it to the electric motor. The manufacturer's catalogue indicates the amount of lubricant required, specifically oil. This lubricant should have been sufficient for the entire life of the gearmotor; in fact, there is no inlet on the housing for auxiliary lubrication. In view of this characteristic, it is clear that the product has no maintenance intervals and therefore the manufacturer does not provide for the possibility of disassembling the component. Usually the weakest component for this type of gearmotor, not considering commercial components such as bearings, oil seals, etc., is the bronze crown, which is cast directly onto the steel or cast-iron hub [6].


The absence of obvious bolted couplings means that disassembly and reassembly are not easy, since there are no evident connections that are easy and quick to remove and that would allow inspection and repair. For these reasons, the goal is first to identify a sequence that allows the gearmotor to be repaired and then to try to improve the design to facilitate this. Subsequently, design alternatives are proposed that can reduce the time and therefore the cost of repair of the component and, perhaps, through inductive reasoning, lead to guidelines for optimizing the design for repair of different products.

Fig. 2. The top, front and two sides’ views of the gearmotor in analysis.

Fig. 3. Exploded view of assembly.

Figure 3 shows the parts of the gearbox, and the exploded view suggests their relative positioning. A sequence of operations has been planned that combines several design strategies in order to define a method for approaching Design for Repair.


After the analysis of the components of the assembly, also intended to highlight malfunction and wear of parts, the failure cause has been investigated, in order to trace the components that may have been damaged in a way that could affect the operation of the product. Then, the 3D reconstruction of the parts and of their assembly has been carried out by reverse engineering and virtual 3D models; through these models, the product re-design is proposed, combining the guidelines of design for disassembly and design for maintainability with those of design for assembly [7, 8], applied in contexts where handling and the availability of equipment are not as well organized as on assembly lines. A method that sequences all the above-mentioned activities is proposed in this paper, and the diagram of Fig. 4 shows all its phases. Starting from the physical product, shown in Fig. 2, the parts of the gearbox have been identified and their assembly has been analyzed in order to explain their functions. The failure cause has been hypothesized, and some details are given in Sect. 3.2. By means of a reverse engineering activity, all the parts have been digitized and the virtual assembly has been reproduced in CAD. The disassembly of the parts has been tested, and a detailed description is provided in Sect. 3.1. Finally, taking into account all the aspects discussed above, the redesign of the parts is proposed, in order to optimize their disassembly and to facilitate access to the parts that are most likely to be subject to failure.

Assembly analysis (BoM) → Fail causes → 3D reverse engineering → DfD/DfM/DfA → Re-design for Repair

Fig. 4. The method for DfR’s flow-chart.


3 Results and Discussion

3.1 Disassembly Analysis

Table 1 lists the times required to disassemble the BW30 gearmotor and the sequence used. The gearmotor was stuck, so it was completely disassembled to understand the failure.

Table 1. The BW30 gearmotor's disassembly operations and quantification

Operation                                                                       Time [min]
1. Removing 4 screws TSEI M5×10                                                 2
2. Flange and spacer removal                                                    0.3
3. Disassembling the rear cap 24×6                                              5
4. Seeger D26 removal                                                           0.5
5. Bushing realization – worm extraction                                        5
6. Locked worm screw: unlocking + heat on gearbox                               10
7. Worm shaft removal                                                           2
8. Bearing removal                                                              0.5
9. Oil seal removal                                                             10
10. Removing Seeger and washers                                                 0.5
11. Placement on press for removing gear wheel and bearings due to oxidation    5
12. Gear wheel and bearings removal                                             0.5
13. Oil seal removal                                                            0.2
14. Removing Seeger and washers                                                 0.5
15. Removal of bearing from worm shaft                                          0.5
16. Removal of bearings from gear wheel shaft                                   2
Total time without extraordinary activities (5, 6, 11)                          24.5

The impact of the various operations on the total disassembly time has been examined; Fig. 5 also shows the operations related to extraordinary activities, such as the preparation and realization of the equipment needed to disassemble the gearmotor, made necessary in particular by the high state of oxidation of the components. The above count does not include the roughly 20 min needed to work out the correct disassembly sequence, since no exploded views or drawings of the assembly were available.



Fig. 5. The impact of disassembly operations on the whole activity (in red extraordinary activities). (Color figure online)
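A simple way to reproduce the impact analysis summarized in Fig. 5 is to sum the per-operation times of Table 1 and compute each operation's share of the total, separating the extraordinary activities; the sketch below uses only the times listed in Table 1.

# Impact of each disassembly operation on the total time, using the times of
# Table 1; operations 5, 6 and 11 are the extraordinary activities excluded
# from the ordinary total of 24.5 min.

times = {  # operation number -> time [min]
    1: 2.0, 2: 0.3, 3: 5.0, 4: 0.5, 5: 5.0, 6: 10.0, 7: 2.0, 8: 0.5,
    9: 10.0, 10: 0.5, 11: 5.0, 12: 0.5, 13: 0.2, 14: 0.5, 15: 0.5, 16: 2.0,
}
extraordinary = {5, 6, 11}

total = sum(times.values())
ordinary = sum(t for op, t in times.items() if op not in extraordinary)
print(f"Total: {total:.1f} min, without extraordinary activities: {ordinary:.1f} min")

for op, t in sorted(times.items(), key=lambda kv: kv[1], reverse=True):
    tag = " (extraordinary)" if op in extraordinary else ""
    print(f"Operation {op:2d}: {t:4.1f} min ({100 * t / total:4.1f}% of total){tag}")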

3.2 Failure Causes

The literature suggests that the main cause of failure of worm gearmotors is the total absence of lubrication [9]. As can be seen from the catalog excerpt shown in Fig. 6, the reducer under analysis should contain about 0.04 L of oil, which should ensure proper lubrication throughout its life. No traces of lubricant were detected inside the gearbox, which caused both heavy wear of the bronze gear wheel and seizure of the gear wheel bearings, as shown in Figs. 7 and 8. Fig. 8 also highlights the area where what remains of the crown teeth can be seen, damaged probably during the extraction of the worm screw.

Catalog excerpt: the gearboxes of sizes BW30Q to BW90Q are supplied with lifetime lubrication, i.e. the synthetic oil AGIP TELIUM BSF, and can be mounted in all the mounting positions provided in the catalog, with the exception of sizes BW75Q and BW90Q in positions V5/V6, for which the manufacturer's technical service must be contacted to evaluate the conditions of use. The BW110Q and BW130Q gearboxes are supplied with AGIP BLASIA 460 mineral lubricant.

Fig. 6. The BW30 catalog's lubricating standards.

The disassembly of the worm screw requires a rotation of the gear wheel, but this movement was impossible due to the locking of the bearings; therefore, with the removal of the worm screw, the already heavily damaged teeth gave way. The worm screw, on the other hand, apart from oxidation phenomena due to the prolonged absence of lubricant, does not show particular signs of wear, thanks to the hardness of its material, higher than that of the gear wheel material.

Fig. 7. The bronze wheel teeth wear

As mentioned above, the main cause of the breakage was the lack of lubrication which caused wear of the wheel and locking of the bearings. This cause is identified as one of the main causes of bearing failure by a database that collects data on the reliability of different mechanical components [10]. The absence of lubrication can be attributed to possible seal failure.


Fig. 8. Locking of the sprocket, due to the bearings and total wear of the teeth (area in yellow). (Color figure online)

The database shows that, under standard conditions for commercial applications, the failure rates of bearings and seals are significantly higher than those of gears. Figures 9, 10 and 11 give some details about this.

Fig. 9. Failure rates for seals.

Fig. 10. Failure rates for bearings.

3.3 The Gearmotor Re-design

From the catalog it can be seen that the maximum resistant torque allowed at the output of the reducer is 27 Nm.


Fig. 11. Failure rates for gears.

From the catalog data and the standard formulas, the forces acting on the wheel are:
• tangential force Ft on the cogwheel: 1095 N;
• radial force Fr on the cogwheel: 398 N;
• axial force Fa on the toothed wheel: 136 N.
All of them are applied on the primitive (pitch) diameter of the toothed wheel, at the point of contact between the wheel and the worm screw (see the sketch after Fig. 12).

Fig. 12. The loads acting on the toothed wheel.
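As a rough cross-check, the tangential force can be recovered from the maximum output torque through Ft = 2·T/d, where d is the primitive (pitch) diameter of the worm wheel; the diameter used in the sketch below is an assumed, back-calculated value, since it is not reported in the paper, and the radial and axial components additionally depend on the pressure and lead angles of the gearing.

# Hedged sketch: tangential force on the worm wheel from the maximum output
# torque. The pitch (primitive) diameter is an assumed, back-calculated value,
# NOT a figure given in the paper; the radial and axial components additionally
# depend on the pressure and lead angles of the gearing.

def tangential_force(torque_nm, pitch_diameter_m):
    """Ft = 2*T/d on the pitch diameter of the wheel."""
    return 2.0 * torque_nm / pitch_diameter_m

T_OUT = 27.0       # maximum resistant torque at the reducer output [Nm]
D_PITCH = 0.049    # assumed wheel pitch diameter [m], illustrative only

ft = tangential_force(T_OUT, D_PITCH)
print(f"Tangential force Ft ~ {ft:.0f} N")  # ~1100 N, of the order of the 1095 N reported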

Following the described loads, shown in Fig. 12, a re-design of the shaft and of the toothed wheel is hypothesized, with the objective of allowing the replacement of the toothed wheel in case of tooth wear. Four ISO 4762 M4×16 class 8.8 screws are used, which guarantee, with a good safety margin, the axial force necessary to ensure the transfer of torque. A washer made of C72 steel is placed under the head of the screws; it serves to better distribute the pressure of the screw head against the low-hardness bronze used for the gear. The centering of the sprocket is done directly on the shaft. As highlighted above, the longest times were those required to disassemble the oil seals and the rear cap which, due to the difficulty of the operation, are often damaged, with the risk of damaging their seat. To overcome this problem, the rear cap has been revised and replaced with a cast and turned component. The sealing is entrusted to an O-ring (OR), and the cap also acts as a stop for the bearing, a function that was previously entrusted to a Seeger ring. In addition to the rear cap, as mentioned, more time was also required to disassemble one of the two oil seals to access the Seeger ring that held the shaft of the gear wheel in place.


In order to reduce this time, the adopted solution has a lateral closure, as shown in Fig. 13, which holds the bearing in position and houses the seat for the oil seal. Its removal allows easy access to the shaft that carries the sprocket. In addition, the design of the latch allows the oil seal to be easily removed from it once the latch is taken off. The seal of the side closure on the case is also made by means of an OR, a solution similar to the one seen above. Finally, since the failure of the component under consideration resulted from a lack of lubrication, the possibility of adding lubricating oil when needed has been included in the re-design of the component. The seal is made by a DIN 7603 copper washer, shown in Fig. 14. Figure 15 finally shows the exploded view of the re-designed gearbox, and Table 2 lists the corresponding disassembly operations.

Fig. 13. The assembly of the re-designed solution

Fig. 14. The oil lubricating insertion into the new reducer

Fig. 15. Exploded view of the re-designed assembly


Table 2. The BW30 disassembly operations of the re-designed gearmotor and quantification

Operation                                              Time [min]
1. Removing 4 screws TSEI M5×10                        2
2. Flange and spacer removal                           0.3
3. Disassembling the rear threaded cap                 1
4. Worm shaft removal                                  2
5. Bearing removal                                     0.5
6. Removing 4 screws TCEI M5×16                        2
7. Oil seal flange holder and spacer removal           0.3
8. Oil seal removal                                    0.2
9. Gear wheel and bearings removal                     0.5
10. Oil seal removal                                   0.2
11. Removing spacers                                   0.2
12. Removal of bearing from worm shaft                 0.5
13. Removal of bearings from gear wheel shaft          2
Total time                                             11.7
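Comparing the totals of Table 1 and Table 2, the redesign roughly halves the disassembly time; the minimal sketch below computes the reduction from the two reported totals.

# Reduction of the disassembly time obtained with the redesign, from the
# totals of Table 1 (24.5 min) and Table 2 (11.7 min).
original_total = 24.5     # min, BW30 as originally designed
redesigned_total = 11.7   # min, re-designed BW30

saved = original_total - redesigned_total
print(f"Time saved: {saved:.1f} min ({100 * saved / original_total:.0f}% reduction)")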

4 Conclusions

In the future, with the growing attention to environmental issues, there will be more and more incentives for repair, as in Sweden, where a tax-reduction policy has been introduced for those who repair objects; the cost margin could thus be increasingly in favour of repair. The application of criteria that encourage design for repair must increasingly take into account the safety of the product and of the user, both in the manufacturing and in the repair phases. Repair can be carried out at various levels and by various actors. This situation could create issues with respect to safety and warranties. Such approaches can therefore be evaluated directly by companies in order to build a closer relationship with the customer, which allows them to quickly provide a remanufactured replacement and take back the non-functioning assembly so that it can be repaired. Remanufactured product support, at this point, would only be effective if the cost of repair is less than the cost of production. From this perspective, an effective design-for-repair could make a difference, and it could also overcome the problem that sometimes special and proprietary equipment is needed to perform certain assemblies and disassemblies. Finally, taking a broken component back for repair would allow companies to collect a significant amount of data on the failures of their products, in order to improve product design in many ways. In this paper, a real case of a damaged reducer has been analyzed by studying the assembly, the failure causes, the reverse engineering of the parts, the disassembly and,


finally, the re-design of the product. The advantages of a solution designed to allow the repair of parts are evident in terms of sustainability and the pursuit of a circular economy. However, the cost of the repair compared with the cost of the components that can be reused, and with the cost of the partial disposal of the product (only its non-repairable parts become waste), has not been quantified in this paper. Such an analysis would make it possible to evaluate the actual cost difference between the repaired component and the component replaced in toto, and thus how sustainable the proposed design is also from the economic point of view. The present assessment also does not take into account some economic aspects that may affect these evaluations but are difficult to estimate, such as the improvement of the product image and other aspects relating to customer care and data collection. These could be a further development of the current in-progress work.

References 1. Van den Berg, M.R., Bakker, C.A.: A product design framework for a circular economy. In: Cooper, T., Braithwaite, N., Moreno, M., Salvia, G. (Eds.) Proceedings of the Product Lifetimes and the Environment Conference, Nottingham, UK, 17–19 June 2015, Nottingham Trent University, Nottingham, UK (2015) 2. Andrews, D.: The circular economy, design thinking and education for sustainability. Local Econ. 30, 305–315 (2015) 3. Terzioglu, N., Wever, R.: Integrating repair into product design education: insights on repair, design and sustainability. Sustainability 13, 10067 (2021) 4. https://markets.businessinsider.com 5. Francia, D., Ponti, S., Frizziero, L., Liverani, A.: Virtual mechanical product disassembly sequences based on disassembly order graphs and time measurement units. Appl. Sci. (Switzerland) 9(171), 3638 (2019) 6. Honkalas, R., Deshmukh, B., Pawar, P.: A review on design and efficiency improvement of worm and worm wheel of a gear motor. J. Phys. Conf. Ser. 1969, 012023 (2021) 7. Srinivasan, H., Gadh, R.: Efficient geometric disassembly of multiple components from an assembly using wave propagation. J. Mech. Des. 122, 179–184 (2000) 8. De Mello, L.S.H., Sanderson, A.C.: AND/OR graph representation of assembly plan. IEEE Trans. Robot. Autom. 6, 188–199 (1990) 9. Shah, P.B., Motka, C.: Analysis and optimization of gearbox efficiency – a review, IJIRST 1(8) (2015) 10. Nonelectronic Parts Reliability Data (dtic.mil). Accessed 25 Feb 2022

Can MaWR-Method for Symmetry Plane Detection be Generalized for Complex Panfacial Fractures?

Luca Di Angelo1, Paolo Di Stefano1, Lapo Governi2, Antonio Marzola2(B), and Yary Volpe2

1 Department of Industrial and Information Engineering, and of Economics, University of L'Aquila, via G. Gronchi 18, 67100 L'Aquila, Italy
2 Department of Industrial Engineering, University of Florence, via di Santa Marta 3, 50139 Firenze, Italy
[email protected]

Abstract. When dealing with craniofacial impairments, restoring the morphological condition is as crucial as restoring the functional one, to avoid psychosocial disabilities for the patient. To this aim, the accurate location of the midsagittal plane is essential for performing reliable symmetry analyses and guiding effective surgery planning. To provide a fully automatic and landmark-independent approach, capable of providing a midsagittal plane for the craniofacial skeleton even from anatomical models with high asymmetries, an innovative method, called MaWR-method, was developed by the authors in a previous work. This paper further investigates the MaWR-method by evaluating its capacity to produce a successful outcome even in the worst-case scenario that may be considered in maxillofacial surgery, namely panfacial fractures. In all the test cases considered in this work, the method proved robust and reliable in its original design. It provided a consistent result requiring no user involvement, even when dealing with extreme asymmetries due to extensive and complex fractures. Keywords: Feature recognition · Symmetry analysis · Mid-sagittal plane · Symmetry plane detection

1 Introduction

In craniofacial reconstructive surgery, the restoration of the morphological condition is just as crucial as the restoration of the functional impairments: the face is the most important means of connecting with other people, and any noticeable alteration of it could lead to significant psychosocial disability [1–4]. As a result, meticulous planning is frequently necessary to pre-define the surgical steps and the repositioning of bone segments: the main goal is to maximize symmetry between the two halves of the craniofacial skeleton, since facial symmetry is one of the most critical factors for achieving an excellent aesthetic result [5–10].


To achieve this goal, the knowledge of the three-dimensional craniofacial mid-sagittal plane (MSP) is essential for facial symmetry analysis, both in the pre- and post-operative stages, and for guiding surgery planning and simulation. In addition, the MSP is a powerful tool to help in the preoperative design of patient-specific devices [11–14]: as an example, when dealing with unilateral defects, it is used as the mirror plane to superimpose the contralateral healthy half on the defective one, providing a template for its restoration [8, 11]. Because of its importance, many approaches for locating the facial skeleton's plane of symmetry have been proposed [14]. Despite that, its location in clinical practice is still a subjective procedure, relying on the manual identification of specific anatomical landmarks on patients' 2D or 3D diagnostic images. The selection involves (i) landmarks that, as per their formal definition, are located on the midplane (named central landmarks, e.g., Nasion, Basion, Pogonion, …) or (ii) landmarks arranged symmetrically on either side of the facial skeleton, so that the midpoint of the segment joining them is located on the midplane (e.g., Porion, Frontozygomatic suture, …) [14]. Since landmarks are often challenging to identify precisely, due for example to the vast interpersonal variability or to the low resolution of diagnostic images, this strategy is prone to errors and depends solely on the user's skill [15]. Furthermore, the anatomy under investigation is usually deformed or injured, resulting in landmarks significantly displaced from their standard locations. This aspect is particularly significant with panfacial bone fractures, namely facial fractures simultaneously involving the face's upper, middle, and lower thirds. These complex fractures are usually extensive, multiple, and comminuted, often very unstable and characterized by hypermobility of the bone segments. In these cases, the anatomy under consideration is very distant from that expected: this makes the application of landmark-based methods for the definition of the symmetry plane ineffective, because most of the anatomical references have been lost. Consequently, dealing with cases such as panfacial fractures requires a robust, landmark-independent approach that can filter out the strong asymmetries that would otherwise affect the result. Even approaches based on machine learning could lead to questionable results when applied to such complex cases, since the model to be analyzed differs significantly from all the models used for training the network [16]. Among the eligible methods that can be found in the related literature, the method proposed by the authors in a previous paper [14] can be considered a potential solution to find the symmetry plane also in complex cases such as panfacial fractures. The method (named MaWR-method: Mirror and Weighted Registration method), fully automatic and landmark-independent, can extract a consistent mid-sagittal plane from the 3D voxel-based model of the anatomical region under consideration. It is a mirroring-and-registration approach based on an iterative registration algorithm that minimizes an original weighted objective function by using a Particle Swarm Optimization algorithm. The weighted objective function is appropriately designed to filter out any asymmetries. The algorithm proved to perform a robust evaluation even with significant asymmetries affecting the cranial bones [14].
In the present study, the algorithm’s usefulness range is expanded by examining its capacity to detect the midplane even in complex and severe craniofacial fractures. In particular, the algorithm was tested on 10 cases of panfacial fractures. The activity was carried out in cooperation with the Maxillofacial Surgery


Unit of the Careggi/CTO University Hospital of Florence (Italy), which selected the 10 test cases and provided the patients' anonymized Computed Tomography (CT) scans. Each set of CT scans was segmented by a skilled operator using the Materialise Mimics® software to obtain the 3D voxel-based model, in STL format, of the craniofacial bones. No further operations were performed on the obtained STL models before they were used as inputs for the MaWR-method. The resulting midplane was reviewed by the surgeons and analyzed according to an improved formulation of the Asymmetry Value (AV) index defined in [14]. In all the 10 cases, the method, in its original implementation, proved to be robust and reliable and provided the craniofacial midplane without requiring any interaction or further modification by the user, even with complex panfacial fractures.

2 Methods and Tools

2.1 MaWR-Method

As previously mentioned, the MaWR-method is a landmark-independent algorithm, based on a mirroring and registration approach, that detects the midsagittal plane and is specifically designed for the human craniofacial skeleton. The mirroring and registration strategy is based on an iterative registration algorithm that minimizes an original weighted objective function OF, described in Eq. (1), employing a Particle Swarm Optimization (PSO) algorithm. A comprehensive description of the method is provided in [14]. The MaWR-method is employed without modification in the tests presented in this study, utilizing the original Matlab® script. However, for the sake of clarity, a quick overview of the procedure is provided in the following. Operatively, the MaWR-method starts from a 3D voxel-based model (PC) and evaluates its symmetry plane through a preliminary first-attempt estimation, carried out with a Principal Component Analysis (PCA) algorithm, which is then refined iteratively until its final estimation. The final estimation of the MSP is obtained by the PSO algorithm that iteratively minimizes the objective function OF_i, whose expression at the i-th step is:

OF_i = \frac{\sum_{j=1}^{n} \left[ w_{i,j} \cdot d_{Hauss}\left(p_j, TS(PC_{m,i})\right) \right]}{\sum_{j=1}^{n} w_{i,j}} \qquad (1)

where:
– n is the number of points of the source point cloud PC;
– p_j is the j-th point belonging to PC;
– PC_{m,i} is the PC mirrored around the MSP estimated at the i-th iteration;
– TS(PC_{m,i}) is the tessellated surface of PC_{m,i};
– d_Hauss(p_j, TS(PC_{m,i})) is the Hausdorff distance between p_j and TS(PC_{m,i});
– w_{i,j} is the weight associated to p_j at the i-th iteration.

The weights w_{i,j} (Eq. (2)) represent the proposed method's most important innovation: they play an essential role in the functionality of the MaWR-method.

w_{i,j} = \frac{2}{\sigma_r}\, e^{-\left(\frac{d_{Hauss}\left(p_j,\, TS(PC_{m,i})\right)}{\sigma_r}\right)^{2}} \qquad (2)
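The following sketch illustrates the structure of Eqs. (1)–(2): the Leclerc weights and the weighted mean of the residual distances. It is a simplified illustration (the distances are a plain array rather than point-to-mesh Hausdorff distances, and no PSO loop over the plane parameters is included), not the authors' Matlab implementation.

# Simplified illustration of Eqs. (1)-(2): Leclerc weights applied to residual
# distances and the resulting weighted objective function. In the MaWR-method
# the residuals are Hausdorff point-to-mesh distances and the plane parameters
# are optimized by a PSO loop; here the residuals are just a 1D array.
import numpy as np

def leclerc_weights(d, sigma_r):
    """w = (2 / sigma_r) * exp(-(d / sigma_r)^2); the constant cancels in OF."""
    return (2.0 / sigma_r) * np.exp(-(d / sigma_r) ** 2)

def weighted_objective(d, sigma_r):
    """OF = sum(w_j * d_j) / sum(w_j)."""
    w = leclerc_weights(d, sigma_r)
    return np.sum(w * d) / np.sum(w)

# Residuals [mm]: mostly symmetric points plus a few strongly asymmetric ones
# (e.g. displaced fracture fragments) that the weights are meant to damp.
d = np.array([0.2, 0.4, 0.3, 0.5, 0.1, 8.0, 12.5, 0.3])
sigma_r = 5.0  # in the paper: 50% of the maximal skull width (value here illustrative)

print(f"Unweighted mean distance: {d.mean():.2f} mm")
print(f"Weighted objective OF:    {weighted_objective(d, sigma_r):.2f} mm")

Because the weights decay quickly with distance, points belonging to displaced or missing regions contribute almost nothing to OF, so the registration is driven by the symmetric part of the skeleton.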


The weight w_{i,j} is expressed according to the Leclerc function, and σ_r defines the radius at which the weight falls to 36.79% of its maximum value [17]. The weight w_{i,j} works as a filter that excludes from the registration process any local asymmetries, whether they are near or far from the symmetry plane. Large values of the parameter σ_r guarantee a robust registration, whereas small values afford a considerably more accurate registration of the symmetric parts, excluding all asymmetries. In the test cases reported hereafter, the value of σ_r is kept the same as in [14], namely 50% of the maximal width of the skull, since it proved effective in locating the MSP also for real skulls with significant asymmetries. In Eq. (1), the Hausdorff distance is calculated between each point p_j of the source PC and the tessellated surface TS(PC_{m,i}), instead of point-to-point, to avoid asymmetries in surface sampling and to make the distance as independent as possible of the point density.

2.2 Asymmetry Value Index

As stated in [14], evaluating the consistency of an MSP is not a trivial task, and it remains an open issue, because symmetry in human anatomy is only an abstraction: a symmetry plane in the strict sense of its definition cannot exist. This means that, whatever the considered MSP, one half is never a perfect mirror of the other in the human body. In [14] the authors proposed an index, named Asymmetry Value index (AV), able to summarize in a single number the overall differences between the two halves defined by a considered MSP. An AV value equal to zero indicates a perfectly symmetric model. For real skulls, the AV value cannot be zero; however, it can be used to evaluate and compare different MSPs. Operatively, a lower AV value corresponds to a more symmetric model. The AV index proposed in [14] by the authors is defined as follows:

AV = median(δ)    (3)

where δ is the vector of the distances dist(p_j, TS(PC_m)), i.e. the Euclidean distances between each point of the source point cloud (p_j) and its closest triangle of the tessellated surface TS(PC_m) of the mirrored configuration. The mirrored configuration is obtained by reflecting the PC about the MSP to be evaluated. The point-triangle distance makes it possible to avoid asymmetries in the surface model sampling. To increase the robustness of the index, the AV* proposed here and defined in Eq. (4) is calculated using the mean instead of the median. However, when using the mean, it is necessary to reduce the weight of small, severely asymmetrical regions (outliers are filtered out in a preliminary stage, as explained in the following Sect. 2.3). This goal is achieved by modelling the data contained in δ with a normal distribution N(δ̄, S), where δ̄ is the mean and S is the standard deviation of δ. A new vector δ̃ is obtained by filtering out from δ the data that lie farther than ±3S from the mean value δ̄. As a result, the implemented index can be defined as follows:

AV* = mean(δ̃)    (4)
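Under the same simplifications as above, the sketch below contrasts the original AV of Eq. (3) with the robust AV* of Eq. (4), i.e. the median of the residuals versus the mean computed after discarding values farther than ±3S from the mean; the residuals are synthetic and chosen only to show how a localized strong asymmetry affects the two indices differently.

# Illustrative computation of AV (median of the residuals, Eq. (3)) and of the
# robust AV* (mean after discarding values farther than 3 standard deviations
# from the mean, Eq. (4)). The residuals below are synthetic, point-to-point
# values, not distances measured on a real skull.
import numpy as np

def av_index(delta):
    """Original Asymmetry Value: median of the residual distances."""
    return np.median(delta)

def av_star_index(delta):
    """Robust AV*: mean of the residuals kept within +/- 3*S of the mean."""
    mean, std = delta.mean(), delta.std()
    kept = delta[np.abs(delta - mean) <= 3.0 * std]
    return kept.mean()

# A largely symmetric model with a localized, strongly asymmetric region
# (the moderate 6.0 mm values) and two extreme outliers.
delta = np.concatenate([np.full(900, 0.8), np.full(100, 6.0), [40.0, 55.0]])

print(f"AV  = {av_index(delta):.2f} mm")
print(f"AV* = {av_star_index(delta):.2f} mm")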

This new index enables a more robust evaluation than the original formulation, especially in cases with localized strong asymmetries, while ensuring the same consistency: the robust and the original formulations, in fact, are expected to give similar values in most


of the cases dealt with. In the Results section, the values calculated with the classical formulation of the AV index are reported along with the AV* values computed with the new approach.

2.3 Starting Data

The test cases were selected by the surgeons of the Maxillofacial Surgery Unit of the Careggi/CTO University Hospital of Florence (Italy) and include 10 patients (7 males, 3 females, mean age 38.6) affected by complex panfacial fractures. Henceforward, the 10 test cases will be identified by the codes PF#1 to PF#10. The starting data were the anonymized diagnostic images of each patient, acquired with a spiral Computed Tomography (CT) of the skull and provided in DICOM format. CT scans are suitable for hard tissues, since bones appear within a well-identifiable greyscale intensity window. Therefore, simple grey-value thresholding with easily recognizable lower and upper bounds can be applied to filter out the surrounding tissues from the ROI, generating a binary mask that partitions the images into the meaningful segment and the background (Fig. 1).

Fig. 1. a) a CT slice; b) histogram of the grey level in the CT slice; c) segmentation
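As a sketch of the grey-value thresholding described above (the actual segmentation was carried out with the dedicated tools of Materialise Mimics®), the snippet below builds a binary bone mask from a CT slice stored as a NumPy array; the Hounsfield-unit bounds are indicative values for bone, not those used by the operator.

# Minimal sketch of grey-value thresholding of a CT slice, assumed here to be a
# 2D NumPy array of Hounsfield units. The bounds are indicative values for
# bone, not the ones actually chosen by the operator in Materialise Mimics.
import numpy as np

def bone_mask(ct_slice, lower=300, upper=3000):
    """Binary mask: True where the intensity falls inside [lower, upper]."""
    return (ct_slice >= lower) & (ct_slice <= upper)

# Tiny synthetic slice: soft tissue (~40 HU), air (-1000 HU) and bone (~1200 HU).
ct_slice = np.array([[   40, -1000, 1200],
                     [ 1200,    40,   40],
                     [-1000,  1200,   40]])
print(bone_mask(ct_slice).astype(int))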

The DICOM files were processed by a skilled user employing Materialise Mimics® 24.0, which provides internal tools for such segmentation and for the subsequent generation of the corresponding 3D discrete scalar field, known as a voxel-based model. The software allows saving the voxel-based model into a Standard Triangulation Language (STL) file for easier manipulation. The STL format is a widespread standard supported by many commercial software packages and is widely used in Reverse Engineering and Additive Manufacturing applications. After the segmentation, the surgeon inspects each STL model of the craniofacial skeleton one last time before it is imported into the Geomagic Design X® software. At this stage, the outliers are filtered out using the software's dedicated tool. Moreover, each model is decimated as far as possible, imposing a maximum error of 0.1 mm with respect to the original model. As a result, the 10 models have around 310,000 points on average (minimum 195,840, maximum 531,236). This operation limits the computational time of the subsequent steps: since the MaWR-method is an iterative method based, as said, on a PSO algorithm, the larger the model, the longer the computational time.


No further operations were performed on STL models before being used as inputs for the MaWR-method. Figure 2 shows 4 of the 10 starting STLs of the addressed cases as an example.

Fig. 2. a) PF#1, b) PF#2, c) PF#3 and d) PF#4


3 Results

The MaWR-method was applied to the 10 test cases without modifications with respect to the method presented in [14] and briefly described in Sect. 2.1. Figure 3 shows the frontal view of the test cases shown in Fig. 2 with their MSP superimposed. The MSP is calculated using the MaWR-method. As regards the computational burden, it is closely related to the resolution of the surface model (the number of vertices forming the starting STL model is limited as described in Sect. 2.3). For the addressed test cases, the computational time was between 10 and 35 min.

Fig. 3. Frontal view of the test cases a) PF#1, b) PF#2, c) PF#3, d) PF#4. The red line represents the MSP computed with the MaWR-method

Expert maxillofacial surgeons from the Careggi/CTO University Hospital reviewed the consistency of the resulting 3D MSPs for all the test cases. Such a review was also performed by checking the actual location of each MSP with respect to the central landmarks. Because extensive fractures severely compromised the models, causing a lack of anatomical references, only the available landmarks were used to evaluate each model. As an example, since the fractures in PF#1 do not affect the upper region of the facial


skeleton, it is expected that, in this case, the MSP passes close to the Crista Galli (defined as the most superior point of the Crista Galli) and Nasion (most anterior point of the frontonasal suture in the mid-sagittal plane) landmarks. On the contrary, the lower region of the facial skeleton (namely mandible and maxilla) is less affected by the fracture in PF#4, so the Pogonion (defined as the most anterior point of the bony chin) and the Incisive Foramen (the midpoint of the incisive foramen) landmarks are considered in the outcome evaluation process. Figures 4 and 5 show the location of the MSP with respect to the reference landmarks for PF#1 and PF#4, respectively.

Fig. 4. MSP for PF#1 and its location with respect to a) the Crista Galli and b) the Nasion

Fig. 5. MSP for PF#4 and its location with respect to the Pogonion and the Incisive Foramen

Table 1 reports the AV values computed for the 10 test cases using, as explained in Sect. 2.2, both the robust (named AV*) and the original (AV) formulations. To make


possible a comparison between the new and the original approach, Table 1 also reports the AV values computed for the two healthy skulls, TC#1 and TC#2, presented in [14]. As mentioned above, the AV* index (like the AV) summarizes in a single value a quantitative evaluation of the model asymmetry: the more significant the asymmetry of the model, the higher the value of the index (the AV index of a perfectly symmetric model is 0). A simple comparison of the values in Table 1 shows that models PF#1 to PF#10 are affected by a much more significant asymmetry than TC#1 and TC#2, even without knowing the actual condition of the models under examination. In addition to measuring a model's asymmetries, AV* may support effective presurgical planning by quantitatively comparing different reconstructive strategies: the smallest AV* among the surgical alternatives indicates the best surgery outcome. Moreover, it can be used to assess the surgery outcome by comparing the postoperative AV* with the preoperative one and with the planned index.

Table 1. AV values computed through the new and the original formulation.

Test case     AV*    AV
PF#1          1.63   1.44
PF#2          2.04   1.83
PF#3          1.39   1.27
PF#4          1.67   1.96
PF#5          1.53   1.39
PF#6          2.12   1.91
PF#7          1.98   1.81
PF#8          1.85   1.63
PF#9          1.51   1.36
PF#10         1.64   1.42
TC#1 [14]     1.03   0.95
TC#2 [14]     0.75   0.66

Table 1 also shows that AV* and AV return very similar values: AV* has been introduced only to make the index more robust in the case of models with significant localized asymmetries and wide symmetrical areas. In the other cases (such as those under investigation), AV and AV* return comparable values, given how the two indices are defined. As shown in Table 1, AV* is usually greater than AV: this is because the areas of strong asymmetry are normally localized around the defects, while the rest of the anatomy is quite symmetrical. As said in Sect. 2.2, this leads to underestimating the asymmetry when it is computed using the median, as with the AV index. When, on the other hand, areas of asymmetry are widespread throughout the anatomy, the AV index tends to increase and may even be greater than AV* (as is the case for PF#4).


4 Conclusions

The knowledge of the midsagittal plane location is crucial in neurosurgical and maxillofacial surgery, mainly for analyzing craniofacial symmetry and guiding effective surgery planning. Proposing a robust automatic method that provides a reliable MSP is therefore a breakthrough. For this reason, in a previous paper the authors proposed a promising method, called MaWR-method, based on a mirroring-and-registration approach that uses a Particle Swarm Optimization algorithm to minimize a weighted objective function specifically designed to provide the MSP of the craniofacial skeleton. The strength of this method is that it is fully automatic and landmark-independent and produces a reliable result when dealing with asymmetric models. In the present paper, the applicability of the MaWR-method is extended by investigating its ability to detect the mid-sagittal plane of the human craniofacial skeleton also in the complex case of extensive and severe panfacial fractures. The reliability of the resulting MSPs was reviewed and confirmed by expert maxillofacial surgeons of the Careggi/CTO University Hospital. Even when dealing with very complex models, the method proved to be resilient and trustworthy in its original design, providing a consistent result and requiring no user interaction. The present paper also presents an improved version of the Asymmetry Value (AV) index, presented in its original formulation in [14] to provide a quantitative evaluation of craniofacial asymmetries. The improved formulation aims to increase the robustness of the AV since, in its original formulation, it could underestimate the asymmetry value for models characterized by vast symmetrical regions and localized strong asymmetries. Driven by these promising results, future efforts will be addressed to introducing this tool in the pre-operative phase, helping the surgeon define effective planning to maximize the outcome of the intervention for complex craniofacial fractures. For this purpose, a retrospective study will be conducted in collaboration with the Careggi/CTO University Hospital's maxillofacial surgeons. The study will compare the actual outcome of the intervention on patients who have already undergone reconstructive surgery with the virtual presurgical planning performed using the MaWR-method. The objective will be mainly focused on evaluating the effectiveness of the MaWR-method as a tool enabling a more reliable and precise pre-surgical evaluation, in terms of analysis of the extent of skeletal displacements for the surgical reconstruction, of the comparison between pre- and post-operative facial symmetry, and of pre- and post-operative orbital volumes and the variation of their position in the three dimensions.

References

1. Rhodes, G., et al.: Perceived health contributes to the attractiveness of facial symmetry, averageness, and sexual dimorphism. Perception 36(8), 1244–1252 (2007)
2. Gangestad, S.W., Thornhill, R., Yeo, R.A.: Facial attractiveness, developmental stability, and fluctuating asymmetry. Ethol. Sociobiol. 15(2), 73–85 (1994)
3. Rhodes, G., Proffitt, F., Grady, J.M., Sumich, A.: Facial symmetry and the perception of beauty. Psychon. Bull. Rev. 5, 659–669 (1998)
4. Fink, B., Neave, N., Manning, J.T., Grammer, K.: Facial symmetry and judgements of attractiveness, health and personality. Pers. Individ. Differ. 41(3), 491–499 (2006)


5. Svalina, A., Vuollo, V., Serlo, W., Sinikumpu, J.-J., Silvola, A.-S., Salokorpi, N.: Craniofacial characteristics and cosmetic satisfaction of patients with sagittal and metopic synostosis: a case–control study using 3D photogrammetric imaging. Child's Nervous System 38, 781–788 (2021). https://doi.org/10.1007/s00381-021-05434-y
6. Jones, B.C., Little, A.C., Penton-Voak, I.S., Tiddeman, B.P., Burt, D.M., Perrett, D.I.: Facial symmetry and judgements of apparent health: support for a "good genes" explanation of the attractiveness–symmetry relationship. Evol. Hum. Behav. 22(6), 417–429 (2001)
7. Wermker, K., Kleinheinz, J., Jung, S., Dirksen, D.: Soft tissue response and facial symmetry after orthognathic surgery. J. Cranio-Maxillofac. Surg. 42(6), e339–e345 (2014)
8. Gibelli, D., et al.: Assessing symmetry of zygomatic bone through three-dimensional segmentation on computed tomography scan and "mirroring" procedure: a contribution for reconstructive maxillofacial surgery. J. Cranio-Maxillofac. Surg. 46(4), 600–604 (2018)
9. De Momi, E., et al.: Automatic extraction of the mid-facial plane for cranio-maxillofacial surgery planning. Int. J. Oral Maxillofac. Surg. 35(7), 636–642 (2006)
10. Burgert, O., et al.: A VR-system supporting symmetry related cranio-maxillofacial surgery. Stud. Health Technol. Inf. 94, 33–35 (2003)
11. Buonamici, F., et al.: Reverse engineering techniques for virtual reconstruction of defective skulls: an overview of existing approaches. Comput. Aided Design Appl. 16(1), 103–112 (2018)
12. Marzola, A., Governi, L., Genitori, L., Mussa, F., Volpe, Y., Furferi, R.: A semi-automatic hybrid approach for defective skulls reconstruction. Comput. Aided Design Appl. 17(1), 190–204 (2020)
13. Marzola, A., Servi, M., Volpe, Y.: A reliable procedure for the construction of a statistical shape model of the cranial vault. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 788–800. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4_67
14. Di Angelo, L., Di Stefano, P., Governi, L., Marzola, A., Volpe, Y.: A robust and automatic method for the best symmetry plane detection of craniofacial skeletons. Symmetry 11(2), 245 (2019)
15. Tan, W., et al.: An approach to extraction midsagittal plane of skull from brain CT images for oral and maxillofacial surgery. IEEE Access 7, 118203–118217 (2019)
16. da Silva, R.D.C., Jenkyn, T.R., Carranza, V.A.: Convolutional neural networks and geometric moments to identify the bilateral symmetric midplane in facial skeletons from CT scans. Biology 10(3), 1–15 (2021)
17. Leclerc, Y.G.: Constructing simple stable descriptions for image partitioning. Int. J. Comput. Vision 3(1), 73–102 (1989)

Mixed Reality-Based Support for Total Hip Arthroplasty Assessment

Chiara Innocente, Pietro Piazzolla, Luca Ulrich(B), Sandro Moos, Stefano Tornincasa, and Enrico Vezzetti

DIGEP, Politecnico di Torino, C.so Duca degli Abruzzi 24, 10129 Turin, Italy
[email protected]

Abstract. The evaluation of hip implantation success remains one of the most relevant problems in orthopaedics. There are several factors that can cause its failure, e.g.: aseptic loosening and dislocations of the prosthetic joint due to implant impingement. Following a total hip arthroplasty, it is fundamental that the orthopaedist can evaluate which may be the possible risk factors that would lead to dislocation, or in the worst cases, to implant failure. A procedure has been carried out with the aim of evaluating the Range of Movement (ROM) of the implanted prosthesis, to predict whether the inserted implant is correctly positioned or will be prone to dislocation or material wear due to the malposition of its components. Leveraging on a previous patented methodology that consists in the 3D reconstruction and movement simulation of the hip joint, this work aims to provide a more effective visualization of the simulation results through Mixed Reality (MR). The use of MR for the representation of hip kinematics and implant position can provide the orthopaedic surgeon with a deeper understanding of the orientation and position of implanted components, as well as the consequences of such placements while looking directly at the patient. To this end, an anchoring system based on a body-tracking recognition library was developed, so that both completely automatic and human-assisted options are available without additional markers or sensors. An Augmented Reality (AR) prototype has been developed in Unity 3D and used on HoloLens 2, integrating the implemented human-assisted anchoring system option. Keywords: Computer-aided surgery · Total hip arthroplasty · THA assessment · Mixed reality · HoloLens 2

1 Introduction Total Hip Arthroplasty (THA) is currently one of the most performed surgical procedures worldwide, intended to replace the damaged bone with prosthetic components [1]. Given the great magnitude of the problem and the high prevalence of this surgery, it is fundamental that the orthopaedist can evaluate which may be the possible risk factors that could lead to dislocation, or in the worst cases, to implant failure [2]. Among the most recognized causes of dislocation, intraoperative implant displacement and inadequate patient compliance with postoperative precautions provided by the surgeon play a key role [3]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 159–169, 2023. https://doi.org/10.1007/978-3-031-15928-2_14


The occurrence of any complication can represent a risk to the patient's health status and quality of life, as well as an increased risk of implant revision. Moreover, it is related to a greater consumption of economic resources borne by the health care system [4], so that the sustainability of costs is an important problem for the public administration. It is therefore necessary to find tools that reduce complications by preventing them during surgery, and consequently reduce the resulting costs. A procedure has been carried out with the aim of evaluating the Range of Movement (ROM) of the implanted prosthesis, to predict whether the inserted implant is correctly positioned or will be prone to dislocation or material wear due to the malposition of its components. The ROM is defined as the maximum angular excursion of the joint before impingement occurs. For a healthy hip, it is defined by the following values [5]: 120° for Flexion, 30° for Extension, 45° for Abduction, 35° for Adduction, 45° for Intra-Rotation, 45° for Extra-Rotation. Among the goals of THA, ensuring the stability of the prosthetic components throughout the ROM necessary for daily activities is of paramount importance [6]. Up to now, the revision process is supported by an already patented methodology that consists in the 3D reconstruction of the pelvis and femoral prosthesis and the movement simulation of the hip joint [7]. This work aims to provide a more effective visualization of the simulation results using Mixed Reality (MR). Many studies have shown how MR visualization has made it possible to optimize the positioning of surgical tools and devices, to avoid multiple X-ray scans and consequently reduce radiation exposure, and to decrease surgical time in numerous orthopaedic procedures [8]. MR has found a wide range of applications in orthopaedics, showing great potential to change the current practice of medical training and clinical routine [9]. Different studies have used MR to visualise patient anatomy during surgery [10, 11], or as a guide for the insertion of prostheses [12, 13], pedicle screws [14, 15] and guide wires [16, 17]. In addition to surgery and intraoperative applications, MR is also establishing itself as an important surgical simulation [18, 19] and intraoperative training tool [20, 21], as well as a rehabilitation tool for patients [22]. In this work, through MR, the orthopaedic surgeon can gain a deeper understanding of the patient's status thanks to the ability to manipulate the 3D virtual model at will while looking directly at the patient. In this way, the orthopaedist can carefully evaluate every aspect of the prosthesis, from its placement to its orientation, and understand which contact points may be sources of possible complications. To this end, an anchoring system based on a body-tracking recognition library was developed, so that both completely automatic and human-assisted options are available without additional markers or sensors. Thus, it is possible to automatically recognize the right and left joint centres of the patient's hip and pin the joint hologram to the corresponding location in the real world. The method is described in Sect. 2, the results are shown in Sect. 3, and conclusions are drawn in Sect. 4.


2 Methods

The platform used to develop our MR-based methodology is Unity 3D (https://unity.com/). The implementation builds on an already patented procedure that comprises the 3D reconstruction of the pelvis and femoral prosthesis and the movement simulation of the hip joint [7]. The reconstruction of the virtual 3D models of the joint and the implant is performed starting from the DICOM images provided by the CT scan; the algorithm for the measurement of the ROM of the prosthesis then highlights the criticalities related to the femoral head and the acetabulum after the implantation of the prosthesis. The results obtained are visualized using MR and used to simulate the movement of the femur on the pelvis, while looking directly at the patient. For each simulated kinematic movement, reference values that the model should be able to achieve were set. Reference values have been taken from the work of Röling et al. [23] and are reported in Table 1.

Table 1. Reference values that the model should be able to achieve.

Movement            | Value
--------------------|------
Flexion             | 120°
Abduction           | 50°
Intra-rotation      | 50°
IN with 30 FL       | 60°
IN with 60 FL       | 40°
IN with 90 FL       | 30°
IN with 30 FL 20 AD | 50°
IN with 60 FL 20 AD | 40°
IN with 90 FL 20 AD | 30°
EXT with 15 EXL     | 15°
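As a simple illustration of how the reference values in Table 1 can be used, the Python sketch below compares hypothetical simulated impingement angles against them and flags the movements that fall short; the dictionary keys reproduce Table 1, while the sample angles are assumptions made only for the example.

```python
# Reference values from Table 1, keyed by movement name.
REFERENCE_ROM = {
    "Flexion": 120, "Abduction": 50, "Intra-rotation": 50,
    "IN with 30 FL": 60, "IN with 60 FL": 40, "IN with 90 FL": 30,
    "IN with 30 FL 20 AD": 50, "IN with 60 FL 20 AD": 40,
    "IN with 90 FL 20 AD": 30, "EXT with 15 EXL": 15,
}

def flag_critical_movements(simulated_impingement_deg):
    """Return the movements whose simulated impingement angle is below the reference."""
    return {m: (angle, REFERENCE_ROM[m])
            for m, angle in simulated_impingement_deg.items()
            if angle < REFERENCE_ROM[m]}

# Hypothetical simulation output for one patient.
print(flag_critical_movements({"Flexion": 95, "Abduction": 52, "IN with 90 FL": 18}))
```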

Each movement is simulated in MR using a local reference system of the femur. During the simulation, the femur moves along predefined directions until it reaches the pelvis in a range of angles from the neutral position (0°) to the previously calculated angle of impingement. Each movement is simulated as a single rotation or as a composition of rotations along predetermined axes using quaternions. To allow the simulated joint ROM to be compared to the patient’s actual ROM, an anchoring system based on a body-tracking recognition library was developed without the use of additional markers or sensors. Anchoring the 3D virtual model to the patient’s physiological joint allows the orthopaedist to perform joint ROM simulation while looking directly at the patient, obtaining real-time feedback of the simulation results on the anatomical hip.
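The quaternion-based composition of rotations described above can be sketched outside Unity as well; the following Python example (using scipy's Rotation) composes a flexion with an intra-rotation about assumed local femur axes and sweeps the angle from the neutral position up to a pre-computed impingement angle. Axes, angles and step size are illustrative assumptions, not the values used by the patented simulator.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical local femur axes; the real axes come from the patient-specific model.
FLEXION_AXIS = np.array([1.0, 0.0, 0.0])
INTRA_ROTATION_AXIS = np.array([0.0, 1.0, 0.0])

def compose_movement(flexion_deg: float, intra_rotation_deg: float) -> R:
    """Compose two rotations about predefined local axes using quaternions."""
    q_flex = R.from_rotvec(np.deg2rad(flexion_deg) * FLEXION_AXIS)
    q_rot = R.from_rotvec(np.deg2rad(intra_rotation_deg) * INTRA_ROTATION_AXIS)
    return q_rot * q_flex  # flexion is applied first, then the intra-rotation

def sweep_to_impingement(impingement_deg: float, flexion_deg: float = 30.0, step_deg: float = 10.0):
    """Sweep the intra-rotation from the neutral position (0°) up to the impingement angle."""
    for angle in np.arange(0.0, impingement_deg + step_deg, step_deg):
        yield angle, compose_movement(flexion_deg, angle)

for angle, pose in sweep_to_impingement(impingement_deg=60.0):
    print(f"{angle:5.1f} deg  quaternion (x, y, z, w): {np.round(pose.as_quat(), 3)}")
```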


To avoid the well-known issues of marker-based motion capture systems [24] and not to introduce additional devices such as depth cameras, the OpenPose library [25] has been chosen. Two options have been made available for the anchoring system, automatic or human-assisted, to identify the X and Y coordinates of the key points corresponding to human body joints. Then, according to the considered hemi-side of the patient, the joint relative to the hip is selected and its coordinates are appropriately mapped into real-world coordinates. A prototype has been developed in Unity 3D and then deployed on HoloLens 2 (https://docs.microsoft.com/en-us/hololens/hololens2-hardware). As seen in Fig. 1, interaction with the AR application occurs through three main panels. Users define to which hemi-side of the patient the loaded 3D models belong (Parameters & Anchoring), choose which movement to simulate (Movements), and set on which model to visualize the intersection, either pelvis or femoral prosthesis, and their transparency value (Bones Visibility).

Fig. 1. Unity scene view of panels and holograms at Start.
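The sketch below illustrates, in Python, how a hip key point could be picked from OpenPose output and back-projected to camera space; the BODY_25 indices, the confidence cut-off and the pinhole back-projection are assumptions of this example, since the actual mapping into HoloLens 2 world coordinates used by the application is device-specific.

```python
import numpy as np

# Assumed BODY_25 keypoint indices for the hip joints (verify against the OpenPose model in use).
R_HIP, L_HIP = 9, 12

def hip_keypoint(pose_keypoints: np.ndarray, side: str, min_conf: float = 0.3):
    """Return the (x, y) image coordinates of the requested hip joint, or None.

    `pose_keypoints` is expected in OpenPose's (num_people, num_joints, 3) layout,
    where the last axis holds (x, y, confidence).
    """
    if pose_keypoints is None or pose_keypoints.size == 0:
        return None  # no person detected in this frame
    person = pose_keypoints[0]  # assume a single patient in view
    x, y, conf = person[R_HIP if side == "right" else L_HIP]
    return (float(x), float(y)) if conf >= min_conf else None

def image_to_camera(xy, depth_m, intrinsics):
    """Back-project a pixel to camera-space coordinates with a pinhole model.

    `intrinsics` = (fx, fy, cx, cy); the depth at the hip pixel and the subsequent
    camera-to-world transform are outside the scope of this sketch.
    """
    fx, fy, cx, cy = intrinsics
    x, y = xy
    return np.array([(x - cx) * depth_m / fx, (y - cy) * depth_m / fy, depth_m])

# Example with a dummy detection: one person, 25 joints, right hip at pixel (640, 520).
dummy = np.zeros((1, 25, 3))
dummy[0, R_HIP] = (640.0, 520.0, 0.9)
print(hip_keypoint(dummy, "right"), image_to_camera((640.0, 520.0), 1.2, (500.0, 500.0, 640.0, 360.0)))
```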

In this Augmented Reality (AR) application the orthopaedist can position and manually manipulate the patient's 3D femur and pelvis models obtained from CT images after THA surgery, and interact with the holograms through hand gestures, gaze, and head movement. By observing the joint model and interacting with it, the surgeon can verify the kinematics through the implemented simulation and check the impingement limits for each movement while looking directly at the patient (Fig. 2 and Fig. 3). Moreover, to improve the visualization of the impingement zone without the hindrance of the model, it is possible to modify the transparency of the femoral prosthesis and pelvis, or to remove the models completely. This makes it easier for the orthopaedist to visualize bone intersections and the related prosthetic components, and to carefully evaluate the bone points affected by impingement.


A user interface provides the orthopaedist with the results obtained from the simulator, allowing the situation to be studied without ambiguity. For each simulated movement, the maximum angular excursion reached before articular impingement is reported. The impingement zone at the limit of the movement is coloured red by the simulator and displayed alternately on the pelvis or on the femur model (Fig. 4).

Fig. 2. 3D model anchored to the patient hip joint while lying on his back.

Fig. 3. 3D model anchored to the patient hip joint while performing Flexion movement simulation.


Fig. 4. Intersection for intra-rotation with 30° of flexion visualized on the femur. To improve visualization of the impingement volume, the close-up image of the intersection condition has been obtained by removing the background and modifying the transparency of the pelvis, exploiting the corresponding features of our application.

3 Results and Discussion

The developed methodology has been designed to simulate the hip joint movement after THA in order to assess the correct positioning of the prosthetic implant. Indeed, a bad positioning could result in excessive wear or even potential dislocations. At present, the surgeon provides indications to the patient after some tests consisting of carefully making him move the hip joint along predefined directions. If the patient feels pain after performing specific movements or cannot reach the expected ROM, countermeasures must be taken, possibly a new intervention in the worst case. These tests are susceptible to errors. The presented system can predict critical movements and allow the surgeon to provide more specific indications to the patient about the correct prosthesis use. By doing so, a simpler and more specific communication can also improve the doctor-patient relationship. Some parameters describing the performance of the developed methodology were evaluated. The spatial anchoring system allows the orthopaedist to visualize dynamic updates of the position and orientation of the 3D model of the joint with respect to the position of the anatomical joint in the real world [26]. Currently, several spatial anchoring strategies exist, leveraging sensor-based, marker-based or markerless tracking systems. The OpenPose library has been used as a markerless pose-detection tracking system for human re-identification, retargeting, and human-computer interaction [25]. Nakano et al. [27] calculated the accuracy of the used body tracking library by comparing its performance with that of an optical marker-based tracking system. Simulation accuracy is of paramount importance to correctly perform tests on the patient after THA and can be considered ± 1 degree along the predefined position [7]. Anchoring should be as robust as possible to provide the surgeon with a tool for visualising the simulation directly on the patient. Nonetheless, anchoring does not influence the simulation results, hence the accuracy obtained by Nakano et al. [27], below 30 mm in 80% of cases, has been judged sufficient for this work, also considering that the system is not intended to substitute the surgeon during the assessment, but to support him/her. That being said, future work could focus on anchoring accuracy to improve visualization and better support the test operator. To assess the performance of our automatic anchoring system, 9 sample videos were analysed. The mean frequency of automatic update of the 3D model position was calculated to be about 9.30 times per second. This is a software limitation, considering that HoloLens 2 would be able to reach update frequencies even 10 times higher. As regards the reliability of the anchoring system, the target was recognized in 97.22% of cases, with a standard deviation of 2.10%. Target detection failure can occur in cases of overlapping parts, missing or false part detection and blurred images. Tests have been performed to evaluate the methodology and the usability of HoloLens 2 for the intended purpose, considering the interaction with the virtual content and its visual perception. Seven subjects among biomedical and computer engineering students were recruited for the study and subjected first to a training phase to learn gestural interaction, and then to a test phase in which they had to complete certain tasks in sequence. The questionnaire submitted to the participants was formulated in order to assess visual perception, interaction, and ergonomics. The questionnaire, which is reported in Table 2, comprises 20 items, each rated using a 5-point Likert scale (from 1 "strongly disagree" to 5 "strongly agree"). The items express positive and negative attitudes towards a specific feature of the application. Participants were asked to indicate, for each item, their degree of agreement or disagreement with what the statement expresses. The items were drafted in such a way that favourable (affirmative sentences) and unfavourable (negative sentences) items could be compared. In particular, all statements were formulated in such a way that a high score (5) corresponded to a positive aspect for our study. The aim of these judgements was to delineate the subject's attitude towards the object of the test. Users expressed overall satisfaction with their own performances (Table 2). Positive feedback was obtained on the visual perception of the simulation results (items 1 to 8), showing that AR visualisation could improve the understanding of the implant situation and facilitate its evaluation. Gesture interaction was also found to be effective regardless of the previous level of experience with AR and HoloLens. In terms of ergonomics (items 9 to 13), users rated the use of the HoloLens positively and had no difficulties following the simulation and viewing the results.


Table 2. User testing questions. Items 1 to 8 refer to visual perception, items 9 to 13 refer to interaction and ergonomics, items 14 to 20 refer to workload and engagement. The table contains the occurrences of the answers given (1, 2, 3, 4, or 5) among the participants.

Item | Questionnaire item                                                                                  | 1 | 2 | 3 | 4 | 5
-----|------------------------------------------------------------------------------------------------------|---|---|---|---|---
1    | The virtual content is correctly aligned to real objects                                            | 0 | 0 | 0 | 3 | 4
2    | It is easy to perceive the spatial relationships between real and virtual objects                   | 0 | 0 | 1 | 5 | 1
3    | I did not notice motion of virtual content                                                          | 0 | 0 | 0 | 1 | 6
4    | I did not notice latency (lag, delay) between virtual content and real objects                      | 0 | 0 | 1 | 4 | 2
5    | I did not notice jitter (high frequency shaking of the virtual content)                             | 0 | 0 | 1 | 3 | 3
6    | I did not experience double vision                                                                  | 0 | 0 | 0 | 1 | 6
7    | I did not notice colour separation                                                                  | 0 | 0 | 1 | 4 | 2
8    | The field of view (FOV) is adequate for the application                                             | 0 | 1 | 3 | 1 | 2
9    | I did not experience postural discomfort during the application                                     | 0 | 0 | 1 | 2 | 4
10   | I did not experience visual fatigue                                                                 | 1 | 1 | 1 | 2 | 2
11   | Gesture interaction is easy and intuitive                                                           | 0 | 1 | 1 | 4 | 1
12   | It is easy to follow the movement simulation                                                        | 0 | 0 | 1 | 2 | 4
13   | It is easy to visualize the intersection volumes obtained from the simulation                       | 0 | 0 | 1 | 3 | 3
14   | The task was not mentally demanding (mental demands)                                                | 0 | 0 | 2 | 4 | 1
15   | The task was not physically demanding (physical demands)                                            | 0 | 0 | 0 | 0 | 7
16   | The pace of the task was not hurried or rushed (temporal demands)                                   | 0 | 0 | 1 | 3 | 3
17   | I was successful in performing the task (own performance)                                           | 0 | 0 | 1 | 2 | 4
18   | I have worked hard to achieve my level of performance (effort)                                      | 2 | 0 | 2 | 1 | 2
19   | Performing the task is engaging (engagement)                                                        | 0 | 0 | 0 | 1 | 6
20   | I was not insecure/discouraged/irritated/stressed/annoyed while performing the task (frustration)   | 1 | 0 | 1 | 2 | 3

The lowest scores concern the HoloLens field of view, which was judged insufficient by some of the users, and the level of effort involved. This could be explained considering that, especially up close, it is not possible to view all the user interface panels and the 3D model at the same time. A solution would be to optimise the size of the panels for the user's field of view, so that the user never looks away from the simulation. Another important aspect concerns the user interfaces, designed in an attempt to give the user maximum freedom of visualisation. Their correct use therefore requires learning the gestures for interacting with different types of buttons (simple buttons, radio buttons, toggle buttons, sliders, etc.), suggesting that substantial training may be required.


4 Conclusion

Total hip arthroplasty is a surgical procedure that could greatly benefit from 3D simulation. In this context, the possibility of assessing the success of the surgery immediately after prosthesis insertion could play a key role. In this work, we present an innovative post-operative THA assessment tool, which uses patient-specific 3D modelling to carefully evaluate the outcome of the surgical case under investigation. Using MR, the quality of the simulation is improved thanks to a more effective visualisation of results, and realistic real-time feedback is obtained by directly observing the simulation results on the patient. Moreover, an interactive and immersive experience for the orthopaedist is built, enriching the field of view with information that facilitates the evaluation of the surgery compared to the normal follow-up medical examination. Future work will focus on setting up a prototype equipped with a completely automatic anchoring system. This task will be faced both from the software and the hardware side, identifying the most suitable solutions to link the body tracking library and the smart glasses. Then, the whole procedure will be tested on a more extended set of case studies for validation.

References

1. Ferguson, R.J., Palmer, A.J., Taylor, A., Porter, M.L., Malchau, H., Glyn-Jones, S.: Hip replacement. The Lancet 392, 1662–1671 (2018). https://doi.org/10.1016/S0140-6736(18)31777-X
2. Ulrich, S.D., et al.: Total hip arthroplasties: what are the reasons for revision? Int. Orthop. 32, 597 (2008). https://doi.org/10.1007/S00264-007-0364-3
3. Bowler, D.J.M., Flandry, F.: Prevalence of femoroacetabular impingement in younger patients undergoing total hip arthroplasty. J. Surg. Orthop. Adv. 21, 122–125 (2012). https://doi.org/10.3113/JSOA.2012.0122
4. Gademan, M.G.J., Hofstede, S.N., Vliet Vlieland, T.P.M., Nelissen, R.G.H.H., Marang-Van de Mheen, P.J.: Indication criteria for total hip or knee arthroplasty in osteoarthritis: a state-of-the-science overview. BMC Musculoskelet. Disord. 17, 463 (2016). https://doi.org/10.1186/S12891-016-1325-Z/TABLES/4
5. Turley, G.A., Ahmed, S.M.Y., Williams, M.A., Griffin, D.R.: Establishing a range of motion boundary for total hip arthroplasty. Proc. Inst. Mech. Eng. H 225, 769–782 (2011)
6. Giachino, M., et al.: Dynamic evaluation of THA components by Prosthesis Impingement Software (PIS). Acta Bio-Medica: Atenei Parmensis 92, e2021295 (2021)
7. Ulrich, L., Montrucchio, B., Vezzetti, E., Aprato, A., Massè, A.: Sistema e metodo di stima di motilità di un'articolazione oggetto di impianto protesico
8. Gribaudo, M., Piazzolla, P., Porpiglia, F., Vezzetti, E., Violante, M.G.: 3D augmentation of the surgical video stream: toward a modular approach. Comput. Methods Programs Biomed. 191, 105505 (2020). https://doi.org/10.1016/J.CMPB.2020.105505
9. Jud, L., et al.: Applicability of augmented reality in orthopedic surgery – a systematic review. BMC Musculoskelet. Disord. 21, 103 (2020). https://doi.org/10.1186/s12891-020-3110-2
10. Hiranaka, T., et al.: Augmented reality: the use of the PicoLinker smart glasses improves wire insertion under fluoroscopy. World J. Orthop. 8, 891–894 (2017). https://doi.org/10.5312/wjo.v8.i12.891


11. Maria, A., et al.: Camera-augmented mobile C-arm (CamC): a feasibility study of augmented reality imaging in the operating room. Int. J. Med. Robotics Comput. Assist. Surg. 14(2), e1885 (2018). https://doi.org/10.1002/rcs.1885
12. Ogawa, H., Hasegawa, S., Tsukada, S., Matsubara, M.: A pilot study of augmented reality technology applied to the acetabular cup placement during total hip arthroplasty. J. Arthroplasty 33, 1833–1837 (2018). https://doi.org/10.1016/J.ARTH.2018.01.067
13. Tsukada, S., Ogawa, H., Nishino, M., Kurosaka, K., Hirasawa, N.: Augmented reality-based navigation system applied to tibial bone resection in total knee arthroplasty. J. Exp. Orthop. 6(1), 44 (2019). https://doi.org/10.1186/s40634-019-0212-6
14. Elmi-Terander, A., et al.: Feasibility and accuracy of thoracolumbar minimally invasive pedicle screw placement with augmented reality navigation technology. Spine 43, 1018–1023 (2018). https://doi.org/10.1097/BRS.0000000000002502
15. Gibby, J., Cvetko, S., Javan, R., Parr, R., Gibby, W.: Use of augmented reality for image-guided spine procedures. Eur. Spine J. 29(8), 1823–1832 (2020). https://doi.org/10.1007/s00586-020-06495-4
16. van Duren, B.H., Sugand, K., Wescott, R., Carrington, R., Hart, A.: Augmented reality fluoroscopy simulation of the guide-wire insertion in DHS surgery: a proof of concept study. Med. Eng. Phys. 55, 52–59 (2018). https://doi.org/10.1016/J.MEDENGPHY.2018.02.007
17. Befrui, N., et al.: 3D augmented reality visualization for navigated osteosynthesis of pelvic fractures. Der Unfallchirurg 121(4), 264–270 (2018). https://doi.org/10.1007/s00113-018-0466-y
18. Logishetty, K., Western, L., Morgan, R., Iranpour, F., Cobb, J.P., Auvinet, E.: Can an augmented reality headset improve accuracy of acetabular cup orientation in simulated THA? A randomized trial. Clin. Orthop. Relat. Res. 477, 1190–1199 (2019). https://doi.org/10.1097/CORR.0000000000000542
19. Luciano, C.J., Pat Banerjee, P., Brad Bellotte, G., Michael, O., Lemole, M., Charbel, F.T., Roitberg, B.: Learning retention of thoracic pedicle screw placement using a high-resolution augmented reality simulator with haptic feedback. Operative Neurosurg. 69, ons14–ons19 (2011). https://doi.org/10.1227/NEU.0b013e31821954ed
20. Yeo, C.T., Ungi, T., Thainual, P.-U., Lasso, A., McGraw, R.C., Fichtinger, G.: The effect of augmented reality training on percutaneous needle placement in spinal facet joint injections. IEEE Trans. Biomed. Eng. 58(7), 2031–2037 (2011). https://doi.org/10.1109/TBME.2011.2132131
21. Condino, S., et al.: How to build a patient-specific hybrid simulator for orthopaedic open surgery: benefits and limits of mixed-reality using the Microsoft HoloLens. J. Healthc. Eng. 2018, 5435097 (2018). https://doi.org/10.1155/2018/5435097
22. Sharma, A., Niu, W., Hunt, C.L., Levay, G., Kaliki, R., Thakor, N.V.: Augmented reality prosthesis training setup for motor skill enhancement (2019)
23. Röling, M.A., Visser, M.I., Oei, E.H.G., Pilot, P., Kleinrensink, G.-J., Bloem, R.M.: A quantitative non-invasive assessment of femoroacetabular impingement with CT-based dynamic simulation – cadaveric validation study. BMC Musculoskelet. Disord. 16(1), 50 (2015). https://doi.org/10.1186/s12891-015-0504-7
24. Kanko, R.M., Laende, E.K., Davis, E.M., Scott Selbie, W., Deluzio, K.J.: Concurrent assessment of gait kinematics using marker-based and markerless motion capture. J. Biomech. 127, 110665 (2021). https://doi.org/10.1016/j.jbiomech.2021.110665
25. Cao, Z., Hidalgo, G., Simon, T., Wei, S.E., Sheikh, Y.: OpenPose: realtime multi-person 2D pose estimation using part affinity fields. IEEE Trans. Pattern Anal. Mach. Intell. 43, 172–186 (2018). https://doi.org/10.1109/TPAMI.2019.2929257


26. Teatini, A., Kumar, R.P., Elle, O.J., Wiig, O.: Mixed reality as a novel tool for diagnostic and surgical navigation in orthopaedics. Int. J. Comput. Assist. Radiol. Surg. 16, 407–414 (2021). https://doi.org/10.1007/S11548-020-02302-Z/FIGURES/4
27. Nakano, N., et al.: Evaluation of 3D markerless motion capture accuracy using OpenPose with multiple video cameras. Front. Sports Active Living 2, 50 (2020). https://doi.org/10.3389/fspor.2020.00050

A Semiautomatic Procedure to Assist Physicians in Paediatric Airway Stenting

Chiara Santarelli1(B), Luca Puggelli1, Yary Volpe1, Paola Serio2, and Rocco Furferi1

1 Department of Industrial Engineering, University of Florence, Florence, Italy
{chiara.santarelli,luca.puggelli,yary.volpe,rocco.furferi}@unifi.it
2 Respiratory Endoscopy Unit, Department of Paediatric Anesthesia and Intensive Care, Meyer Children Hospital, Florence, Italy
[email protected]

Abstract. Tracheomalacia (TM) is a tracheal weakening that causes the cartilage structure to collapse during breathing. The term tracheobronchomalacia (TBM) is used when the bronchi are also implicated in the disease. This illness can be treated in a variety of ways, including the insertion of stents in the cartilage-restricted region. Stents are commercially available in numerous sizes (diameter and length), different shapes and materials. The shape of the stent is chosen based on where the disease is located, while the size is derived from measurements on CT images or by endoscopic investigations. This task is extremely critical for the patient's health, since an improper stent can have serious and even deadly repercussions. It is therefore necessary to choose a stent that fits the patient's anatomy as closely as possible. To this end, it is possible to employ a three-dimensional model of the investigated anatomy and choose the most suitable type of stent following measurements made on the model itself. In this work, a method to assist physicians in the choice of the stent to be used in TM or TBM patients is proposed. By analyzing the tracheobronchial tree and measuring successive cross-sections along the centerline of the lumen, the automated procedure localizes the diseased area and identifies the optimal shape and size of the stent to be implanted. Four case studies of paediatric patients were performed and the results were validated by a team of physicians specialized in the treatment of upper and lower airways. Keywords: 3D modelling · Reverse engineering · Stent · Airway obstruction

1 Introduction

Tracheomalacia (TM), a structural weakening of the tracheal wall and supporting cartilage, is one of the most frequent congenital central airway anomalies. Excessive expiratory collapsibility of the trachea (softened or damaged), which is prone to occlude while breathing or coughing, is a result of this disease. The anomaly can involve the whole trachea or can be localized to one segment only.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 170–180, 2023. https://doi.org/10.1007/978-3-031-15928-2_15


Depending on the part involved in the disease, different terms are used. If the mainstem bronchi are involved as well, the term tracheobronchomalacia (TBM) is employed [1]. TM and TBM are rare conditions that usually present at birth, when the cartilage of a malacic trachea does not properly develop during pregnancy, remaining weak and flabby. Nevertheless, there are reported cases in which the anomaly occurred later in life [2]. In these cases, the airway weakening is generally caused by a long intubation time or a recurring inflammation/infection of the trachea. Both conditions lead to a difficulty in breathing (dyspnoea) that worsens over time and can result in death. TBM and TM represent more than half of all cases of congenital pathologies which affect the airways in children. Due to the severity of the disease and to the possibility of death, TBM and TM must be properly and rapidly treated to avoid serious complications. Several treatment options are available to restore proper patency of the tracheobronchial tree, including surgery, non-invasive positive pressure ventilation (NIPPV) and the placement of a stent in the damaged region of the airway. Stents have come a long way in the last 30 years, evolving from an experimental device into a vital component in several sectors of medicine and surgery. The purpose of a tracheal stent is to provide support for the trachea and the surrounding tissue and to keep the tracheal lumen open. Airway stenting in adults is often used palliatively in malignancy or, for benign lesions, in older patients who cannot tolerate curative surgery. In paediatric patients, tracheal or bronchial stent placement usually concerns benign diseases, and it can be useful either for temporary luminal stabilisation after airway surgery or for relief of severe airway malacia or stenosis when all other medical and surgical options have failed or are contraindicated [3]. In all cases, the use of internal stents must take into account the usually benign nature of the obstruction, the soft airway structures, and the substantial luminal growth that must be accommodated by the internal device. To achieve the best possible fit with the airway's tissue, stents are available in numerous sizes (diameter and length), different shapes and materials [4]. Concerning the manufacturing material, stents can be classified into three main categories: metal, silicone and biodegradable stents. Metal stents are expandable wire mesh tubes which can be classified into two categories: 1) stainless steel balloon-expandable metal stents, 2) self-expandable metal stents (SEMS) (see Fig. 1A), which can also be in hybrid form, i.e., made of nitinol with a polymer coating such as silicone or polyurethane [5–7]. All metal stents present a low probability of migration, due to their complete epithelialization, a condition that, however, makes them very difficult to remove [8–11]. Furthermore, due to the expanding pressure exerted, this condition raises the likelihood of airway perforation and fistula development [12, 13]. In addition, being rigid, they do not easily adapt to the complex shapes of pathological tissues [14, 15]. For what concerns silicone stents, two main types are commercially available: Dumon™ stents [16], considered the gold standard in airway stenting, and the Polyflex™ stent, a self-expanding stent rarely used in paediatric patients [15]. The Dumon stent is a tubular silicone stent with external pins positioned at regular intervals to improve the grip with the airway walls (see Fig. 1B). The main advantage of this type of stent is that it is easy to insert and remove, so it is appropriate in patients who require short-term treatment. A major disadvantage is the occurrence of frequent migration, often due to an inappropriate stent diameter. Granulation tissue formation and occlusion with secretions also occur frequently, because this type of stent tends to impair mucociliary clearance [17].


Fig. 1. Different types of commercially available stents: (A) metallic stent; (B) Dumon™ stent by NOVATECH® GSS™; (C) biodegradable stent.

In recent years, to overcome the limitations arising from using metal and silicone devices, polydioxanone (PDS) stents have been introduced [18, 19] (see Fig. 1C). Being resorbable, they do not need to be removed and are completely reabsorbed within 3 to 4 months after their placement [20–22]. However, their resorbability can also be a great disadvantage if the tracheal walls are not sufficiently rigid once the stent has been completely absorbed. Whatever type of stent is used (metallic, silicone, resorbable), the shape of the stent is chosen according to the localization of the pathology, while the size depends on the extension of the pathological area and the anatomical geometry of the patient. These dimensions are usually determined during an endoscopic examination by using flexible bronchoscopy [14]. It is worth noting that this is a delicate task, since an improper stent can have serious and even deadly repercussions for the patient, such as device migration or incorporation of the mucosa, which can lead to possible erosion and perforation. Therefore, it is necessary to choose a stent whose size fits as closely as possible to the patient's anatomy. The assessment of the airway diameter by using flexible bronchoscopy is a very complex task, especially in the case of paediatric patients, who have small airways [23]. To this end, a three-dimensional model of the investigated anatomy may be used. More specifically, starting from the patient's CT images, the 3D model of the tracheobronchial tree can be reconstructed and the most suitable type of stent can be selected based on measurements taken on the 3D model [24]. This leads to a more accurate analysis of the patient's geometry and allows more complete measurements than those performed on two-dimensional endoscopic and diagnostic images of planar anatomical sections. In this work, a method to assist physicians in the choice of the stent to be used in patients affected by TM or TBM is proposed. In particular, the procedure allows for the analysis of the anatomical geometric features of the pathological area to be treated and, consequently, the selection of the most suitable stent (shape and size) among those currently available on the market. Starting from the three-dimensional model of the tracheobronchial tree, the proposed procedure automatically identifies the position of the branches, distinguishes the trachea and bronchi, and determines how the airway lumen varies. The various tracts are distinguished and examined independently in order to identify and accurately pinpoint the malacic region(s) in which the stent must be placed. To this aim, the tracheobronchial tree is deeply analysed by automatically retrieving and measuring successive cross-sections along the lumen centerline. Once the malacic region is intercepted, nearby cross-sections are used to define the ideal shape, length, and diameter of the device. The procedure of identification and analysis of the diseased area was applied to four different airways of paediatric patients. This work has also been carried out with a view to being extended to the realization of custom stents specifically designed on the anatomical geometry of the patient. Thanks to the progress of 3D printing and new additive manufacturing techniques, following an automatic procedure it would be possible to produce custom stents optimized for a particular patient, preventing potential problems arising from the use of commercial ready-made stents.

2 Method

The procedure begins with the segmentation of diagnostic images of the main anatomical regions of interest for the specific application. Imaging data can be used as input to specialised software programs such as Materialise Mimics®, a certified diagnostic image management application, to identify such anatomical structures. Using software tools based on grey levels, a human user determines, in each image of the dataset, the pixels that correspond to the tissue to be recreated. An example of tracheobronchial tree segmentation is shown in Fig. 2.

Fig. 2. Tracheobronchial tree segmentation: (A) coronal section; (B) axial section; (C) sagittal section; (D) three-dimensional model.

Segmentation can be performed automatically, by selecting a specific range of grey level values for the whole image dataset, or manually, by identifying the pixels of interest in each slice. In the case of anatomies comprising multiple tissues, such as the trachea and bronchi, the segmentation process begins with an automated threshold phase, in which pixels belonging to a specified range are detected, followed by a phase in which previously excluded pixels are manually identified on each image. The obtained segmentation is then converted into a three-dimensional geometric model of the specific anatomy (see Fig. 2D) using existing, well-known voxel-based 3D reconstruction algorithms [25, 26]. This strategy, whose reliability is demonstrated in the scientific literature, is considered today a gold standard, widely adopted in multiple biomedical applications, such as prosthesis design and/or simulators used in preoperative planning [27]. The accuracy of CT-based reconstruction has been demonstrated in [28]. Once the three-dimensional model of the tracheobronchial tree is obtained, its centerline is created using the Centerline Fit function in Materialise Mimics®, which calculates the centerline along a tubular structure (see Fig. 3). Finally, the 3D model and the centerline are exported in STL (Stereo Lithography interface format) and STEP (Standard for the Exchange of Product data) file formats, respectively.
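A generic sketch of the threshold-plus-voxel-based reconstruction step is given below in Python (using scikit-image); the authors use Materialise Mimics® for this stage, so the HU window and the absence of the manual per-slice editing phase are assumptions of the example.

```python
import numpy as np
from skimage import measure

def reconstruct_airway_mesh(ct_volume: np.ndarray, spacing, hu_low=-1024, hu_high=-400):
    """Threshold segmentation followed by voxel-based surface extraction.

    Generic sketch of the threshold + voxel-based reconstruction step described
    above: the HU window for the air-filled lumen is an assumption, and the
    manual per-slice editing phase is not reproduced here.
    """
    mask = (ct_volume >= hu_low) & (ct_volume <= hu_high)      # automated threshold phase
    verts, faces, _normals, _values = measure.marching_cubes(
        mask.astype(np.uint8), level=0.5, spacing=spacing      # voxel-based 3D reconstruction
    )
    return verts, faces  # vertices in mm and triangle indices (an STL-like mesh)

# Example on a synthetic volume with 1 x 1 x 1 mm voxels.
volume = np.full((50, 50, 50), 40, dtype=np.int16)             # "soft tissue" background
volume[:, 20:30, 20:30] = -900                                 # air-filled "lumen"
v, f = reconstruct_airway_mesh(volume, spacing=(1.0, 1.0, 1.0))
print(v.shape, f.shape)
```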

Fig. 3. Centerline of the reconstructed tracheobronchial 3D model

2.1 Stent Characterization Procedure

Once the polygonal mesh and the corresponding centerline are obtained, a fully automatic procedure checks for the presence of TBM and, if present, localises the airway occlusion(s) and defines the geometrical properties of the optimal stent. In particular, the procedure aims at defining: 1. the stent shape (cylindrical or Y-shaped); 2. the diameter(s) (one diameter in the case of a cylindrical shape, three in the case of a Y-shape); 3. the length (one dimension in the case of a cylindrical shape, three in the case of a Y-shape).

Centerline Optimization. After the 3D model of the tracheobronchial tree is obtained and the centerline is retrieved, a series of tasks needs to be performed on the centerline. Such a step, called centerline optimization, aims at detecting the "starting point" of the tracheobronchial tree and its branches, and at retrieving their structure. In fact, the input file (STEP) does not provide any information about the subdivision into branches (e.g. trachea, right main bronchus and so on) or the overall structure of the tree. Since the centerline of a discrete geometry (STL, voxel) is a polygonal chain, the STEP file reports the 3D coordinates of pairs of connected vertices, usually listed in consecutive order. In this way, the first pair is given by vertices A and B, the second by B and C, the third by C and D, and so on. Unfortunately, due to the tree-like geometry, there is no certainty of the pairs' consecutiveness after the first airway bifurcation, as shown in Fig. 4.

Fig. 4. Error in the pairs' consecutive order in the STEP file.

A Semiautomatic Procedure to Assist Physicians

175

To overcome this issue, the procedure starts with the detection of the centerline key vertices, which are the ending vertices and the bifurcation vertices (hereinafter referred to as EV and BV, respectively). This operation is quite simple, exploiting the structure of the STEP file: it is possible to retrieve which points are repeated only once (ending vertices), twice (central vertices) or three times (bifurcation vertices). Referring to Fig. 4, the vertex pairs would be A-B, B-C, C-D, D-E, C-X: vertices A, E and X are repeated only once and they are EVs (only one connection), B and D are repeated twice and they are central vertices (two connections), and C is repeated three times and is the BV. After retrieving the EVs, the one relative to the tracheal section is defined as the starting vertex (SV). This can be easily retrieved since it corresponds to the one with the highest z coordinate. This is because the patient's position during head CT scans is predefined, with the patient lying supine on the bed, with the head on the side of the CT scanner. Consequently, the DICOM z-axis corresponds to the patient's longitudinal axis, pointing towards the patient's head. The connectivity list is then re-organized into branches (i.e. centerline sections comprised between two consecutive key vertices) by tracing the centerline starting from the SV. In this way, the first analyzed branch is the trachea. At each tracing step, the following vertex is retrieved by analyzing the vertex pairs. Once detected, it is added to the branch matrix and the vertex pair is deleted. The iterative branch tracing procedure ends when a bifurcation or an ending vertex is reached. The overall procedure starts with the trachea, then proceeds with the main bronchi (both start from the first bifurcation vertex BV1).

Fig. 5. (A) First BV (BV1) and concurring branches (main bronchi); (B) centerline completely partitioned in sections.

The order in which each bronchus is analyzed depends on the STEP file. Starting from BV1, a first branch is traced until the next BV (Fig. 5A). Then, the second branch is traced, restarting from BV1. Left and right are discriminated by analyzing the directions of the vectors from BV1 to BV2 and from BV1 to BV3. The overall tracing proceeds in the same way for the successive branches. The right side is finished first, followed by the left. The procedure ends when all EVs are reached. Finally, each branch is simplified by means of a Douglas-Peucker-based algorithm [29], with a pre-defined tolerance. This last operation is required to avoid instability issues in the definition of the cross-section planes in the following step. At this point, the centerline is correctly partitioned into sections (Fig. 5B).
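A minimal Python sketch of the key-vertex classification and branch tracing described above is shown below, using the toy connectivity of Fig. 4; the left/right discrimination via the BV1 vectors and the Douglas-Peucker simplification are omitted, and the data structures are assumptions of the example.

```python
from collections import Counter, defaultdict

def classify_key_vertices(pairs):
    """Classify centerline vertices from the list of connected-vertex pairs.

    A vertex appearing once is an ending vertex (EV), twice a central vertex,
    and three times a bifurcation vertex (BV), as described above.
    """
    counts = Counter(v for pair in pairs for v in pair)
    ev = {v for v, c in counts.items() if c == 1}
    bv = {v for v, c in counts.items() if c == 3}
    return ev, bv

def trace_branches(pairs, start_vertex, ev, bv):
    """Partition the polygonal chain into branches between consecutive key vertices."""
    adjacency = defaultdict(list)
    for a, b in pairs:
        adjacency[a].append(b)
        adjacency[b].append(a)
    branches, frontier, visited = [], [start_vertex], {start_vertex}
    while frontier:
        key = frontier.pop(0)
        for nxt in adjacency[key]:
            if nxt in visited:
                continue
            branch, prev, cur = [key, nxt], key, nxt
            while cur not in ev and cur not in bv:          # walk until the next key vertex
                prev, cur = cur, next(v for v in adjacency[cur] if v != prev)
                branch.append(cur)
            visited.update(branch)
            branches.append(branch)
            if cur in bv:
                frontier.append(cur)                        # continue tracing past the bifurcation
    return branches

# Toy example mirroring Fig. 4: A-B-C-D-E with a side branch C-X.
pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("C", "X")]
ev, bv = classify_key_vertices(pairs)        # EV = {A, E, X}, BV = {C}
print(trace_branches(pairs, "A", ev, bv))    # [['A', 'B', 'C'], ['C', 'D', 'E'], ['C', 'X']]
```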


Cross-Sections' Analysis. Based on the original centerline (CL) and the optimized one (OCL), it is possible to proceed with the analysis of each branch by means of several cross-sections. For the sake of clarity, consider as an example the branch "j", which starts at the OCL point Pj,start and ends at the OCL point Pj,end. A set Nj of unit vectors that maps the direction of each subset of the branch is defined as follows:

$$N_j = [\, n_{j,\mathrm{start}}, \ldots, n_{j,i}, \ldots, n_{j,\mathrm{end}-1} \,] \qquad (1)$$

$$n_{j,i} = \frac{P_{j,i+1} - P_{j,i}}{\lVert P_{j,i+1} - P_{j,i} \rVert} \qquad (2)$$

in which Pj,i is a generic point of the OCL that belongs to branch "j". Let OCLj and CLj be the portions of the OCL and of the CL that belong to branch "j". Since OCL and CL are sorted in the same order and OCL is a subset of CL (so that each point of OCL also belongs to CL), it is possible to collect the points pj,m ∈ CLj into subsets Kj,i comprised between two generic consecutive points [Pj,i, Pj,i+1] ∈ OCLj. It is then possible to associate with each subset pj,i,m ∈ Kj,i the unit vector nj,i, and hence to define a set of planes πj,i,m which originate at pj,i,m and whose unit normal vector is nj,i. These planes are locally normal to the centerline, being normal to its direction, and are properly spaced from each other, since the CL is obtained directly from the CT scan and has approximately the same spatial resolution as the CT-scan voxels. Finally, the obtained planes are used to detect a set of cross-sections of the polygonal model. For each one, both the perimeter γj,i,m and the area Aj,i,m are evaluated.

TBM Detection and Stent Characterization. Once the geometric features of each cross-section have been evaluated, both the median area (Aj,MED) and the minimum area (Aj,MIN) are computed for each branch "j". Based on a set of percentage thresholds (one for each branch, i.e., one for the trachea, one for the right main bronchus and so on) suggested by expert physicians, it is possible to define a threshold for each branch, as expressed in Eq. (3):

$$T_j = T_{j\%} \cdot A_{j,\mathrm{MED}} \qquad (3)$$

where:
• Tj = threshold value for branch "j";
• Tj% = percentage threshold on the median value for branch "j".

If Aj,MIN < Tj, the branch "j" is considered critical. In this case, the critical cross-section is retrieved, and the analysis is performed on the nearby region. It is in fact expected that the criticality extends to a set of contiguous cross-sections, which identify the presence of malacia. An example is shown in Fig. 6. All cross-sections with Aj,i,m < Tj are considered occluded, and consequently a stent insertion is required.
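The branch-level check of Eq. (3) can be sketched in a few lines of Python; the area values and the percentage threshold below are hypothetical and only illustrate the detection logic.

```python
import numpy as np

def detect_occlusion(cross_section_areas, t_percent):
    """Flag the cross-sections of one branch whose area falls below T_j.

    Implements Eq. (3): T_j = T_j% * A_j,MED. `cross_section_areas` holds the
    areas A_j,i,m (mm^2) ordered along the branch centerline; `t_percent` is
    the physician-defined percentage threshold for this branch.
    """
    areas = np.asarray(cross_section_areas, dtype=float)
    threshold = t_percent * np.median(areas)
    critical = areas < threshold                 # occluded cross-sections
    return threshold, np.flatnonzero(critical)

# Toy branch: a mostly healthy lumen with a localized narrowing.
areas = [80, 82, 79, 35, 30, 33, 78, 81]        # mm^2, hypothetical values
t, idx = detect_occlusion(areas, t_percent=0.6)
print(f"T_j = {t:.1f} mm^2, critical cross-sections at indices {idx}")
```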


Fig. 6. Critical cross-section detection

Once the critical area has been correctly detected, it is possible to define the shape of the stent. If the critical region is at least 5 mm away from any BV, the shape will be cylindrical, and Y-shaped otherwise. In the case of a cylindrical stent, the optimal length (LOPT) is defined as the length of the occlusion, extended on each side by 5 mm. In this case, the optimal diameter DOPT is calculated from the median perimeter γL,MED of the healthy cross-sections included in LOPT:

$$D_{\mathrm{OPT}} = \frac{\gamma_{L,\mathrm{MED}}}{\pi} \qquad (4)$$

In the case of a Y-shaped stent, LOPT is defined in the same way for the occluded branch, and as 10 mm on the healthy ones (this measure can be manually adjusted). The same considerations made on diameters for cylindrical stents are still valid for the occluded branch. For the healthy branches, the diameters are evaluated considering all the cross-sections included in LOPT.
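The sizing rules above translate directly into code; the following Python sketch computes LOPT and DOPT for the cylindrical case from per-cross-section areas and perimeters (the Y-shaped case is omitted, and the input values are assumptions of the example).

```python
import numpy as np

def cylindrical_stent_size(z_mm, areas_mm2, perimeters_mm, threshold_mm2, margin_mm=5.0):
    """Optimal length and diameter of a cylindrical stent, as described above.

    L_OPT is the occluded length extended by `margin_mm` on each side; D_OPT
    follows Eq. (4), i.e. the median perimeter of the healthy cross-sections
    inside L_OPT divided by pi. Inputs are per-cross-section values ordered
    along the branch; `z_mm` is the curvilinear position of each cross-section.
    """
    z = np.asarray(z_mm, dtype=float)
    areas = np.asarray(areas_mm2, dtype=float)
    perims = np.asarray(perimeters_mm, dtype=float)
    occluded = areas < threshold_mm2
    if not occluded.any():
        return None  # no stent required for this branch
    z_start = z[occluded].min() - margin_mm
    z_end = z[occluded].max() + margin_mm
    inside = (z >= z_start) & (z <= z_end)
    healthy_inside = inside & ~occluded
    l_opt = z_end - z_start
    d_opt = np.median(perims[healthy_inside]) / np.pi   # Eq. (4)
    return l_opt, d_opt

# Hypothetical branch with a localized narrowing between 14 and 24 mm.
z = np.arange(0, 40, 2.0)                               # cross-section positions (mm)
areas = np.where((z > 14) & (z < 24), 30.0, 80.0)       # mm^2
perims = np.where((z > 14) & (z < 24), 20.0, 32.0)      # mm
print(cylindrical_stent_size(z, areas, perims, threshold_mm2=47.0))
```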

3 Case Studies

During this work, four paediatric case studies were analysed. Written informed consent was obtained from the four patients' parents, including for publication of both the subjects' data and all accompanying images. The first patient studied was a three-year-old child with tracheomalacia. Once the 3D polygonal model and the corresponding centerline were obtained, the procedure identified and distinguished the various parts of the tracheobronchial tree (see Fig. 7A). From the analysis of the malacic section, a cylindrical stent, whose geometric dimensions are reported in Table 1, was identified as optimal. The second patient is a twelve-year-old child with TM. The tracheal occlusion is located similarly to the previous case, even if the shape of the airways and of the malacic region is noticeably different. The algorithm was able to recognize all parts of the tracheobronchial tree and to correctly detect the malacic region in the tracheal section (see Fig. 7B). Also in this case, the algorithm identified a cylindrical stent as optimal, whose measures are reported in Table 1. The third case concerns a two-year-old child with bronchomalacia in the right lower secondary bronchus. The procedure distinguished the various parts of the tracheobronchial tree and detected the critical region (see Fig. 7C). In this case, the algorithm was unable to identify any type of stent due to a severe malformation in the right main bronchus. The last case study is from a healthy 13-year-old boy. As in the previous cases, the procedure successfully recognized all the airways' parts.


Fig. 7. Identification of tracheobronchial tree regions and malacic segment in paediatric patients.

Table 1. Shape and geometric dimensions of the optimal stent retrieved for each patient.

Patient # | Shape       | DOPT (mm) | LOPT (mm)
----------|-------------|-----------|----------
1         | Cylindrical | 7.73      | 12
2         | Cylindrical | 16.88     | 39.60

Since all the analyzed sections' values are above the threshold values, no critical sections were reported (see Fig. 7D). The four patients were examined by a team of physicians specialized in the treatment of upper and lower airways at Meyer Children's Hospital, and the data obtained using the proposed algorithm were confirmed. Based on the values obtained, a commercially available stent can be selected for each patient. In the first case, for example, the GSS™ TF by NOVATECH® with an external diameter of 8 mm and a length of 20 mm was identified as suitable.

4 Conclusion

In this work, a semi-automated procedure to assist physicians in the choice of the optimal stent to be used in patients affected by TM or TBM is proposed. The procedure analyses the 3D model of the tracheobronchial tree to extract the anatomical geometric characteristics of the pathological area to be treated and consequently to identify the most suitable stent to be implanted. To this aim, after a centerline optimization step, the procedure analyses several cross-sections along each branch. For each one, the perimeter and area are evaluated and compared with threshold values, predefined under physicians' suggestions, to detect possible criticalities/occlusions (i.e., regions for which a stent insertion is required). Finally, the shape and size (length and diameter) of the stent are calculated, considering both the location of the critical region and the geometric characteristics of the surrounding healthy areas. To test the proposed method, four case studies of paediatric patients were performed. In the first two cases, the procedure was able to recognize all parts of the tracheobronchial tree and correctly identify the malacic region in the tracheal section. In both cases, the algorithm identified a cylindrical stent as optimal. In the third case, the procedure correctly localized the malacia in the bronchial region but did not identify any type of stent due to severe malformation. In the last case, the algorithm did not identify any critical section. The data provided by the algorithm were then submitted to the judgment of medical experts, who, after a diagnostic analysis of the patients, confirmed the results obtained. Future work will be addressed to a more in-depth evaluation of the proposed procedure, with a more intense testing campaign on new case studies. In a successive step, it will eventually be possible to extend the proposed procedure to the modelling of custom stents, specifically designed on the anatomical geometry of the patient. By employing 3D printing and new additive manufacturing techniques, it would be possible to produce stents optimized for a particular patient and avoid potential problems arising from the use of commercial stents. Finally, the first part of the proposed procedure will be used to speed up the modelling phase of a high-fidelity lower airway mannequin, to be used to train young physicians in the treatment of critical airways [30].

References

1. Choi, S., Lawlor, C., Rahbar, R., Jennings, R.: Diagnosis, classification, and management of pediatric tracheobronchomalacia: a review. JAMA Otolaryngol. - Head Neck Surg. 145, 265–275 (2019)
2. Sams, V.G., Lawson, C.M., Shibli, A.B., Taylor, D.A., Branca, P.R.: Severe tracheobronchomalacia after prolonged intubation of multitrauma patient. Case Rep. Surg. 2011, 627012 (2011)
3. Eber, E., et al.: ERS statement: interventional bronchoscopy in children. Eur. Respir. J. 50, 1700901 (2017)
4. Rafanan, A.L., Mehta, A.C.: Stenting of the tracheobronchial tree. Radiol. Clin. North Am. 38, 395–408 (2000)
5. Ratnovsky, A., Regev, N., Wald, S., Kramer, M., Naftali, S.: Mechanical properties of different airway stents. Med. Eng. Phys. 37, 408–415 (2015)
6. Avasarala, S.K., Freitag, L., Mehta, A.C.: Metallic endobronchial stents: a contemporary resurrection. Chest 155, 1246–1259 (2019)
7. Dooms, C., De Keukeleire, T., Janssens, A., Carron, K.: Performance of fully covered self-expanding metallic stents in benign airway strictures. Respiration 77, 420–426 (2009)
8. Sommer, D., Forte, V.: Advances in the management of major airway collapse: the use of airway stents. Otolaryngol. Clin. North Am. 33, 163–177 (2000)
9. McLaren, C.A., Elliott, M.J., Roebuck, D.J.: Tracheobronchial intervention in children. Eur. J. Radiol. 53, 22–34 (2005)
10. Jacobs, J.P., et al.: The role of airway stents in the management of pediatric tracheal, carinal, and bronchial disease. Eur. J. Cardiothorac. Surg. 18, 505–512 (2000)

180

C. Santarelli et al.

11. Arda, I.S., Boyvat, F., Ötgün, I., Güney, L.H., Hiçsönmez, A.: Preliminary experience with tracheal stent application in children with tracheal stenosis. Eur. J. Pediatr. Surg. 17, 241–243 (2007) 12. Lim, L.H.Y., Cotton, R.T., Azizkhan, R.G., Wood, R.E., Cohen, A.P., Rutter, M.J.: Complications of metallic stents in the pediatric airway. Otolaryngol. Head Neck Surg. 131, 355–361 (2004) 13. Saito, Y.: Endobronchial stents: Past, present, and future. Semin. Respir. Crit. Care Med. 25, 375–380 (2004) 14. Serio, P., Fainardi, V., Leone, R., Baggi, R., Grisotto, L., Biggeri, A., Mirabile, L.: Tracheobronchial obstruction: Follow-up study of 100 children treated with airway stenting. Europ. J. Cardiothorac. Surg. 45(4), e100–e109 (2014) 15. Wallis, C., McLaren, C.A.: Tracheobronchial stenting for airway malacia. Paediatr. Respir. Rev. 27, 48–59 (2018) 16. Dumon, J.F.: A dedicated tracheobronchial stent. Chest 97, 328–332 (1990) 17. Herth, F.J.F., Eberhardt, R.: Interventional bronchoscopy. Minerva Pneumologica. 43, 189– 201 (2004) 18. Loskot, J., et al.: Raman spectroscopy as a novel method for the characterization of polydioxanone medical stents biodegradation. Materials 14(18), 5462 (2021) 19. Griffiths, B.T., James, P., Morgan, G., Diamantopoulos, A., Durward, A., Nyman, A.: Biodegradable stents for the relief of vascular bronchial compression in children with left atrial enlargement. J. Bronchology Interv. Pulmonol. 27, 200–204 (2020) 20. Serio, P., Mirabile, L.: Endoscopia respiratoria: diagnosi e terapia. In: Mirabile, L., Baroncini, S. (eds.) Rianimazione in età pediatrica, pp. 171–191. Springer Milan, Milano (2012). https:// doi.org/10.1007/978-88-470-2059-7_16 21. Di Dedda, G., Mirabile, C.: Use of a biodegradable, oversized stent in a child with tracheomalacia secondary to vascular external compression. Cardiol. Young 27, 196–198 (2017) 22. Vondrys, D., Elliott, M.J., McLaren, C.A., Noctor, C., Roebuck, D.J.: First experience with biodegradable airway stents in children. Ann. Thorac. Surg. 92, 1870–1874 (2011) 23. Nicolai, T.: Airway stents in children. Pediatr. Pulmonol. 43, 330–344 (2008) 24. Natale, G., et al.: The use of 3D printing model as tool for planning endoscopic treatment of benign airway stenosis. Transl. Cancer Res. 9, 2117–2122 (2020) 25. Bücking, T.M., Hill, E.R., Robertson, J.L., Maneas, E., Plumb, A.A., Nikitichev, D.I.: From medical imaging data to 3D printed anatomical models. PLoS ONE 12, 1–10 (2017) 26. Rengier, F., et al.: 3D printing based on imaging data: review of medical applications. Int. J. Comput. Assist. Radiol. Surg. 5, 335–341 (2010) 27. Mussi, E., et al.: Current practice in preoperative virtual and physical simulation in neurosurgery. Bioengineering 7, 1–14 (2020) 28. Puggelli, L., Uccheddu, F., Volpe, Y., Furferi, R., Di Feo, D.: Accuracy assessment of CT-based 3D bone surface reconstruction. In: Cavas-Martínez, F., Eynard, B., Fernández Cañavate, F.J., Fernández-Pacheco, D.G., Morer, P., Nigrelli, V. (eds.) Advances on Mechanics, Design Engineering and Manufacturing II. LNME, pp. 487–496. Springer, Cham (2019). https://doi. org/10.1007/978-3-030-12346-8_47 29. Douglas, D.H., Peucker, T.K.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica: The Int. J. Geogr. Inform. Geovisualization 10(2), 112–122 (1973) 30. Santarelli, C., Puggelli, L., Carfagni, M., Governi, L.: Preliminary study of a high-fidelity simulator for the management of paediatric tracheal pathologies. 
In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 773–784. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_78

An Engineering Method to Evaluate Care Processes and Introduce Televisits

Anna Savoldelli1(B), Daniele Regazzoni1, Simona Ludi2, and Caterina Rizzi1

1 Department of Management, Information and Production Engineering, University of Bergamo, viale Marconi 5, Dalmine (Bergamo), Italy
{anna.savoldelli,daniele.regazzoni,caterina.rizzi}@unibg.it
2 Azienda Socio Sanitaria Territoriale (ASST) Bergamo Est, via Paderno 21, Seriate (Bergamo), Italy
[email protected]

Abstract. During the past years, the pandemic has revealed the importance of having a solid care system prepared to face emergencies. In this context, digital solutions have demonstrated a high potential in dealing with critical conditions and ensuring the delivery of care. However, telemedicine has not yet succeeded in becoming a stable part of ordinary care. The integration of innovative telemedicine technologies with a set of well-organized activities plays a crucial role in the release of high-quality services. Modeling processes before the introduction of telemedicine services is a lever to prepare the base for an effective and efficient shift to digital care. Hence, the present research customizes a modeling technique in four steps for a preliminary analysis of the processes where televisits are to be introduced. Special attention is given to the collection of consistent knowledge about care processes, which is often lacking and incomplete in public hospital scenarios. The approach has been applied to the AS-IS process of the heart failure clinic of a large Italian hospital before the introduction of televisits. Integrated Definition for Function Modeling (IDEF) diagrams have allowed the hierarchical decomposition of complex phases into simpler tasks, the acquisition of awareness and the updating of information. Diagrams have been created and used as the source of a common language to discuss the weaknesses of the current process and its possible improvements. Obstacles to the upcoming televisit services have been objectively highlighted, such as the need to reduce the number of applications employed, to remove printed material and to streamline unnecessary operations.

Keywords: IDEF0 · Process modeling · Healthcare processes · Televisits

1 Introduction

The severe impact of coronavirus disease 19 (Covid-19) on the health systems of different countries has revealed the necessity to better organize the processes of care, in order to guarantee the right service levels. Indeed, the delay of non-urgent procedures due to hospital overcrowding has generated a negative effect, especially on frail and chronic patients, who require ongoing assistance [1–3]. In such a context, telemedicine solutions have proved to be effective instruments to face difficult scenarios and prevent a possible future lack of care continuity [4]. Telemedicine ensures the communication among health professionals, patients and caregivers by means of video communication and the fast exchange of documents and medical records [5]. Its advantages can be summarized as: accessible care, increased convenience, enhanced comfort, greater confidentiality for patients and families, and reduced risk of Covid-19 contagion [6]. The introduction of telemedicine services, and the consequent modernization of instrumentation, is a choice adopted to create a more solid and stable health care system [7]. However, several barriers are responsible for the limited diffusion of telemedicine activities, which are often not yet included in ordinary care. Among the limitations found, the principal ones concern technological aspects, the organization of activities and the quality of the interaction between physicians and patients [8, 9]. Working on hospital processes can be a way to better plan and control how care services are delivered. Process management techniques are already used in the healthcare field for designing and monitoring activities carried out by healthcare professionals [10]. In particular, the representation of processes is a useful tool for helping medical personnel understand weaknesses and discuss future improvements. Accordingly, the investigation and mapping of care processes can be the preliminary step for the effective future introduction of telemedicine services inside hospital clinics. It allows the detection of the workflow, the consumption of resources and the sharing of places among personnel during indoor activities. However, the availability of complete and structured information as a source for a realistic healthcare process representation is still limited. Based on this context, the present research investigates a methodology for the acquisition of formal knowledge and the representation of hospital processes before the introduction of televisits, for a guided reorganization of the activities. The following sections investigate the background, including research on business process modeling in the health and e-health field. After the definition of the objective, further sections define the main steps of the proposed methodology and its application to a case study. Discussions and conclusions are finally drawn.

2 Scientific Background

The combination of new research and innovative technical equipment has allowed the definition of high-quality care solutions, paying attention to satisfying patients' needs. On the other hand, the limited availability of resources and the emergence of new administrative and medical requirements have led to the necessity of redesigning clinical processes and reorganizing their flow of activities. The management of processes is a strategic lever to improve operational performance, enhance service quality and ensure regulatory compliance [11]. Previous research has already investigated the application of business process techniques in the healthcare field. De Ramon Fernandez A. et al. [10] declared that the management of the process is a powerful instrument for the redesign of clinical activities, thanks to the simplification of the workflow and the elimination of non-value-added tasks. In particular, process modeling supports a deep understanding of complex systems, which otherwise could be very unclear. Mapping techniques are the cue for designing processes and subdividing them into phases and subphases [12]. Cozijnsen L. et al. [13] try to detect the reasons why industrial methods sometimes do not work in healthcare. They declared that industrial and business techniques can be used as an inspiration in the healthcare sector, taking into consideration the deep differences between the two sectors. Antonacci G. et al. [14] proposed a systematic review in which the implementation of process management in the healthcare field is investigated. Despite its potential, the variance in reporting and the poor adherence to principles often found in the healthcare field could compromise results. However, the application and reporting of process management are encouraged, provided that rigor is granted. The main aim of process management in the e-health sector is the organization and coordination of new types of activities. Szelagowski M. et al. [15] discussed the representation of clinical pathways as a communication tool among healthcare professionals, patients and caregivers, improving the quality of delivered services and reducing risks. Broekhuis M. et al. [16] represented the workflow process of stakeholders to support the development of a telemedicine service. Despite the significant improvements that have been achieved by working on processes in healthcare, there are still shortfalls in the existing techniques [17]. In particular, according to Ahmed E. S. et al., although the flowchart is one of the most frequently chosen methods in this context, it has been shown to be abstract and poorly readable in complex domains. In the literature, other techniques are available for representing hospital and care processes. Integrated Definition for Function Modeling (IDEF) is one of the best known for defining processes [18]. It allows the description of the relationship between process and sub-processes and of the activities of each phase. Each activity is represented as a box, while arrows carry the objects related to each function (i.e. inputs, controls, outputs and mechanisms) [19]. Moreover, as described by Fu M. et al. [20], the logical relationship between operational tasks allows the transformation into Unified Modeling Language (UML) models for the creation of a shared language. Although these diagrams are much appreciated in the industrial context, Bevilacqua M. et al. [21] have shown that IDEF0 is easy to learn, logical in its formulation and supported by computer programs, and therefore easy to adopt also in health studies. Hence, it can be accepted also by non-professionals of modeling techniques [22]. Also, Kammoun A. et al. [23] identified IDEF0 as an eligible method for structured analysis and design. However, this representation procedure is still not used for the evaluation of processes in which to introduce telemedicine solutions. According to what was previously discussed, the present research aims at creating a methodology useful for a preliminary analysis of the care processes chosen for the implementation of telemedicine services. The procedure for the acquisition of objective and complete information and the process representation based on IDEF0 diagrams is presented as the starting point for the analysis of the efficient implementation of remote care techniques. In particular, an application study presents the AS-IS process map before the introduction of televisits in the heart failure clinic of the cardiac rehabilitation department of a large Italian hospital. The formalized information contained in the diagrams is the basis for discussions about the reorganization of the department activities and a sustainable and continuous addition of remote care alongside conventional face-to-face visits.

3 Method and Tools

The adopted methodology is subdivided into four steps, as described in Fig. 1: collection of information about the process, representation of the process with IDEF0, check of the preliminary mapping and discussion on the introduction of televisits.

Fig. 1. Four main steps of the proposed methodology

In the first step, the different pieces of information related to the examined process have to be collected. Initially, the available internal procedures and protocols have been extracted from the hospital database. Formal documents about the diagnostic and therapeutic care pathways of patients can be used to reconstruct the flow of activities according to the different clinical situations. Additional details are gathered by actively involving the professional figures who perform care activities daily. A preliminary brainstorming allows the definition of the main phases of the process. Preset tables have been designed in order to facilitate the acquisition of the knowledge intended for the IDEF0 diagrams. Figure 2 (a) shows the structure of the grid, organized into five categories following the diagram features: inputs, personnel and tools, performed activities, controls and outputs. The inputs section concerns activities carried out on patients, considering the hospital's point of view. In detail, inputs are mainly related to the identification of the patients who receive actions from medical personnel. The patients' condition varies during the different stages of the process (e.g. scheduled patient, examined patient, etc.). The second item has to be completed by adding the personnel involved during activities and the employed instrumentation. Even if physicians and nurses are the most present during the process, other professional figures included in single activities (e.g. biomedical engineers, other medical specialists) can be added. Moreover, the section requires the distinction among IT instrumentation (e.g. computers, software, web applications), medical equipment (e.g. electrocardiograph, oximeter, echo, etc.) and conventional tools (e.g. printer, written documents, telephone). The activities section is designed for the clarification of the medical and organizational work performed on patients by health professionals. In the controls item, the medical guidelines, national regulations and internal formal procedures required for the management of patients can be added. Finally, outputs help in specifying the outcomes of the performed activities. Patients after receiving treatments and their updated documentation have to be considered. Therefore, the medical personnel can work as a team and fill in the preset tables, which guarantee uniformity and consistency in collecting knowledge. Consistent and unambiguous information is thus ready to be introduced in the IDEF0 diagrams (Fig. 2 (b)). During additional focus groups, the involved personnel are asked to rewrite the information of the tables on cards and recreate the real sequence of activities, attaching them to a wall.

Fig. 2. Structure of table used for the acquisition of information (a) and its relationship with IDEF0 representation (b).
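Although the original preset tables are paper or spreadsheet artifacts, their mapping onto IDEF0 boxes can be made concrete with a small data-structure sketch. The following Python fragment is only an illustrative assumption (field names and the example activity are hypothetical, not taken from the hospital documentation); it shows how one table row could be stored and converted into the input/control/output/mechanism (ICOM) structure of an IDEF0 activity box.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ActivityRecord:
    """One row of the preset acquisition table (hypothetical field names)."""
    name: str                                                # performed activity
    inputs: List[str] = field(default_factory=list)          # e.g. "scheduled patient"
    personnel_and_tools: List[str] = field(default_factory=list)  # IDEF0 mechanisms
    controls: List[str] = field(default_factory=list)        # guidelines, regulations, procedures
    outputs: List[str] = field(default_factory=list)         # e.g. "examined patient", updated records

def to_idef0_box(rec: ActivityRecord, node: str) -> dict:
    """Map a table row onto the ICOM structure of an IDEF0 activity box."""
    return {
        "node": node,                            # position in the hierarchy, e.g. "A2" or "A21"
        "activity": rec.name,
        "inputs": rec.inputs,                    # arrows entering from the left
        "controls": rec.controls,                # arrows entering from the top
        "outputs": rec.outputs,                  # arrows leaving from the right
        "mechanisms": rec.personnel_and_tools,   # arrows entering from the bottom
    }

# Hypothetical example of a first-examination activity
first_exam = ActivityRecord(
    name="Perform first medical examination",
    inputs=["Scheduled patient"],
    personnel_and_tools=["Cardiologist", "Nurse", "Electrocardiograph", "Hospital software"],
    controls=["Internal procedures", "National regulations", "Medical guidelines"],
    outputs=["Examined patient", "Updated medical record"],
)
print(to_idef0_box(first_exam, node="A2"))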

In the second step, data acquired with medical personnel is used to graphically represent the process under examination. IDEF0 diagrams have been chosen as one of the most suitable tools due to the variability and intricacy of activities required to cure patients. Indeed, IDEF0 facilitates the representation of complex processes [24]. In detail, they allow the hierarchical decomposition of the process into smaller parts and an easier separated description of each subsystem. First, the main activities of the process are represented. Then, a more detailed graphical description can be added for each primary activity. The most suitable level of detail can be chosen according to the specific situations. A tree node and a final glossary can be created and attached to the map for clearly declaring the process hierarchical structure.


During the third step, the draft version of the IDEF0 diagrams is checked with the medical staff, who perfectly know the activities they perform daily. Direct interviews permit a check of the correctness of the sequence of activities and of the patient path within the process. Specific questions are created to fix those steps requiring a better understanding. Furthermore, the investigation of the workplace during each phase of the process offers the possibility of checking additional information. The clarification of the employed instrumentation, the workplace organization and the role of each professional is an additional stimulus for correcting and updating the IDEF0 diagrams. In the final step, the definitive representation of the process can be used as a base to discuss the way activities are executed in the clinic. The diagrams constitute a source for updating the internal documentation stored in hospitals. Thus, simplified diagrams concerning specific parts of the process can be extracted. Subsequently, they can be used for investigating the consumption of resources (i.e. personnel, technical instrumentation, money and time), underlying weaknesses and possible improvements. Furthermore, IDEF0 is the first step for discussing future improvements, such as the reorganization of workspaces, activities and instruments for the efficient introduction of televisits as an alternative to ordinary care.

4 Application to a Case Study

The proposed methodology has been applied in the heart failure (HF) clinic of the cardiological rehabilitation department of the Italian hospital ASST Bergamo Est. An AS-IS map has been created with the IDEF0 technique to represent the process of caring for patients affected by HF, before the introduction of televisits.

4.1 Collection of Information

Documents related to the activities of the HF clinic have been extracted from the hospital database. Internal and external procedures have represented a useful source for detecting the organization adopted by the medical personnel. The different typologies of monitoring and follow-up activities, which change according to the medical condition of patients, have been investigated. The documentation clarified the relationship between professional figures internal and external to the hospital. Members of the HF clinic have been involved in four meetings, scheduled to overcome the lack of information for an exhaustive understanding of the process. The teamwork included all the professional figures taking part in the process: doctors, nurses and a physiotherapist. Theoretical lessons about process modeling have been combined with more practical activities. During the meetings, four main phases have been identified as the core of the care process, as represented in Fig. 3: patient's access to the HF clinic, performing of the first medical examination, monitoring, and performing follow-up examinations and activities. The use of the preset tables helped in the strategic collection of the knowledge useful for the following graphical representation of the process. Thereafter, the details collected in the table have been rewritten on cards, and the flow of activities has been discussed with the medical personnel.


Fig. 3. Four main phases of the process of caring for patients with HF.

4.2 Representation of the Process with IDEF0

According to the data collected in the previous phase, the process has been subdivided into four hierarchical levels. Figure 4 represents the general structure of the IDEF0 diagrams.

Fig. 4. Hierarchical levels of the process.

The A-0 layer summarizes the most general information about the process of care for HF patients. The four main phases of the process have been converted to infinitive verbs and introduced inside the activity blocks of the A0 level. Hence, each of them has been specified in sub-levels (A1, A2, A3, A4), useful to investigate further aspects. These layers contained between 4 and 6 activities; each of them has been further examined (A11, A12, A13, etc.). In this particular application, special attention has been paid to the identification of the roles and of the employed instrumentation. Some activities were not included in the fourth level because an acceptable detail level had already been reached.

4.3 Check of the Preliminary Mapping

The check of the IDEF0 diagrams has been subdivided into two separate moments. In the first step, the results of the preliminary mapping have been presented and discussed with the medical teamwork during additional meetings. Possible modeling errors and misunderstandings have been directly clarified and updated on the graphs. This has allowed the staff to integrate different points of view and to agree on a final common version. Moreover, during the meetings the health professionals have understood how to read and interpret IDEF0 diagrams. In the second step, the in-person checking at the actual workplace allowed the monitoring of the real activities carried out during the entire process, such as the first medical examination, the monitoring and the delivery of follow-up activities. The observation of doctors' and nurses' actions during their daily work constitutes a complementary source of information, sometimes more detailed than what has been collected during the meetings. The combination of the two moments contributes to creating clearer and more complete final IDEF0 diagrams.

4.4 Discussion on the Introduction of Televisits

The final IDEF0 version has been shared again with the medical team, to exhibit the results found and offer a source of discussion. Weaknesses and criticalities found in the care process have been highlighted and analyzed with the professional figures. The updated diagrams and their contents have been used as a starting point for understanding the AS-IS situation. Hospital documents have been updated with written procedures and simplified flowcharts. According to the new information, the next steps required for a better introduction of the new telemedicine care services could be hypothesized and planned. The organization setting and the personnel involved during a televisit session can be designed and compared with the actual resource consumption.

5 Discussions

Process modeling techniques are not new in hospital scenarios. However, the proposed methodology offers a different approach, customized for studying the care processes in which telemedicine services are to be integrated. The employment of structured tables intended for the well-ordered investigation of information, combined with the use of IDEF0 diagrams, has allowed the detection of critical aspects that, if not properly managed, could compromise the effectiveness and efficacy of telemedicine applications. In the present case study, the IDEF0 representation has revealed the presence of two principal issues to be solved before the introduction of televisits in the HF clinic. First, a portion of the documents related to patients' history (i.e. registered medical prescriptions, examinations, or medical reports) is still based on paper. The presence of a physical database limits information sharing and is not suitable for the planned introduction of televisits. Supplementary printed copies create waste; they increase the workload of the medical personnel and the management of the storage. The employment of digital medical folders allows the fast sharing of information among patients, health providers and the database by means of platforms and mobile applications. The possibility to share medical reports, examinations and other types of information is the basis for an effective employment of digital care. The second issue concerns the fragmentation of activities due to the use of many different software applications. In particular, the work of the medical personnel is subject to human error and possible overburdening because of the simultaneous use of many applications that do not communicate with each other. Accordingly, the addition of one more platform for televisits would increase the workload of physicians and nurses. On the other hand, a leaner system with an integrated and communicating set of applications would imply higher efficiency, reducing the time spent and improving the mood of healthcare professionals. Moreover, the availability of quiet rooms dedicated to televisits, with high-quality instrumentation (i.e. computers, audio headphones, webcams), implies the reorganization of the conventional in-presence visiting rooms. The choice of the IDEF0 technique represents an unconventional aspect in the healthcare field. Its employment has allowed the reduction of the complexity of the care process by means of the differentiation of the main phases and their sublevels. Otherwise, the definition of the right and complete flow of activities would be very complicated, also for personnel who are daily involved in the care process. Even if far from the medical background, IDEF0 has been crucial for the effective extraction of a high amount of information. The employment of preset tables represents an innovative aspect, useful for discerning and collecting only the information required for the diagrams, neglecting the less significant pieces. The tables have offered the health personnel a guide for neatly describing the activities of the process. The basic training provided the skills required to read the diagrams and to extract the required pieces of information. Moreover, the preset tables have facilitated the medical team in reading and interpreting IDEF0 by themselves, so that the diagrams could be easily understood. IDEF0 has created a common and systematized language for the description of the processes in which to introduce televisits. The representation has permitted the definition of formalized knowledge inside hospitals, in which information is usually fragmented. Hence, health professionals may exploit the final representation as a base for updating written procedures, ensuring a better adherence to actual practices and discussing the implementation of new digital solutions.

6 Conclusion

The present research proposes a methodology related to hospital processes intended for the introduction of telemedicine services. The approach allows the collection of knowledge and the representation of the process for a deep detection of the organization of activities, the involved personnel and the consumption of resources. The methodology is subdivided into four steps. The procedure for the acquisition of data about the health providers' activities plays a crucial role in the effective representation of the process; it is based on preset, standard tables. Then, the placement of the table information in the IDEF0 diagrams is presented. The preliminary draft of the process has to be checked and shared with the involved medical personnel. The final version is used by the health professionals as a base for the discussion about the AS-IS situation and future implementations. The application of the four steps in the HF clinic of a large hospital in Italy is proposed. The mapping of the ordinary AS-IS process before the introduction of televisits is a strategic preliminary operation, useful for the investigation of the strengths and weaknesses of the actual treatment procedure. According to the resulting considerations, the process can be improved or redesigned, in order to offer higher quality services to patients. In particular, the streamlining of activities will be possible by means of adequate technical equipment based on a few computer applications able to communicate with one another. Moreover, the reorganization of activities and the digitalization of documents could further simplify future implementations. The critical issues that emerged have been discussed with the medical personnel and the hospital managers, as a starting point to set the stage for adding telemedicine services. A solid and controlled process of care is the basis for a more efficient and durable integration of digital care; moreover, it helps the involved professional figures in acquiring awareness of the complete flow of activities and increases their outcome. Future research could replicate the proposed methodology by integrating the option of digital care. The mapping will offer a document for the detection of the variation of the employed personnel and resources in the two different situations. An evaluation of the improvements reached with the adoption of telemedicine services could be performed, by considering both the hospitals' and the patients' points of view.

References

1. Obi, K., et al.: Structural heart interventions during COVID-19. Curr. Probl. Cardiol. 47, 100934 (2022)
2. Remuzzi, A., Remuzzi, G.: COVID-19 and Italy: what next? The Lancet 395, 1225–1228 (2020)
3. Salzano, A., et al.: Heart failure management during the COVID-19 outbreak in Italy: a telemedicine experience from a heart failure university tertiary referral centre. Eur. J. Heart Fail. 22, 1048–1050 (2020)
4. Vidal-Alaball, J., et al.: Telemedicine in the face of the COVID-19 pandemic. Atención Primaria 52, 418–422 (2020)
5. Savoldelli, A., Vitali, A., Remuzzi, A., Giudici, V.: Improving the user experience of televisits and telemonitoring for heart failure patients in less than 6 months: a methodological approach. Int. J. Med. Inform. 161, 104717 (2022)
6. Dorsey, E.R., Okun, M.S., Bloem, B.R.: Care, convenience, comfort, confidentiality, and contagion: the 5 C's that will shape the future of telemedicine. J. Parkinson's Dis. 10, 893–897 (2020)
7. Kichloo, A., et al.: Telemedicine, the current COVID-19 pandemic and the future: a narrative review and perspectives moving forward in the USA. Fam. Med. Community Health 8, 530 (2020)
8. Almathami, H.K.Y., Than Win, K., Vlahu-Gjorgievska, E.: Barriers and facilitators that influence telemedicine-based, real-time, online consultation at patients' homes: systematic literature review. J. Med. Internet Res. 22 (2020)
9. Scott Kruse, C., et al.: Evaluating barriers to adopting telemedicine worldwide: a systematic review. J. Telemed. Telecare 24, 4–12 (2018)
10. De Ramón Fernández, A., Ruiz Fernández, D., Sabuco García, Y.: Business process management for optimizing clinical processes: a systematic literature review. Health Informatics J. 26, 1305–1320 (2020)
11. Silva Rosa, L., Soares Silva, T., Fantinato, M., Heloisa Thom, L.: A visual approach for identification and annotation of business process elements in process descriptions. Comput. Stand. Interfaces 81 (2022)
12. Sohrabi, M., Zandieh, M., Nadjafi, B.A.: Dynamic demand-centered process-oriented data model for inventory management of hemovigilance systems. Healthc. Inform. Res. 27, 73–81 (2021)
13. Cozijnsen, L., Levi, M., Verkerk, M.J.: Why industrial methods do not work in healthcare: an analytical approach. Intern. Med. J. 50, 250–253 (2020)
14. Antonacci, G., Lennox, L., Barlow, J., Evans, L., Reed, J.: Process mapping in healthcare: a systematic review. BMC Health Serv. Res. 21, 1–15 (2021)
15. Szelągowski, M., Berniak-Woźny, J., Lipiński, C.: BPM support for patient-centred clinical pathways in chronic diseases. Sensors 21, 7383 (2021)
16. Broekhuis, M., van Weering, M.D., Schuit, C., Schürz, S., van Velsen, L.: Designing a stakeholder-inclusive service model for an eHealth service to support older adults in an active and social life. BMC Health Serv. Res. 21, 1–17 (2021)
17. Ahmed, E.S., Ahmad, M.N., Othman, S.H.: Business process improvement methods in healthcare: a comparative study. Int. J. Health Care Qual. Assur. 32, 887–908 (2019)
18. Baghbani, M.: IDEF0 modeling standard: a tool for process map drawing under requirements of ISO 9001:2015: a case study. J. Mod. Process. Manuf. Prod. 8, 57–66 (2020)
19. Kim, S.H., Jang, K.J.: Designing performance analysis and IDEF0 for enterprise modelling in BPR. Int. J. Prod. Econ. 76, 121–133 (2002)
20. Fu, M., Wang, D., Wang, J., Li, M.: Modeling method of operational task combined with IDEF and UML. In: Proc. 2018 IEEE 3rd Adv. Inf. Technol. Electron. Autom. Control Conf. (IAEAC 2018), pp. 1443–1447 (2018)
21. Bevilacqua, M., Mazzuto, G., Paciarotti, C.: A combined IDEF0 and FMEA approach to healthcare management reengineering. Int. J. Procure. Manag. 8, 25–43 (2015)
22. Bertoni, M., Cugini, U., Regazzoni, D., Rizzi, C., Ugolotti, M.: A BPR methodology for product development processes. IFIP Int. Fed. Inf. Process. 207, 370–378 (2006)
23. Kammoun, A., Hachicha, W., Aljuaid, A.M.: Integrating quality tools and methods to analyze and improve a hospital sterilization process. Healthcare 9, 544 (2021)
24. Pyliavskyi, I., Martusenko, I., Molnar, O., Dzyana, H., Kushniriuk, V.: Modeling ways of improving green economy and environmental protection in the context of governance. Bus. Theory Pract. 22, 310–317 (2021)

Modeling of Cutting Operations for Robotic Surgery Using CAE Tools

Stefano Pettinato1, Roberto Francesco Pitzalis1,2, Jesús Ortiz2, and Giovanni Berselli1,2(B)

1 University of Genova, Via all'Opera Pia 15/A, 16145 Genova, Italy
[email protected], [email protected]
2 ADVR Department, Istituto Italiano di Tecnologia, Via S. Quirico 19d, 16163 Genova, Italy
{roberto.pitzalis,jesus.ortiz}@iit.it

Abstract. Robotic surgery is a set of techniques and technologies used to plan, move, and perform surgery, exploiting the capabilities of robots to overcome human limitations in minimally invasive operations. The use of robots guided by augmented reality has allowed surgeons to improve vision and precision during surgery. Despite the results achieved over the years, there is still a high clinical demand for improvements, which can only be reached using intelligent physical or virtual tools. The present work aims to develop a virtual prototype of robotized bone milling operations, when there is the need to virtually predict operative performance. The proposed model leverages the active connection between Simulink, the well-known numerical tool, and RecurDyn, a commercial CAE solver. In particular, the model parametrization and the simulation process are managed from Matlab, whereas the mechanical system is solved in RecurDyn. After a proper validation of the simulation framework with data taken from the literature, a set of parametric studies has been carried out to investigate the influence of the input parameters on the cutting process. Keywords: Robotic surgery · Robotic milling optimization · Virtual prototyping · Passively compliant spindle

1 Introduction

Robotic surgery is a set of techniques and technologies used to plan, move, and perform surgery by using robots to help surgeons in terms of vision and precision. Many advantages come from robot assistants, such as surgeon extenders and auxiliary surgical supports, when performing minimally invasive surgery, for example: reduction of hospital stay and post-operative pain, lower risk of infection and of post-surgical tumor cell residues [1]. For these reasons great attention has been put on surgical robots over the years, since in 2000 the well-known da Vinci surgical system was approved for human use and introduced into the market, pushing for a breakthrough merging the boundaries of medicine and engineering. However, the regulatory, ethical, and legal barriers imposed on medical robots necessitate careful consideration of different levels of autonomy, as well as the context for use [2]. In 2017, Yang et al. [2] proposed an initial classification of the autonomy achievable by a surgical robot, mapping it into six different levels [2, 3]: (level 0) no autonomy, (1) robot assistance, (2) task autonomy, (3) conditional autonomy, (4) high autonomy and (5) full autonomy. No autonomy consists in tele-operated robots which follow the user's commands, whereas full autonomy consists in a "robotic surgeon". From level 1 to 4, the treating physician is still in control of the operation to a significant extent [3]. Levels 4 and 5 are a manifestation of science fiction where robots become medicine practitioners and need stricter regulations [3]. Nevertheless, more and more companies are becoming interested in introducing 5G technology for remote-operation surgery. The 2019 COVID pandemic could be a great example of how surgical robots that could be tele-operated from a safe distance, or mobile robots capable of carrying out simple autonomous tasks in hospitals, like disinfecting spaces, would have been beneficial for patients and medical staff [3]. From the technological point of view, recent market research has proven that Robotic Deburring (RD) can be enhanced by the availability of methods and integrated tools capable of quickly analyzing and optimizing the process performance in a virtual environment. On the other hand, the CAM tools already available neglect some of the main unavoidable sources of error: inaccuracy of the robot posture, low stiffness and the unpredictable variety of burr size/shape. These inconveniences may lead to deformation and chatter vibration of the robot end-effector, causing poor machining quality and accuracy [4]. For these reasons, the tuning of RD process parameters is still based on several physical tests, which drastically reduce the robotic cell productivity [4, 5]. Thus, due to these limits and lack of knowledge, we have specifically focused on predicting the forces, displacements and machining performance which affect a surgical tool during knee arthroscopy, through a virtual prototype model of a compliant spindle. This research could expand the state of the art on designing robotic manipulators, providing a better understanding of how the execution and quality of the operations are influenced by process and tool parameters. The model developed is universal and usable in all production sectors where deburring is still an exhausting manual operation, such as the industrial one. Making it automated will help to reduce the defects related to manual deburring, production time and costs, and it will also be beneficial for workers' health. The rest of the article is organized as follows: Sect. 2 describes the engineering methodology adopted by interfacing different software; Sect. 3 introduces the theory of milling and deburring, focusing on the most important parameters and variables involved; Sect. 4 shows the virtual prototype of a compliant spindle; Sect. 5 contains the schematic of the spindle during the milling process for the assessment of the virtual prototype; Sect. 6 contains experimental results from simulations which will be used to instantiate the behavioral model with engineering details. Finally, Sect. 7 reports the concluding remarks.

2 Materials and Methods

The lack of proper analytical models of the milling and deburring processes led us to the use of peripheral milling models, since the machining operations are similar. Peripheral milling is the process of machining using a rotary cutting tool with multiple cutting points (flutes or teeth) to remove material by feeding a work-piece towards the tool with a continuous motion in the tangential direction. Most conventional orthopedic interventions adopt contact methods (controlled mechanical fracture) for bone surgery, which are dangerous because they could damage human tissues or break the bone itself, causing long-lasting pain. Damage is due to the force the cutter applies as well as to the geometry and the surface roughness of the tool itself. To avoid these issues, it is possible to use special surgical tools or to improve the robotic device control by developing an analytical model capable of predicting and optimizing the forces which act on the bone. The latter is exactly the aim we have pursued in this research. This process is quite complex to analyze and much research has focused on developing models for force evaluation. Among all of those [6–9], the most studied and used is an analytical correlation of the forces at the cutting edge described by the linear model developed by Altintas [9]. However, the parametrization of the force profiles is based on experimental cutting tests and it is only assessed for a specific combination of cutter-workpiece materials. Altintas' mathematical model consists of five sub-algorithms to evaluate: the mill-workpiece profile interaction, the kinematic profiles (position and velocity of the milling tool), the feed rate, the machining range boundaries and the acting forces.

Fig. 1. Co-simulation diagram. Circled in red the input and end of the computation; circled in blue the main computational loop.

To make that model parametric and adaptable to different working conditions, in a previous work Berselli et al. [4] developed a virtual model of the milling process using the RecurDyn environment. However, that software was not suitable for such computational efforts, making the whole model very burdensome, also due to the lack of an iterative loop. Therefore, we took a step forward by interfacing three different software packages for a co-simulation analysis, each one used for a specific computational purpose, as shown in Fig. 1. RecurDyn is used to solve the dynamics of the system, computing the milling-tool position and velocity. Simulink, the master of the simulation, solves the mill-workpiece interaction by following the Altintas mathematical model and returns the values of the forces which have to be applied to the mill. Furthermore, the use of Matlab allows manipulating the data, correlating the input parameters with performance indexes and plotting the outputs of the simulations, in order to analyze their trends due to changes in the input parameters. Matlab is supplied with all the variables, modifiable through a dedicated workspace, so as to make the model parametric. This flow chart allows modeling the process by varying the input parameters of the equations, saving time and reducing the computational effort.
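For illustration only, the following sketch mimics the structure of this loop in Python; the real pipeline uses Matlab/Simulink as the master and RecurDyn as the multibody solver, so both are replaced here by toy stand-in functions (a one-degree-of-freedom spring-damper and a pulsed force). Only the data exchanged at every step, kinematics out and forces back, reflects the scheme of Fig. 1.

import numpy as np

# Placeholder stand-ins for the real tools: they only illustrate what is
# exchanged at each time step, not the actual dynamics or force model.
def advance_dynamics(x, v, force, dt, m=0.5, k=2e4, c=50.0):
    a = (force - c * v - k * x) / m          # 1-DOF spring-damper as a stand-in for RecurDyn
    v_new = v + a * dt
    x_new = x + v_new * dt
    return x_new, v_new

def cutting_force_model(x, v, t, amplitude=20.0, tooth_freq=400.0):
    # toy pulsed force standing in for the Simulink/Altintas force block
    return amplitude * max(0.0, np.sin(2.0 * np.pi * tooth_freq * t))

def run_cosimulation(t_end=0.05, dt=1e-5):
    x = v = force = 0.0
    log = []
    t = 0.0
    while t < t_end:
        x, v = advance_dynamics(x, v, force, dt)   # "multibody" step: position and velocity
        force = cutting_force_model(x, v, t)       # mill/work-piece interaction force
        log.append((t, x, v, force))
        t += dt
    return log                                     # post-processed to study parameter trends

trace = run_cosimulation()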

3 Overview on Milling and Deburring

The milling process, even when applied to bone, may be modelled as a mechanical system with stiffness and damping, schematized as shown in Fig. 2(a).

Fig. 2. (a) Schematization of a basic milling tool during machining; (b) Diagram of a helical cutter; (c) Direction of feed velocity.

The mechanics of the milling and deburring processes has been extensively treated in [4, 9]. Several parameters influence the process: force, speed, cutting depth, feed rate, tool clearance angle, lubrication. Let us consider an end-mill having diameter D, helix angle β and number of teeth (or flutes) N. Let us define c as the feed rate, z as the axial cut depth, φ_j as the instantaneous angle of immersion of the j-th tooth within the workpiece, φ_p = 2π/N as the tooth spacing angle, and φ_st, φ_ex, φ_s = φ_ex − φ_st respectively as the cutter entry, exit and swept angles. When a helical mill is used (β ≠ 0), a cutting edge point axially located at a distance z above a reference flute with an immersion angle of φ will lag behind the reference end point by a lag angle ψ = k_β z, see Fig. 2(b), [9]. In particular, assuming that the bottom end of one flute is designated by the reference immersion angle φ, the immersion angle for the j-th flute at an axial cut depth z is [4, 9]:

\phi_j(z) = \phi - j\phi_p - k_\beta z, \quad j = 0, \dots, N-1; \qquad k_\beta = 2D^{-1}\tan(\beta) \quad (1)

Due to the compliant structure, the feed velocity vector of the work-piece will be inclined with respect to the horizontal axis and can be defined as v_F = v_P − v_A = [ẋ_F, ẏ_F, ż_F]^T, where v_P and v_A are respectively the velocities of the work-piece and of point A with respect to the O-frame. Hence, ϑ = arctan(ẏ_F/ẋ_F) is defined as the spindle velocity inclination angle, the process feed rate is c = (Nn)^{-1}(ẋ_F^2 + ẏ_F^2)^{1/2}, and the chip thickness h_j(φ_j) is computed as h(φ_j, z) = c sin(φ_j(z) − ϑ). Naturally, if the spindle compliance is neglected, it is possible to approximate c = ẋ_F, ẏ_F = 0, ż_F = 0 and h(φ_j, z) = c sin(φ_j(z)) [4, 9]. The total force produced by the j-th flute can be obtained by integrating the differential cutting forces between z_{j,1}(φ_j(z)) and z_{j,2}(φ_j(z)), the lower and upper axial engagement limits of the in-cut portion [4, 9]:

F_{x,j}(\phi_j(z)) = \frac{c}{k_\beta}\left[K_{tc}\left(-\frac{\phi_j(z)\sin\vartheta}{2}-\frac{\cos\left(2\phi_j(z)-\vartheta\right)}{4}\right)+K_{rc}\left(\frac{\phi_j(z)\cos\vartheta}{2}-\frac{\sin\left(2\phi_j(z)-\vartheta\right)}{4}\right)\right]_{z_{j,1}(\phi_j(z))}^{z_{j,2}(\phi_j(z))} \quad (2)

F_{y,j}(\phi_j(z)) = \left[-\frac{c}{4k_\beta}\left(K_{tc}\left(2\phi_j(z)-\sin 2\phi_j(z)\right)+K_{rc}\cos 2\phi_j(z)\right)+\frac{1}{k_\beta}\left(K_{te}\cos\phi_j(z)+K_{re}\sin\phi_j(z)\right)\right]_{z_{j,1}(\phi_j(z))}^{z_{j,2}(\phi_j(z))} \quad (3)

F_{z,j}(\phi_j(z)) = \left[\frac{1}{k_\beta}K_{ac}\,c\cos\left(\phi_j(z)-\vartheta\right)\right]_{z_{j,1}(\phi_j(z))}^{z_{j,2}(\phi_j(z))} \quad (4)

where K_{tc}, K_{rc}, K_{ac} are respectively defined as the cutting force coefficients due to the shearing action in the tangential, radial and axial directions, while K_{te}, K_{re}, K_{ae} are the edge constants, evaluated using the methods described in [4, 10, 11]. The total instantaneous forces on the cutter at immersion φ are computed as follows:

F_x(\phi) = \sum_{j=0}^{N-1} F_{x,j}, \qquad F_y(\phi) = \sum_{j=0}^{N-1} F_{y,j}, \qquad F_z(\phi) = \sum_{j=0}^{N-1} F_{z,j} \quad (5)

The resulting instantaneous force generated by the mill-workpiece interaction is given by:

F(\phi) = \sqrt{F_x^2(\phi) + F_y^2(\phi) + F_z^2(\phi)} \quad (6)
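As a numerical companion to Eqs. (1)–(6), the following sketch integrates the same linear-edge force model slice by slice along the axial depth of cut instead of using the closed-form primitives, which keeps the engagement limits simple to handle. It is a minimal reconstruction of an Altintas-style model, not the authors' code: the sign conventions, the engagement test and the parameter values (loosely inspired by the validation case of Sect. 6) are assumptions.

import numpy as np

def cutting_forces(phi, *, N, D, beta, a_p, c, theta,
                   Ktc, Krc, Kac, Kte=0.0, Kre=0.0,
                   phi_st=0.0, phi_ex=np.pi, n_z=200):
    """Total (Fx, Fy, Fz) on the cutter at reference immersion angle phi [rad].

    Lengths are assumed in mm and coefficients in N/mm^2, so forces come out in N.
    """
    k_beta = 2.0 * np.tan(beta) / D                       # helix lag per unit axial depth
    z = (np.arange(n_z) + 0.5) * a_p / n_z                # mid-heights of the axial slices
    dz = a_p / n_z
    Fx = Fy = Fz = 0.0
    for j in range(N):                                    # loop over the flutes
        phi_j = phi - j * (2.0 * np.pi / N) - k_beta * z  # local immersion, Eq. (1)
        wrapped = np.mod(phi_j, 2.0 * np.pi)
        engaged = (wrapped >= phi_st) & (wrapped <= phi_ex)       # slice inside the cut?
        h = c * np.sin(phi_j - theta)                     # chip thickness with inclined feed
        h = np.where(engaged & (h > 0.0), h, 0.0)
        edge = engaged.astype(float)
        dFt = (Ktc * h + Kte * edge) * dz                 # tangential contribution per slice
        dFr = (Krc * h + Kre * edge) * dz                 # radial contribution per slice
        dFa = Kac * h * dz                                # axial contribution per slice
        Fx += np.sum(-dFt * np.cos(phi_j) - dFr * np.sin(phi_j))
        Fy += np.sum(+dFt * np.sin(phi_j) - dFr * np.cos(phi_j))
        Fz += np.sum(dFa)
    return Fx, Fy, Fz

# One cutter revolution with hypothetical parameters (mm, N/mm^2, rad)
profile = [cutting_forces(p, N=4, D=8.0, beta=np.radians(30.0), a_p=10.0,
                          c=0.025, theta=0.0, Ktc=500.0, Krc=700.0, Kac=300.0)
           for p in np.linspace(0.0, 2.0 * np.pi, 360)]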

4 Mechanical Design of the Compliant Spindle

The design of our spindle virtual prototype comes from the choice of a commercial spindle for Robotic Deburring (RD) machines. Among the wide availability on the market, the tool considered is the ATI RC 140, chosen for its widespread adoption and classical architecture [4]. It consists of a radially compliant pneumatic spindle which allows compensating the shear force applied by the work-piece to the tool, avoiding chatter during deburring. As shown in Fig. 3, the pneumatic motor is inserted into a housing and supported by a spherical joint (located at point O) and by a compliant device. The source of mechanical compliance, hereafter termed compliance device, consists of seven pistons, able to translate along their axis with a limited stroke, and connected to a common chamber with an air inlet. In the initial spindle configuration (undeflected, Fig. 4) all pistons are in contact with the motor; however, during the cutting process, three possible contact conditions are imposed on each piston: contact with the inner structure (point C), or contact with the chamber at either the lower or the upper dead-point [4].

Fig. 3. Compliant spindle configuration and components: (a) physical prototype; (b) CAD model; (c) compliant device.

From a mechanical point of view, the spindle can be schematized as a rigid beam constrained at one end by a spherical joint (O), while at the other end it is loaded with a shear force F_A, as shown in Fig. 5. F_C is the compensation force applied by the compliant device. Let us define the parameter R (0 ≤ R ≤ 1) as the ratio between d_c and d_a. For commercially available spindles, R lies between 0.2 and 0.5. From the equilibrium equation with respect to O, it is possible to notice how a variation of the compensating force F_C leads to a proportional variation of the shear force F_A:

F_A = R\,F_C \quad (7)

The spindle mechanical structure was modelled with multi-body dynamics (MBD) software in order to provide a reliable model of the spindle behavior.

Fig. 4. Compliant spindle in the deflected configuration, highlighting the piston positions.


Fig. 5. (a) Engraflexx spindle from Gravostar [12]; (b) Mechanical schematization of the spindle.

5 Virtual Prototype via CAE Tools

A virtual prototype of the whole process can be built by interfacing a multibody model of the spindle mechanical structure with a parametric model of the cutting forces. In the RecurDyn multibody environment, it is possible to import the geometry of a system as a CAD model and then perform complex analyses of its kinematics and dynamics. Firstly, we defined the relations and boundaries among the elements of the spindle, Fig. 6, necessary to observe the mechanical response. The external structure of the device is fixed in O with a spherical joint, which imposes the compliance between the external and internal structures. The motor is fixed onto the inner structure with a cylindrical joint which only allows rotation about the z-axis. The mill is set at one end of the motor shaft and rotates with it thanks to two rotational joints and the respective ball bearings. The compliant mechanism consists of seven prismatic joints which are able to move in their locations, producing friction against the rubber seals and the cover structure: F_sln = μ A_sln (P_comp + P), where P_comp is the compensation pressure imposed by the user, A_sln is the seal lateral area, P the preload and μ either the static or the dynamic coefficient of friction. To reduce the computational effort, we neglected the contact point PMS between the piston dome and the external structure, since it cannot occur. We set the contact point C between the pistons and the inner bushing as the zero position (no forces applied on the end-effector), allowing a position error of 0.05 mm along the axis.

Fig. 6. (a) Mechanical structure and joints of the spindle; (b) Piston cross section.

We defined all the forces acting during the process: the milling force, the deburring force and the compensation force (due to the air pressure acting on the compliant device). The milling and deburring force distributions, respectively due to the mill-burr and the mill-work-piece interaction, are considered as applied onto two spheres: the former is placed in the middle of the end-effector (point B), while the latter is placed on the end-effector over the middle section of the burr (point A), as shown in Fig. 7(a). The compensation force is an input parameter which depends on the user's choice. Let us assume it to be constant: F_pressure = P_comp A_F, where A_F is the piston dome area.
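Purely as an illustration of Eq. (7) combined with the compensation force F_pressure = P_comp A_F, the short function below estimates the shear force that the device can balance; the piston diameter, the number of pistons assumed to contribute and the value of R are hypothetical figures, not data of the ATI RC 140.

import math

def compensable_shear_force(p_comp_bar, piston_diameter_mm, n_pistons=7, R=0.35):
    """Estimate the shear force F_A balanced by the compliant device (Eq. 7)."""
    A_F = math.pi * (piston_diameter_mm * 1e-3) ** 2 / 4.0   # dome area of one piston [m^2]
    F_pressure = (p_comp_bar * 1e5) * A_F                    # F_pressure = P_comp * A_F per piston [N]
    F_C = n_pistons * F_pressure                             # total compensation force (assumption)
    return R * F_C                                           # F_A = R * F_C

print(compensable_shear_force(p_comp_bar=8.0, piston_diameter_mm=10.0))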

Fig. 7. (a) Mill model in RecurDyn; (b) Excessive deburring and related parameters.

We proposed an improvement on the algorithm used in [4] by interfacing different software packages, as described in Sect. 2. This allows computing the milling tool position (x_a, y_a, z_a, the coordinates of point A with respect to the origin O) in a parametric way. The following discussion refers to point A, but the same considerations can be made on point B to correctly evaluate the milling process. Referring to Fig. 7(a), let us define a deburring virtual model considering the burr profile and its parameters: h_b as the burr height and b_b as the burr width. As broadly described in [4], the most common and preferred machining situation is excessive deburring, when the mill cuts out all the burr and slightly penetrates the workpiece surface. In this case, p_r, the radial deburring depth, satisfies p_r > h_b, so p_rb = h_b and p_rp = p_r − h_b, as shown in Fig. 7(b). Focusing on a purely horizontal feed motion, the tilt angle between the milling tool and work-piece axes is ϑ = 0°. Let us recall: φ_stb, the starting angle into the burr; φ_exb, the exit angle from the burr; φ_stp, the starting angle into the work-piece; φ_exp, the exit angle from the work-piece [4]. By interfacing Matlab, Simulink and RecurDyn, it was possible to compute the trends of the forces applied by the j-th flute on the work-piece (labelled p in Fig. 8) and on the burr (labelled b in Fig. 8) along each axis (x, y, and z). More important for the choice of a proper mill is the force-RMS value, which includes both the forces on the burr and on the work-piece:

F_m = \sqrt{\frac{1}{n}\sum_{1}^{N} F_{tot}^2} \quad (8)

Another relevant index is the surface machining error, err, which is equal to the difference between the height of the lowest contact point of the mill after machining and the nominal profile of the piece before machining:

err = \min\left(h_{pc_i} - h_n\right) \quad (9)

A negative value is preferred, because it means that excessive deburring is obtained.
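A minimal sketch of how the two indexes of Eqs. (8) and (9) can be evaluated from simulation samples is given below; the array names are hypothetical placeholders for the quantities logged during the virtual pass.

import numpy as np

def performance_indexes(F_tot, h_contact, h_nominal):
    """Force-RMS value, Eq. (8), and surface machining error, Eq. (9).

    F_tot     : samples of the total force over the simulated pass [N]
    h_contact : heights of the lowest mill contact point after machining [mm]
    h_nominal : nominal surface height of the piece before machining [mm]
    """
    Fm = np.sqrt(np.mean(np.asarray(F_tot) ** 2))        # Eq. (8)
    err = np.min(np.asarray(h_contact) - h_nominal)      # Eq. (9), err < 0 preferred
    return Fm, err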


Fig. 8. Total forces: (a) Related to a j-th flute; (b) On the workpiece related to all flutes; (c) On the burr related to all flutes.

6 Results

To validate our virtual model, the benchmark for automated surgical milling is the work by Christopher Plaskos [13] related to knee arthroscopy. We used the same parameters imposed by Plaskos. We set: N, number of teeth, = 4; β, helix angle, = 30°; D, diameter, = 8 mm; a_p, radial depth, = 10 mm; p_r = 15 mm; ω, angular speed, = 100 rpm; c, feed rate, = 2 μm; v_a, feed speed, = 0.04 mm/s. We assumed a burr thickness of 1 mm. Due to the lack of information, we adopted the same cutting coefficients reported in [12]: K_tc = 500 N/mm², K_rc = 700 N/mm², K_ac = 300 N/mm². Comparing our results to those of Plaskos (Fig. 9), it is possible to highlight strong correlations, which allow us to define a Design of Experiment (DOE) process to measure and plot the machining error and the mean forces acting on the milling tool.

Fig. 9. Fx and Fy forces at 100 rpm, c = 25 μm: (a) Plaskos's results, with the simulation in blue and the experimental measurements in black; (b) Results from our virtual simulations.

Different parametric analyses (Figs. 10, 11, 12 and 13) on the mean force and the machining error, obtained by varying pressure, feed velocity and rotational speed, brought to light an optimal condition of P_comp = 8 bar, v_a = 70 mm/s and v_m = 40000 rpm, which corresponds to a very low and negative machining error. By setting these values, we conducted analyses involving the number of teeth and the diameter of the milling tool, selecting different commercial orthopedic mills from [14].

Fig. 10. Simulation results of the machining error obtained by changing parameters N and D. Constants: Ktc = 300 N/mm², Pcomp = 8 bar, va = 70 mm/s, vm = 40000 rpm.

Fig. 11. Simulation results of the machining error obtained by changing coefficients Ktc and Krc. Constants: Pcomp = 8 bar, va = 70 mm/s, vm = 40000 rpm.

Fig. 12. Simulation results of Fm acting on the cutter obtained by changing parameters N and D. Constants: Ktc = 300 N/mm², Pcomp = 8 bar, va = 70 mm/s, vm = 40000 rpm.

Fig. 13. Simulation results of Fm acting on the cutter obtained by changing Ktc and Krc. Constants: Pcomp = 8 bar, va = 70 mm/s, vm = 40000 rpm.

The choice of the number of teeth is more critical than that of the diameter: when the number of teeth is fixed, the fluctuations of the data are infinitesimal, contrary to what happens when the diameter is fixed. Lastly, a parametric analysis involving the cutting force coefficients highlights that the higher these coefficients are, the higher the cutting force and the machining error become. We found that for Ktc = 300 N/mm² and Krc = 700, 800, 900 N/mm² it is necessary to increase Pcomp, because the forces along the y-axis become excessive. Finally, attachments were designed to allow the use of a commercial spindle with the xArm7 robot, and the setup was arranged inside a simulated medical room to give a general idea of a typical working context, as shown in Fig. 14.

Fig. 14. Final render of the operating room with detail of the compliant spindle.

7 Conclusions
A virtual model of milling and deburring processes has been proposed. It integrates a 3D multi-body model of a pneumatic compliant spindle with a behavioral model of the process forces, evaluating the influence of several design parameters. The proposed model makes it possible to: a) estimate the process forces accounting for the presence of the work-piece burr; b) estimate the deburring error as a function of the cutting parameters, thus allowing designers to optimally set up the deburring parameters. Numerical simulations show the usability and usefulness of the model by predicting both cutting forces and surface finishing accuracy, enabling a virtual test of the process quality. The results are very close to those of the literature [9, 13] and allow us to define a Design of Experiment (DOE) process, laying the bases to expand the state of the art of automated surgical milling and deburring. The tool is particularly useful for the offline tuning of robotic deburring processes and can be integrated with a dynamic model of the robot structure built within the same simulation environment. However, the reliability of these promising results requires experimental validation. A careful assessment of the proposed virtual prototype by experimental testing is the subject of future work.

References
1. Bonfe, M., et al.: Towards automated surgical robotics: a requirements engineering approach. In: Proceedings of the 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), pp. 56–61. Rome, Italy (2012)
2. Yang, G.Z., et al.: Medical robotics-regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci. Robot. 2(4), 1–2 (2017)
3. Attanasio, A., et al.: Autonomy in surgical robotics. Annual Review of Control, Robotics, and Autonomous Systems 4, 651–679 (2021)
4. Berselli, G., et al.: Design optimization of cutting parameters for a class of radially-compliant spindles via virtual prototyping tools. Int. J. Comp. Aided Eng. Technol. 11(2), 232–252 (2019)
5. Chen, Q., et al.: Posture optimization in robotic machining based on comprehensive deformation index considering spindle weight and cutting force. Robotics and Computer-Integrated Manufacturing 74, 1–19 (2022)
6. Merchant, E.M.: Mechanics of the metal cutting process. I. Orthogonal cutting and a type 2 chip. J. Appl. Phys. 16(5), 265–275 (1945)
7. Fang, N., et al.: An analytical predictive model and experimental validation for machining with grooved tools incorporating the effects of strains, strain-rates, and temperatures. CIRP Annals in Manufacturing and Technology 51(1), 83–86 (2002)
8. Shamoto, E., et al.: Prediction of shear angle in oblique cutting with maximum shear stress and minimum energy principles. J. Manuf. Sci. Eng. 121, 399–407 (1999)
9. Altintas, Y.: Manufacturing automation: metal cutting mechanics, machine tool vibrations, and CNC design. Applied Mechanics Review 54(5), 77–96 (2001)
10. Budak, E., et al.: Prediction of milling force coefficients from orthogonal cutting data. J. Manuf. Sci. Eng. 118(2), 216–224 (1996)
11. Gradišek, J., et al.: Mechanistic identification of specific force coefficients for a general end mill. Int. J. Mach. Tools Manuf. 44(4), 401–414 (2004)
12. Gravostar Technologies: Engraflexx: sbavatura robotizzata di profili non definiti. https://gravostar.com/it/sbavatura/downloads/ (2020)
13. Plaskos, C.: Modeling and design of robotized tools and milling techniques for total knee arthroplasty. PhD thesis in Engineering Sciences, Université Joseph-Fourier, Grenoble (2005)
14. XISHAN: Catalogue of single-use sterile tools. https://pdf.medicalexpo.com/pdf/chongqing-xishan-science-technology/xishan-catalogue-single-use-sterile-tools-consumables/121460-231645.htm (2020)

Skeleton and Medial Axis Functions Evaluation of Voxel Discretized Geometries Nicola Cappetti , Silvia Pierri, and Carlotta Fontana(B) University of Salerno, 84084 Fisciano, SA, Italy [email protected]

Abstract. In the field of medical image processing, the resolution capacity of the initial diagnostic investigations is becoming increasingly important. The raw image set is in fact subjected to three-dimensional reconstruction analysis, by partitioning the regions of interest, as well as to local investigations aimed, for example, at extrapolating topological information related to the morphology of the object under investigation. The accuracy of these functions is, however, difficult to quantify, due to the lack of three-dimensional models that act as a reference Gold Standard. Here, the reproduction of CT-like diagnostic acquisitions, starting from a virtual scanning procedure of a known three-dimensional geometry, is used. To do this, triangular tessellated three-dimensional models of various geometries were examined. These were decomposed into cubic elements, equal in size to those of a common voxel, thus producing a volume-scan simulation of the original region considered. The structure thus obtained was then subjected to skeletonization and medial axis algorithms to evaluate the effectiveness of some of the functions most commonly used in medical processing. A virtual scanning model of this type can be an extremely effective evaluation tool in discriminating the resolutive quality of medical image processing functions. From a qualitative comparison of this type, it is possible to optimize automated anatomical investigation algorithms, making a significant contribution to the refinement of the increasingly demanding image processing techniques used in the biomedical field.
Keywords: Medical image acquisition · Voxelization · Thinning and Medial Axis

1 Introduction
Strong developments in computer technology are driving the introduction of increasingly complex tools to support the most diverse industrial sectors. In this sense, research and development in the biomedical field plays a leading role. Surgical practice, today more than ever, makes use of instruments and computerized devices of high precision. Therefore, the ability of this equipment to provide extremely accurate results, in the shortest possible time, becomes particularly important. The most modern diagnostic acquisition techniques contemplate the implementation of effective optimization algorithms, with the aim of producing an extremely small margin of error.

In this sense, the medical image represents the starting tool for the most common diagnostic and treatment practices with respect to a particular pathology of the human system. In the purely IT engineering field, the search for the differences in detail between one diagnostic technique and another is of fundamental importance in understanding the capacity of these tools to detect morphological alterations of clinical interest [1–3]. Among the major diagnostic imaging techniques, computed tomography (CT) plays a fundamental role for its ability to produce digital reproductions with good anatomical detail and a relatively short computational waiting time, characteristics that make it the most widely used diagnostic tool in emergency rooms [4, 5]. The use of CT-type diagnostic imaging techniques also broadens the horizons of research in the bioengineering field, aimed above all at the automatic recognition of pathological anomalies starting from the analysis and three-dimensional reconstruction of entire series of acquired images. These investigations rely on Machine Learning or topological analysis algorithms capable of recognizing the presence of even minimal alterations, starting from the study of the morphology of the anatomical region of interest. An example concerns the automatic detection of cerebral and aortic aneurysms, until now entrusted only to the distinctive abilities of a clinical eye, which can represent a fundamental aid in promptly preventing the formation and development of the disease, limiting the increase in the level of risk for the patient's life [6–8]. Another medical application concerns the reconstruction of injured anatomical parts for the design and implantation of custom-made replacement prostheses or scaffolds using 3D printing [9]. Most of these processing techniques use scanned images, segmented by a clinical expert, to verify the reliability of the reconstruction results obtained, using them as the gold standard of reference. The implementation of medical images in the development of effective algorithms has highlighted, however, the presence of substantial defects in their resolution. In the field of computed tomography, in fact, inaccuracies in the acquisition of images can occur for many reasons, among which the partial volume effect [10, 11], noise and the image resolution [12]. In the partial volume effect, the imprecision is due to the presence within the voxel of a structure with a different density, which contributes to the determination of the average density of the entire voxel. For this reason, a clear margin between two structures is less defined. This effect is less evident when the dimensions of the voxel are reduced by reducing the slice thickness or through the use of larger matrices and a narrow reconstruction field. The dimensions of the voxels, together with the number of layers considered, greatly contribute to defining the resolutive quality of the scans obtained. The present research work aims to reproduce a virtual scanning model, inspired by the CT technique, starting from a known geometry in the form of a mesh. The algorithm scans, slice by slice, the starting mathematical element, which acts as a true Gold Standard of reference, allowing the elaboration of the images obtained through the implementation of algorithms of three-dimensional analysis and reconstruction.
The goal is to determine the effectiveness of the algorithmic models used, through a direct comparison with the starting geometric model. In this way, it is possible to circumvent the strict dependence on the manual segmentation of scanned images, which is often affected by artifacts that alter their quality and introduce errors in the dimensional comparison measurements.

Starting from the scanning of elementary geometric models, the skeletonization and medial axis functions, widely used in the biomedical field, are implemented here in order to evaluate their effectiveness and to establish a comparative analysis with respect to some of the algorithms most frequently used for their extrapolation. Of the aforementioned functions, the first performs a three-dimensional erosion of the geometry to which it is applied, returning the central axis of the object while preserving the connectivity between its constituent voxels; the second computes the point-wise distances of the voxels of the model from the closest background. These are shape functions through which the dimensional and geometric information of the starting model is obtained. Their application in the technological field is very simple. In the biomedical field, in particular, they are widely used in the three-dimensional reconstruction of anatomical regions; examples concern the identification of lobes in the liver [13] and the segmentation of cranial and optic nerves [14, 15]. Often, the recourse to shape algorithms makes it possible to discriminate between different diagnoses by accurately describing the structure of a particular anatomical anomaly [16]. Application examples of shape functions in related fields, such as Rapid Prototyping [17] and 3D navigation [18], are also reported. The quality of segmentation and analysis starting from series of images, as in the case of CT, is strongly influenced by their resolution, and in particular by the characterization of the voxel distribution in space [19–21]. The present research work aims to reproduce a virtual scan of known three-dimensional models and to characterize the reconstruction differences, compared to the original model, as the dimensional characteristics of the voxels vary. The three-dimensional models thus obtained were then subjected to shape analysis, in order to evaluate the effectiveness of the algorithms used, by means of a qualitative comparison between some of the main functions made available by image processing libraries, using the original geometry as a reference Gold Standard in the comparison between the obtained and real measurements.

2 Methods
This research work is based on the employment of a voxelization algorithm for three-dimensional geometries capable of simulating the result of a CT scan. This algorithm was developed in-house with the aim of using it for the analysis of new and commonly used shape functions in the field of medical image processing. The idea arises from the difficulty, in the field of Medical Imaging, of making use of a reference Gold Standard to evaluate the accuracy of the results deriving from the use of shape reconstruction algorithms. An illustrative case concerns the frequent use of Thinning and Medial Axis algorithms, capable of providing precise information on the morphological characteristics of the object to which they are applied. To effectively verify the validity of the reconstruction and analysis processes in the medical field, simple three-dimensional geometries were modeled, characterized by a small-scale tubular configuration with a circular section. The reason lies in the fact that, in the field of medical imaging, anatomical elements of reduced dimensions, such as vascular structures, are the most difficult to reconstruct and of greatest interest in the routine implementation of data processing and extrapolation capable of deducing morphological and dimensional information.

The results obtained, in the form of dimensional values, downstream of the shape function implementation, will finally be compared with the actual dimensions of the known original parametric models, in order to analyze the behavior of the various functions tested and the accuracy of their results.

2.1 Geometry Models
Twenty scans were carried out relating to two geometric shapes modeled in a CAD environment: a two-cylinder element and a circular element in the shape of a torus. The characteristics of the mathematical models created are shown in Table 1. The reproduced geometries are representative of the morphological characteristics most commonly found in the anatomical field, i.e., tubular structures associated with frequent section variations. In addition to evaluating the algorithmic performance in the extrapolation of the medial axis as a function of the aforementioned silhouettes, their use was also evaluated by varying the spatial inclination, a condition that is not infrequently associated with dimensional errors in algorithmic processing. The dimensions of the models realized are on a small scale, in order to highlight the sensitivity of the virtual scan model in the reconstruction of particularly reduced volume details, in relation to the real dimensions of anatomical interest reproduced in the diagnostic acquisition phase in the radiological field (Fig. 1).

Table 1. Dimensional characteristics of geometry models.

Geometry (horizontal, 30°, 45°, 60°, vertical orientations) | Dimensions and size
Bi-cylinder | D1 = 4 mm; D2 = 8 mm; L1 = 7 mm; L2 = 14 mm
Torus_6 mm | D = 2 mm; R = 3 mm
Torus_8 mm | D = 2 mm; R = 4 mm
Torus_10 mm | D = 2 mm; R = 5 mm


Fig. 1. Technical drawing of the geometric figures modeled in parametric form: bi-cylinder shape on the left, torus shape on the right.
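As a rough illustration of how such reference meshes can be produced programmatically (the paper models them in a CAD environment and exports STL files), the sketch below builds the bi-cylinder of Table 1 with the trimesh library; the file name and the use of trimesh are assumptions, not the authors' workflow.

```python
import trimesh

# Bi-cylinder dimensions from Table 1 [mm]
d1, d2, l1, l2 = 4.0, 8.0, 7.0, 14.0

thin = trimesh.creation.cylinder(radius=d1 / 2.0, height=l1)
thick = trimesh.creation.cylinder(radius=d2 / 2.0, height=l2)
thin.apply_translation([0.0, 0.0, (l1 + l2) / 2.0])    # stack the thin cylinder on the thick one
bicylinder = trimesh.util.concatenate([thick, thin])    # single tessellated gold-standard model
bicylinder.export("bicylinder.stl")
# The torus models (D = 2 mm, R = 3/4/5 mm) can be generated analogously and rotated
# to obtain the 30°, 45°, 60° and vertical configurations.
```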

2.2 Scanning Imaging Algorithm
The geometric models thus obtained were stored in mesh format and processed within an algorithm developed in a Python programming environment for the implementation of the virtual scanning. The procedure reads the geometry in .STL format, which is then subjected to a process of reduction of the geometric volume into voxels, through an in-house voxelization process. In particular, for each of the geometries, three different volumes have been obtained, which differ from each other in the dimension of the voxels that constitute them, assumed respectively equal to 0.3, 0.35 and 0.4 mm. The voxel was here considered to be cubic in shape. The peculiarity of the volume obtained is that it intrinsically contains, for each voxel, the information relating to the percentage of occupation of the geometry within the single voxel element. More precisely, the scanning process performs a volume calculation inside each cell considered, assigning a maximum value to the voxels entirely occupied by the geometry under examination, and a value different from zero in the remaining cells constituting the edge surface, proportional to the level of volume filled by the model inside the cell.
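The in-house voxelizer is not published; a minimal sketch of the same idea (an occupancy fraction per cubic voxel) might look like the following, using trimesh's point-containment test as a stand-in (an assumption, not the authors' implementation; the containment test may require an optional ray backend).

```python
import numpy as np
import trimesh

def voxelize_partial_volume(stl_path, pitch=0.3, k=3):
    """Occupancy fraction per cubic voxel, approximated with k^3 sample points."""
    mesh = trimesh.load(stl_path)
    lo, hi = mesh.bounds
    nx, ny, nz = np.ceil((hi - lo) / pitch).astype(int)
    grid = np.zeros((nx, ny, nz))

    offs = (np.arange(k) + 0.5) / k * pitch
    sub = np.stack(np.meshgrid(offs, offs, offs, indexing="ij"), -1).reshape(-1, 3)
    for i in range(nx):
        for j in range(ny):
            for m in range(nz):
                pts = lo + np.array([i, j, m]) * pitch + sub
                grid[i, j, m] = mesh.contains(pts).mean()   # fraction in [0, 1]
    return grid

# occupancy = voxelize_partial_volume("bicylinder.stl", pitch=0.3)   # also 0.35 and 0.4 mm
```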

2.3 Shape Analysis: Thinning and Distance Function Algorithms
The models thus obtained constitute a volume representation, in the form of voxels, of any geometry considered. The use of a discretized geometry of this type makes it possible to obtain a reproduction, in a similar form, of the volume elements normally used in medical image processing algorithms. This makes it possible to use this type of geometry in analysis and processing algorithms aimed at obtaining key information in computational investigations of a topological and morphological nature, often fundamental in the radiological field for formulating a correct diagnosis. The shape algorithms used here in the processing of image volumes are of the Skeleton and Medial Axis type. In particular, by skeleton we mean the reduction of a geometry to the central axis only, with a thickness of one voxel. The medial axis attributes to the central skeleton the information relating to the distance from the nearest background, expressed as a point-wise distribution of distance elements through which it is possible to identify the shape dimensions of the element to which it belongs. There are many data analysis libraries, especially in the Python language, which provide efficient skeletonization and distance function algorithms. In the present work three of them have been used, namely Scikit-image, ITK and SGEXT, to evaluate the processing differences between the different methods and to compare them as the dimensions of the voxels, through which the geometries have been reconstructed in the previous volume scanning phase, vary. The first case concerns the generation of the skeleton using the algorithm provided by the Scikit-image library for three-dimensional geometries; the theoretical criterion on which the algorithm is based is [22]. The calculation of the distances was obtained by implementing the Euclidean distance transform provided by the SciPy library. A second skeleton was obtained using the thinning algorithm developed by the ITK library, for which reference should be made to [23]; the algorithm automatically extrapolates the medial axis, referring to the Euclidean distance calculation described in [24]. Finally, a third algorithm was tested, relating to the model proposed by the SGEXT library, whose bibliographic reference for the elaboration of the skeleton is [25]; in turn, the function extrapolates the medial axis, relying in the calculation on the external DGtal library, based on the Lp metric method. The following tables (Tables 2, 3 and 4) show the results obtained downstream of the previously described elaborations.
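For the Scikit-image/SciPy pipeline, a minimal sketch of the skeletonization and medial-axis distance extraction is given below (in older scikit-image versions the 3-D thinning is exposed as skeletonize_3d); the ITK and SGEXT pipelines are driven analogously through their own Python bindings. The threshold used to binarize the partial-volume grid is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize   # 3-D thinning of Lee et al. [22]

def skeleton_and_medial_axis(occupancy, pitch=0.3, threshold=0.5):
    solid = occupancy >= threshold                 # binarize the partial-volume grid
    skel = skeletonize(solid)                      # central axis, one voxel thick
    dist = distance_transform_edt(solid) * pitch   # Euclidean distance to background [mm]
    return skel, dist[skel.astype(bool)]           # distances sampled along the skeleton

# skel, d = skeleton_and_medial_axis(occupancy, pitch=0.3)
# d.max(), d.min(), skel.sum()   # the quantities collected in Tables 2, 3 and 4
```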

Table 2. Maximum distance value [mm] from the medial axis results of the Scikit-image, ITK and SGEXT algorithms, for different voxel dimensions.

Geometry | Scikit-image (0.3 / 0.35 / 0.4 mm) | ITK (0.3 / 0.35 / 0.4 mm) | SGEXT (0.3 / 0.35 / 0.4 mm)
Horizontal Bi-cylinder | 3.95 / 4.14 / 3.57 | 3.51 / 3.60 / 3.02 | 3.56 / 3.70 / 3.34
30° Bi-cylinder | 3.91 / 3.97 / 4.01 | 3.47 / 3.44 / 3.44 | 3.32 / 3.38 / 3.46
45° Bi-cylinder | 3.94 / 4.15 / 3.87 | 3.57 / 3.58 / 3.24 | 3.46 / 3.53 / 3.24
60° Bi-cylinder | 3.91 / 3.97 / 4.02 | 3.47 / 3.45 / 3.44 | 3.3 / 3.03 / 3.46
Vertical Bi-cylinder | 3.95 / 4.14 / 3.51 | 3.51 / 3.60 / 2.94 | 3.56 / 3.70 / 3.34
Horizontal Torus_6 mm | 0.85 / 0.7 / 0.8 | 0.42 / 0.35 / 0.4 | 0.75 / 0.70 / 0.8
30° Torus_6 mm | 0.74 / 0.78 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.78 / 0.8
45° Torus_6 mm | 0.73 / 0.78 / 0.89 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.83
60° Torus_6 mm | 0.73 / 0.78 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.8
Vertical Torus_6 mm | 0.85 / 0.7 / 0.8 | 0.42 / 0.35 / 0.4 | 0.75 / 0.7 / 0.8
Horizontal Torus_8 mm | 0.73 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.7 / 0.8
30° Torus_8 mm | 0.73 / 0.78 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.8
45° Torus_8 mm | 0.73 / 0.78 / 0.89 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.83
60° Torus_8 mm | 0.73 / 0.78 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.8
Vertical Torus_8 mm | 0.73 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.7 / 0.8
Horizontal Torus_10 mm | 0.73 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.7 / 0.8
30° Torus_10 mm | 0.73 / 0.78 / 0.89 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.83
45° Torus_10 mm | 0.73 / 0.78 / 0.89 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.83
60° Torus_10 mm | 0.73 / 0.78 / 0.89 | 0.3 / 0.35 / 0.4 | 0.65 / 0.73 / 0.83
Vertical Torus_10 mm | 0.73 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.65 / 0.7 / 0.8


Table 3. Minimum distance value from the medial axis results of the Scikit-image, ITK and SGEXT algorithms, for different voxel dimensions.

Geometry | Scikit-image (0.3 / 0.35 / 0.4 mm) | ITK (0.3 / 0.35 / 0.4 mm) | SGEXT (0.3 / 0.35 / 0.4 mm)
Horizontal Bi-cylinder | 1.79 / 1.75 / 1.78 | 1.50 / 1.40 / 1.26 | 2.09 / 2.00 / 1.66
30° Bi-cylinder | 1.50 / 1.91 / 2.03 | 1.20 / 1.44 / 1.49 | 1.60 / 1.76 / 1.51
45° Bi-cylinder | 2.10 / 1.97 / 0.40 | 1.61 / 1.48 / 0.40 | 1.89 / 1.97 / 0.40
60° Bi-cylinder | 1.5 / 2.01 / 1.78 | 1.2 / 1.48 / 1.26 | 1.6 / 1.06 / 1.51
Vertical Bi-cylinder | 1.50 / 1.40 / 0.40 | 1.20 / 1.05 / 0.40 | 2.09 / 2.00 / 0.40
Horizontal Torus_6 mm | 0.6 / 0.7 / 0.69 | 0.3 / 0.35 / 0.4 | 0.6 / 0.70 / 0.58
30° Torus_6 mm | 0.6 / 0.61 / 0.4 | 0.3 / 0.35 / 0.4 | 0.60 / 0.50 / 0.58
45° Torus_6 mm | 0.6 / 0.7 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.58
60° Torus_6 mm | 0.6 / 0.61 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.50 / 0.58
Vertical Torus_6 mm | 0.6 / 0.7 / 0.69 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.58
Horizontal Torus_8 mm | 0.6 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.8
30° Torus_8 mm | 0.6 / 0.5 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
45° Torus_8 mm | 0.6 / 0.61 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
60° Torus_8 mm | 0.6 / 0.5 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
Vertical Torus_8 mm | 0.6 / 0.7 / 0.8 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.8
Horizontal Torus_10 mm | 0.6 / 0.7 / 0.7 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.58
30° Torus_10 mm | 0.6 / 0.5 / 0.4 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
45° Torus_10 mm | 0.6 / 0.61 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
60° Torus_10 mm | 0.6 / 0.61 / 0.56 | 0.3 / 0.35 / 0.4 | 0.6 / 0.5 / 0.58
Vertical Torus_10 mm | 0.6 / 0.7 / 0.4 | 0.3 / 0.35 / 0.4 | 0.6 / 0.7 / 0.5

Table 4. Number of skeleton elements (voxels) obtained from the discretization algorithm, for different voxel dimensions.

Geometry | Scikit-image (0.3 / 0.35 / 0.4 mm) | ITK (0.3 / 0.35 / 0.4 mm) | SGEXT (0.3 / 0.35 / 0.4 mm)
Horizontal Bi-cylinder | 30 / 32 / 13 | 30 / 32 / 13 | 26 / 28 / 25
30° Bi-cylinder | 36 / 31 / 23 | 36 / 31 / 23 | 37 / 26 / 27
45° Bi-cylinder | 30 / 28 / 221 | 30 / 28 / 154 | 22 / 23 / 235
60° Bi-cylinder | 31 / 27 / 26 | 31 / 27 / 26 | 37 / 26 / 25
Vertical Bi-cylinder | 32 / 34 / 100 | 32 / 34 / 89 | 26 / 28 / 85
Horizontal Torus_6 mm | 56 / 48 / 42 | 56 / 48 / 38 | 56 / 48 / 40
30° Torus_6 mm | 53 / 44 / 40 | 53 / 40 / 28 | 52 / 44 / 40
45° Torus_6 mm | 50 / 40 / 36 | 50 / 40 / 30 | 48 / 40 / 36
60° Torus_6 mm | 52 / 44 / 40 | 52 / 40 / 28 | 52 / 44 / 40
Vertical Torus_6 mm | 57 / 48 / 42 | 57 / 48 / 38 | 56 / 48 / 40
Horizontal Torus_8 mm | 74 / 64 / 56 | 74 / 64 / 56 | 76 / 64 / 56
30° Torus_8 mm | 70 / 60 / 52 | 70 / 56 / 30 | 71 / 60 / 52
45° Torus_8 mm | 66 / 56 / 48 | 66 / 52 / 40 | 66 / 56 / 48
60° Torus_8 mm | 71 / 60 / 54 | 71 / 56 / 30 | 69 / 60 / 52
Vertical Torus_8 mm | 73 / 64 / 56 | 73 / 64 / 56 | 76 / 64 / 56
Horizontal Torus_10 mm | 94 / 80 / 72 | 94 / 80 / 71 | 95 / 80 / 72
30° Torus_10 mm | 88 / 76 / 68 | 88 / 72 / 38 | 88 / 76 / 64
45° Torus_10 mm | 82 / 69 / 60 | 82 / 65 / 44 | 82 / 69 / 60
60° Torus_10 mm | 88 / 76 / 66 | 88 / 72 / 36 | 88 / 76 / 64
Vertical Torus_10 mm | 93 / 80 / 72 | 93 / 80 / 69 | 93 / 80 / 72

3 Results
The implementation of the skeletonization and medial axis shape functions on the geometries obtained downstream of the virtual scanning process in voxel format produced the aforementioned results, visible in Tables 2, 3 and 4. In particular, the three-dimensional display of the medial axes showed a different behavior of the three libraries, strongly dependent on the orientation of the geometry in space and on the spatial discretization, which is a function of the voxel size. The search for the central axis in the condition of greatest volume discretization considered, equal to 0.4 mm, has in fact produced different results between the algorithms, with the generation of evident errors in the axial reconstruction as the inclination of the model in space varies. An example of the results obtained under the described conditions is reported below, referring to the bi-cylinder model.


Fig. 2. Skeleton representation of the bi-cylinder geometry in the horizontal twin cylinder (letter “a”), 45° inclined twin cylinder (letter “b”) and vertical twin cylinder (letter “c”) configuration. The results obtained from the implementation of the algorithms developed by the ITK (n.1), Scikit-image (n.2) and SGEXT (n.3) libraries are reported.

As can be seen in Fig. 2, the result of the skeletonization differs as the orientation of the geometry in space varies, producing structural deformations that are particularly evident in the condition of maximum volume discretization of the object in question. This behavior, highlighted in the results produced by ITK and Scikit-image, must be taken into consideration in the processing analyses of diagnostic images, whose dimensional values lie in an extremely small-scale context and whose voxel spatial dimension, depending on the resolution of the diagnostic machine considered, lies within the range of values considered here. Moreover, the difficulty of the aforementioned algorithms in elaborating an effective reconstruction of the central axis as the topological conditions of the region of interest vary is highlighted, a phenomenon accentuated in the case of reconstruction investigations of particularly complex anatomical structures, such as vascular ramifications, in which there is no preferential direction of spatial placement. On the other hand, as regards the dimensional results, i.e., the values relating to the point distances of the central axis from the edge, since the morphologies of the skeletons resulting from the three algorithms differ from each other, it is difficult to make a mutual quantitative comparison between the distance values resulting from the application of all the medial axis functions implemented. Consequently, the three libraries' algorithms have been evaluated individually, by estimating their relative error for different spatial orientation configurations (Table 5).


Table 5. Medial error percentage values from the medial axis performance results.

Maximum distance error
Geometry | Scikit-image (0.3 / 0.35 / 0.4 mm) | ITK (0.3 / 0.35 / 0.4 mm) | SGEXT (0.3 / 0.35 / 0.4 mm)
Bi-cylinder | 1% / 4% / 13% | 3% / 5% / 16% | 8% / 19% / 7%
Torus_6 mm | 15% / 11% / 11% | 34% / 0% / 0% | 14% / 11% / 4%
Torus_8 mm | 0% / 11% / 11% | 0% / 0% / 0% | 0% / 4% / 4%
Torus_10 mm | 0% / 11% / 11% | 0% / 0% / 0% | 0% / 4% / 4%

Minimum distance error
Geometry | Scikit-image (0.3 / 0.35 / 0.4 mm) | ITK (0.3 / 0.35 / 0.4 mm) | SGEXT (0.3 / 0.35 / 0.4 mm)
Bi-cylinder | 36% / 34% / 128% | 31% / 31% / 113% | 26% / 53% / 115%
Torus_6 mm | 0% / 14% / 50% | 0% / 0% / 0% | 0% / 32% / 0%
Torus_8 mm | 0% / 33% / 37% | 0% / 0% / 0% | 0% / 34% / 33%
Torus_10 mm | 0% / 32% / 57% | 0% / 0% / 0% | 0% / 34% / 14%

In particular, with respect to the reference geometry and for each measured distance, the difference between the maximum and minimum values obtained was compared with their average. From this evaluation it emerged that the performance of the three algorithms differs with respect to the two types of geometries considered. In particular, a globally effective result is found in the calculation of the medial axis for the structures in the shape of a torus; therefore, it can be assumed that the algorithms have a high ability to keep track of the curvatures of the geometry considered. Of all the libraries examined, ITK is the one that performs best when the spatial conditions of the object vary, being essentially independent of the physical orientation of the object in space. This characteristic is denoted by the orientation-invariant behavior exhibited by the ITK function on the bi-cylinder geometric element, subjected to an incremental modification of the spatial inclination, respectively equal to 30°, 45°, 60° and 90°. For the bi-cylinder representation, however, a good behavior was also obtained by the algorithm of the Scikit-image library, which, in general, approximates the dimensions of the object more accurately. Even in this second geometric condition, however, the result produced by the ITK processing best approximates the morphological characteristics of the model, preserving its conformation when the topological conditions vary. Comparing the different discretization models obtained by varying the voxel dimensions, a greater measurement accuracy is found for smaller voxel dimensions, in particular for 0.30 mm, indicating that the voxel size influences the behavior of the aforementioned shape functions.
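Under the assumption that the relative error is the max-min spread of the measured distances across the orientation configurations, normalized by their mean, the figures of Table 5 can be reproduced directly from Tables 2 and 3, as in the short sketch below.

```python
import numpy as np

def relative_error_percent(values):
    """Spread between maximum and minimum values, normalized by their mean, in percent."""
    v = np.asarray(values, dtype=float)
    return (v.max() - v.min()) / v.mean() * 100.0

# Scikit-image maximum distances of the bi-cylinder at 0.3 mm voxels (Table 2):
print(relative_error_percent([3.95, 3.91, 3.94, 3.91, 3.95]))   # about 1%, cf. Table 5
```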


4 Conclusions
This research work focused on testing a CT-like voxelization algorithm to perform a comparative evaluation of some of the main 3D shape processing algorithms, starting from three-dimensional objects represented in the form of voxels. The discretization algorithm provides representations of the same object as the voxel dimensions vary, making it possible to reproduce the three-dimensional model with the desired degree of volume accuracy, adapting it to the resolution capabilities of the diagnostic machines used for comparison. Two shape algorithms, of skeletonization and medial axis type respectively, were applied, with the aim of making a qualitative comparison between different algorithms developed by some of the main data processing libraries, namely Scikit-image, ITK and SGEXT. The comparison is aimed at establishing their effectiveness in performing the dimensional and morphological reconstruction of small objects, in connection with the processing of volume images generally used in radiology. The use of similar analysis tools allows the automatic extrapolation of key information in the development of a correct clinical diagnosis, for which a high level of accuracy is required, especially for a small-scale level of anatomical detail. In this sense, two different tubular geometries were modeled: the first, a twin-cylinder, characterized by a sudden variation in section; the second, in the shape of a torus, characterized by a strong curvature. The two models were considered according to different spatial inclinations and subjected first to a voxel volume scanning algorithm, for which three different voxel dimensions were compared. The configurations thus obtained were then subjected to shape analysis, allowing the extrapolation of the precise distribution of the dimensional measurements of the objects considered. A comparative analysis of the results showed a greater accuracy in the approximation of the dimensional measurement by the Scikit-image library for the maximum distance, followed by the SGEXT library, which performs better, however, in the measurement of the minimum distance. There is, moreover, a more effective behavior of the algorithm developed by ITK in correctly approximating the maximum and minimum distance values independently of the variation of the inclination of the object in space, a condition that can be decisive in ensuring high precision and accuracy of results in the diagnostic field. The use of a virtual scanning model of three-dimensional objects can be extremely useful in the field of medical imaging for validating the accuracy of the results obtained downstream of the topological and shape processing normally used in diagnostic investigation software. The model allows, in fact, the designation of a reference Gold Standard, normally entrusted to a manual segmentation, no longer affected by the presence of errors deriving, upstream, from the resolution of the digitalization process of the diagnostic images. Such a tool can, therefore, be useful in improving the effectiveness and accuracy of three-dimensional model processing algorithms, which are now increasingly popular in the optimization of processing analyses in the medical field.

References
1. Khalkhali, I., Mena, I., Diggles, L.: Review of imaging techniques for the diagnosis of breast cancer: a new role of prone scintimammography using technetium-99m sestamibi. European Journal of Nuclear Medicine 21(4), 357–362 (1994)
2. Kherlopian, A.R., et al.: A review of imaging techniques for systems biology. BMC Systems Biology 2(1), 1–18 (2008)
3. Salehi, S., Abedi, A., Balakrishnan, S., Gholamrezanezhad, A.: Coronavirus disease 2019 (COVID-19): a systematic review of imaging findings in 919 patients. AJR Am. J. Roentgenol. 215(1), 87–93 (2020). https://doi.org/10.2214/AJR.20.23034
4. Juliusson, G., Thorvaldsdottir, B., Kristjansson, J.M., Hannesson, P.: Diagnostic imaging trends in the emergency department: an extensive single-center experience. Acta Radiologica Open 8(7), 2058460119860404 (2019)
5. Jin, P.F.K., Van Geene, A.R., Linnau, K.F., Jurkovich, G.J., Ponsen, K.J., Goslings, J.C.: Time factors associated with CT scan usage in trauma patients. European Journal of Radiology 72(1), 134–138 (2009)
6. Larrabide, I., Villa-Urio, M.C., Cardenes, R., Pozo, J.M., Hose, D.R., Frangi, A.F.: Automated intracranial aneurysm isolation and quantification. In: 2010 Annual International Conference of the IEEE Engineering in Medicine and Biology, pp. 2841–2844. IEEE (2010)
7. Meng, C., Yang, D., Chen, D.: Cerebral aneurysm image segmentation based on multi-modal convolutional neural network. Computer Methods and Programs in Biomedicine 208, 106285 (2021)
8. Kauffmann, C., et al.: Measurements and detection of abdominal aortic aneurysm growth: accuracy and reproducibility of a segmentation software. European Journal of Radiology 81(8), 1688–1694 (2012)
9. Ciocca, L., De Crescenzio, F., Fantini, M., Scotti, R.: CAD/CAM and rapid prototyped scaffold construction for bone regenerative medicine and surgical transfer of virtual planning: a pilot study. Computerized Medical Imaging and Graphics 33(1), 58–62 (2009)
10. Heckel, F., et al.: Segmentation-based partial volume correction for volume estimation of solid lesions in CT. IEEE Transactions on Medical Imaging 33(2), 462–480 (2013)
11. Liu, W., Zhu, Y., Tang, L., Zhu, X., Xu, Y., Yang, G.: Effect of various environments and computed tomography scanning parameters on renal volume measurements in vitro: a phantom study. Experimental and Therapeutic Medicine 12(2), 753–75 (2016)
12. Barrett, J.F., Keat, N.: Artifacts in CT: recognition and avoidance. Radiographics 24(6), 1679–1691 (2004)
13. Zwettler, G., Swoboda, R., Pfeifer, F., Backfrieder, W.: Fast medial axis extraction algorithm on tubular large 3D data by randomized erosion. In: International Conference on Computer Vision and Computer Graphics, pp. 97–108. Springer, Berlin, Heidelberg (2009). https://doi.org/10.1007/978-3-642-10226-4_8
14. Sultana, S., Blatt, J.E., Gilles, B., Rashid, T., Audette, M.A.: MRI-based medial axis extraction and boundary segmentation of cranial nerves through discrete deformable 3D contour and surface models. IEEE Transactions on Medical Imaging 36(8), 1711–172 (2017)
15. Noble, J.H., Dawant, B.M.: An atlas-navigated optimal medial axis and deformable model algorithm (NOMAD) for the segmentation of the optic nerves and chiasm in MR and CT images. Medical Image Analysis 15(6), 877–88 (2011)
16. Fetita, C., et al.: Transferring CT image biomarkers from fibrosing idiopathic interstitial pneumonia to COVID-19 analysis. In: Medical Imaging 2021: Computer-Aided Diagnosis, vol. 11597, p. 1159708. International Society for Optics and Photonics (2021)
17. Ding, D., Pan, Z., Cuiuri, D., Li, H., Larkin, N.: Adaptive path planning for wire-feed additive manufacturing using medial axis transformation. Journal of Cleaner Production 133, 942–952 (2016)
18. Fu, M., Liu, R., Qi, B., Issa, R.R.: Generating straight skeleton-based navigation networks with Industry Foundation Classes for indoor way-finding. Automation in Construction 112, 103057 (2020)
19. Andre-Luiz-Ferreira Costa, B.V., Barbosa, J.P.P.G., Alison-Jhisel-Mansmith Calle, M.P., Santamaria, S.L.P.D.C.: Influence of voxel size on the accuracy of linear measurements of the condyle in images of cone beam computed tomography: a pilot study. Journal of Clinical and Experimental Dentistry 10(9), e876 (2018)
20. Moshfeghi, M., Tavakoli, M.A., Hosseini, E.T., Hosseini, A.T., Hosseini, I.T.: Analysis of linear measurement accuracy obtained by cone beam computed tomography (CBCT-NewTom VG). Dental Research Journal 9(Suppl 1), S57 (2012)
21. Ganguly, R., Ramesh, A., Pagni, S.: The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions. Imaging Science in Dentistry 46(2), 93–101 (2016)
22. Lee, T.C., Kashyap, R.L., Chu, C.N.: Building skeleton models via 3-D medial surface axis thinning algorithms. CVGIP: Graphical Models and Image Processing 56(6), 462–478 (1994)
23. Homann, H.: Implementation of a 3D thinning algorithm. Insight Journal 42 (2007)
24. Maurer, C.R., Qi, R., Raghavan, V.: A linear time algorithm for computing exact Euclidean distance transforms of binary images in arbitrary dimensions. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(2), 265–270 (2003)
25. Couprie, M., Bertrand, G.: Asymmetric parallel 3D thinning scheme and algorithms based on isthmuses. Pattern Recognition Letters 76, 22–31 (2016)

User-Centered Design for Manufacturing and Assembly: An Application in the Early Design Stages Cristina Renzi(B)

and Francesco Leali

Department of Engineering “Enzo Ferrari”, University of Modena and Reggio Emilia, Via P. Vivarelli, 10, 41125 Modena, Italy {cristina.renzi,francesco.leali}@unimore.it

Abstract. In product design fields where the demand for efficient usability is critical to the final success of a product, it is essential to integrate user requirements into the design process from the earliest design stages. Particularly in the medical context, it is essential to systematically control the design phases in order not to neglect the strict constraints related to engineering design quality, safety, and usability requirements for the end user. Hence, in this work, an integrated user-centered product design and process simulation method is proposed. This method is aimed at enriching the state of the art in integrated product design and process simulation in the early design stages, as well as providing a novel design example of a device conceived for medical contexts. The case study consists in the design of an intercom, aimed at improving patient-doctor communication in the case of bedridden patients wearing a helmet for Continuous Positive Airway Pressure (CPAP) therapy during the COVID-19 pandemic emergency. Patients undergoing helmet-assisted ventilation are often immersed in a highly noisy environment, unable to fully communicate their needs to the doctors. Intercom devices are addressed to the doctors to ease communication with the patient. The proposed user-centered design for manufacturing and assembly proved to be efficient in such a complex design context.
Keywords: User-centered design · Integrated product and process design method · Process simulation

1 Introduction
Engineering design can be conceived as a complex process made of a series of decisions (i.e. "either-or") and compromises (trade-offs). In this background, systematic decision-based design methods can be very useful to handle uncertainties in complex engineering design contexts. Early design is an uncertain stage in which several constraints should be considered concurrently, to prevent later expensive redesign activities [1]. Moreover, especially in industrial contexts, the fulfillment of customers' needs is crucial for a successful product. Hence, customer needs should be translated into product requirements to design the product around the user, in a user-centered design context.


Particularly in the medical context, it is essential to systematically control the design phases in order not to neglect the numerous constraints related to engineering design quality and the satisfaction of safety and usability requirements for the end user. The usability of products in the medical field is gaining ground, so that numerous facilities are dedicated to usability tests of products and machines for the medical field. Hence, in this work, an integrated user-centered product design and process simulation method is proposed. This method is aimed at enriching the state of the art in integrated product design and process simulation in the early design stages. The case study described in this work consists in the design of an intercom, aimed at improving patient-doctor communication in the case of bedridden patients wearing a helmet for Continuous Positive Airway Pressure (CPAP) therapy during the COVID-19 pandemic emergency. Patients undergoing helmet-assisted ventilation are often immersed in a highly noisy environment, unable to fully communicate their needs to the doctors. Intercom devices are addressed to the doctors to ease communication with the patient. Patients must wear air-based earphones inside the helmet, while the microphone is positioned outside the helmet for safety reasons. Since the proposed device is addressed to the medical environment, it must fulfill medical certification requirements before being adopted as a certified product at the hospital. Besides electronic safety requirements, mechanical and quality requirements must also be fulfilled. To this aim, the rules for an optimal design for assembly and manufacturing are followed [2].

1.1 Related Literature Review
Integrated design approaches are widespread in the engineering design literature, as well as in medical design (e.g. [3–8]). The term 'user-centered design' (UCD) broadly describes the design processes in which the end-users influence the shape of a design [9]. The UCD method is widespread in the design literature. Several examples of web applications and digital devices apply the rules of UCD in the medical environment, essentially to collect data from users into databases for the definition of mathematical responses [10]. Nevertheless, few examples include UCD in the earliest phases of the design of medical devices. The inclusion of the user needs from the earliest design stages can be crucial for the success of the usability tests held at the end of the manufacturing stages. Hence, according to the Pahl and Beitz systematic engineering design process [11], the earliest design phases, consisting of planning and conceptual design, could be enriched, beyond the user needs already present, with simulations and predictions of the user response to the product before the prototyping phases.


2 Method
Medical environments impose strict requirements, high quality standards and safety requirements to be satisfied. Hence, structured design methods should be applied to consider all constraints in a UCD approach. An integrated product design and process simulation method is proposed in this context. To this aim, a three-step integrated design and process simulation approach has been carried out (Fig. 1). The first step is the modular design approach, starting with the functional analysis, to point out the interconnections between the parts of the assembly. This step ends with the generation of the design alternatives. In this step, Design for Manufacturing and Assembly (DFMA) rules have been adopted in the design of the connections and closures between the components, to satisfy safety requirements and to simplify assembly operations. DFMA is aimed at lowering the total cost of the final product by reducing the number of components, the assembly time and the complexity of the assembly operations. The second step is the decision-making one, aimed at considering all feasible designs for properly assembling and closing the device. The Pugh Controlled Convergence (PuCC) method is used for ranking the design alternatives as well as for selecting the most suitable one [12] in a group decision-making context. The PuCC method is based on the pairwise comparison of each generated alternative with a datum concept. According to whether the alternative compared with the datum concept is better, equal or worse than the datum concept with respect to the specific criterion, a "+", "0" or "-" is assigned, respectively. The sum of the judgments for each alternative provides the rank position of the alternative with respect to the others and to the datum concept. The third step is related to the application of the design rules for assembly and the simulation of the manufacturing process. This is aimed at predicting the assembly efficiency as well as the design suitability for the selected manufacturing process. In particular, a prediction of the force required to close the assembly is carried out. A non-linear finite element analysis (NL-FEA) is performed for evaluating the final deformation of the structure after the assembly step. Then, a simulation of the selected manufacturing process is carried out to evaluate the optimal design for manufacturing intents. In the following, the steps of the proposed approach are described.

Fig. 1. A three-step integrated product design and process simulation approach


3 Results
The results of the application of the steps of the proposed integrated product design and process simulation are reported in the following.

3.1 Step 1: Modular Design
Following a user-centered design view, the concept of the case for the intercom has been designed around the electronic and mechanical components, to obtain the most compact design for the hand of the user. To this aim, an optimized layout of the electronic components on the printed circuit board (PCB) has been chosen. For novel design concepts, as in [11], a functional analysis is required first. A functional scheme for defining the arrangement of the electronic and mechanical components is defined as in Fig. 2. Upper (1) and lower case (7) collect the electronic core (4, 5). A holder (2), designed for rapid manufacturing purposes (Fig. 3), is aimed at supporting the battery (6), separating it from the electronic core, as well as holding the speaker assembly. Illuminated momentary push buttons allow rapid patient-doctor communication.

Fig. 2. Modular design of the intercom device prototype


After having defined the components and the related positions of the electronic components, the upper and lower cases are designed focusing on feasible assembly approaches. As a reference for the following design optimization steps, a rapid manufacturing prototype has been produced. It has been closed by means of four screws and nuts, as depicted in Fig. 4-A. This prototype has been tested by means of a usability test, carried out to verify the ergonomics as well as the efficacy of the electronic components before being tested in medical departments. Hence, decisional and optimization steps have been carried out to refine the design. Three other design alternatives have been generated for the evaluation of the feasible assembly approaches, as represented in Fig. 4. In particular, design "A" is connected by means of four screws, "B" by a snap fit, "C" by a fly-nut screw mechanism, and "D" by a dovetail joint.

Fig. 3. Battery and speaker holder are designed for rapid manufacturing


Fig. 4. Generated design alternatives for connecting the upper and lower case of the intercom device.

3.2 Step 2: Decisional Step
In this step, the Pugh Controlled Convergence (PuCC) method is applied for selecting the most suitable assembly connection for the intercom device. The decision makers involved in this work were a doctor, an engineer and a manufacturer. The decisional step started with the generation of the design criteria described in the following.


• Criterion 1: Not easy re-opening. This is aimed at safety reasons, because the device is addressed to the medical environment.
• Criterion 2: Compact design. This criterion is aimed at reducing the volume to be covered by one hand.
• Criterion 3: Opening without other tools, if required. In case the battery needs to be changed, it is useful for the user to be able to open the device without any tool.
• Criterion 4: Good ergonomics in the hand (optimal handling by the user, i.e., the doctor).
• Criterion 5: Optimal design for injection molding (design for reducing sinks and air bubbles).

The most suitable solution appears to be design alternative B (snap fit connection), with the highest score (Table 1).

Table 1. Application of the PuCC for selecting the most suitable assembly asset for the intercom device.

Criterion | A Screw (reference design) | B Snap fit | C Hinge | D Dovetail joint
Not easy re-opening | 0 | + | - | -
Compact design | 0 | + | - | +
Opening without tools | 0 | + | + | +
Ergonomics | 0 | + | - | 0
Optimal design for injection molding | 0 | + | + | +
Total result | 0 | +5 | -1 | +2
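The tally behind Table 1 is simple enough to be reproduced in a few lines; the sketch below encodes the judgments as +1/0/-1 with the screw design as the datum. It is only a small illustration, not part of the original work.

```python
# Pugh Controlled Convergence tally of Table 1
# (+1 = better than the datum, 0 = equal, -1 = worse; A, the screw design, is the datum).
scores = {
    "B - snap fit":       [+1, +1, +1, +1, +1],
    "C - hinge":          [-1, -1, +1, -1, +1],
    "D - dovetail joint": [-1, +1, +1,  0, +1],
}
totals = {alt: sum(marks) for alt, marks in scores.items()}
best = max(totals, key=totals.get)          # "B - snap fit", with a total of +5
```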

3.3 Step 3: Process Simulation
Once the most suitable conceptual design for the intercom device has been chosen and its usability has been tested, the optimized design of the assembly of the upper and lower covers has been carried out. An acrylic-based plastic material (polymethyl methacrylate, PMMA) has been chosen for the injection molding of the lower and upper cases due to its transparency, easy cleaning and scratch resistance, which are effective criteria in medical environments. After the snap fit assembly has been designed, a first evaluation of the mating force for assembling the upper and lower cases has been carried out. Then, a non-linear finite element analysis (NL-FEA) has been carried out in Dassault's SolidWorks Simulation Premium educational environment, to evaluate the compromise design for the snap fit, considering both the most suitable material for injection molding and a minimal effort to open the device in case it is necessary.

Design for Assembly in a User-Centered Design Approach
In order to investigate the usability of the assembly of the case by means of snap fits, before the manufacturing steps, the mating force ("push-on force") W has been evaluated. To this aim, let us define the maximum strain at the base, ε0, of the snap fit as in Eq. 1, in which t is the thickness of the snap fit, L is the length of the snap fit, y_max is the maximal allowable deflection of the snap, and Q is the magnification/deflection factor, a function of the aspect ratio L/t defined in [13]:

$\varepsilon_0 = 1.5 \cdot \frac{t \cdot y_{max}}{L^2 \cdot Q}$   (1)

Moreover, P is the perpendicular force required to deflect the snap fit (Eq. 3). The mating force W (Eq. 2) is a function of P and of the geometric shape of the snap fit in terms of the angle α (Fig. 5):

$W = P \cdot \frac{\mu + \tan\alpha}{1 - \mu \cdot \tan\alpha}$   (2)

$P = \frac{b \cdot t^2 \cdot E \cdot \varepsilon}{6 \cdot L}$   (3)

Fig. 5. Mating force W on the snap fit

For the evaluation of the mating force W, the snap-fit design dimensions listed in Table 2 have been considered:

Table 2. Snap fit dimensions.

Name | Measure | Units of measure
b | 4 | mm
y | 2 | mm
L | 15 | mm
t | 2 | mm
α | 30 | degrees
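A minimal sketch of how Eqs. (1)-(3) can be evaluated with the dimensions of Table 2 is given below; the material data (E, μ and the allowable strain) are the values quoted in the following paragraph, and the script is only an illustration of the hand calculation, not part of the original work.

```python
import math

# Snap-fit dimensions of Table 2 and material data quoted in the text (PMMA).
t, L, y_max, b = 2.0, 15.0, 2.0, 4.0        # mm
Q = 2.25                                    # magnification/deflection factor [13]
E, mu = 3000.0, 0.2                         # Young's modulus [MPa], friction coefficient
alpha = math.radians(30.0)                  # insertion angle of the hook

eps_base = 1.5 * t * y_max / (L ** 2 * Q)   # Eq. (1): strain induced at the base
eps = 0.04                                  # design strain used in the text for P
P = b * t ** 2 * E * eps / (6.0 * L)        # Eq. (3): perpendicular deflection force [N]
W = P * (mu + math.tan(alpha)) / (1.0 - mu * math.tan(alpha))   # Eq. (2): mating force [N]
print(round(P), round(W))                   # about 21 N and 19 N, as reported in the text
```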

With the data collected in Table 2, Q = 2.25 for L/t = 4 [13]. The allowable strain at the base, εbase, given by Eq. 1 is about 4%, which is acceptable for acrylic-based plastics [14]. The mating force W is then evaluated from Eq. 2, computing the perpendicular force P from Eq. 3 with a Young's modulus E of 3000 MPa for the acrylic and a strain at the base of 4%. Hence, with a friction coefficient μ of 0.2 for the acrylic, the perpendicular force P is about 21 N and the mating (push-on) force W is about 19 N. This result is coherent with a manual assembly of the case by a young male user [15].

NL-FEA for the Assembly
A non-linear finite element simulation (NL-FEA) has been carried out to predict the behavior of the snap fit assembly as well as the stress during mounting and the residual deformation after the assembly. The FEA model has been built exploiting the geometric symmetry. PMMA has been chosen as the material of the assembly parts. Ten phases have been considered to simulate a complete snap fit assembly lasting one second. As for the boundary conditions, the following constraints have been applied:
– symmetric geometry constraint;
– linear translation to simulate the displacement of the "male" part during assembly;
– fixed geometry on the "female" part.
Stress and displacement results are depicted in Fig. 6 for phase 3 only.

Fig. 6. Stress and displacement in temporal phase 3 for the snap fit assembly


The maximal value of the residual deformation is at the base of the snap fit and is low (0.2%) (Fig. 7).

Fig. 7. Residual deformation at the end of the simulation.

Design for Manufacturing
After the assembly method has been defined and simulated, the selected manufacturing process is simulated as well. A plastic injection molding simulation has been carried out to predict, from the early design stages, possible manufacturing problems due to design issues. In a first run of the plastic injection molding simulation, sink marks (maximum dimension of 13 μm) have been highlighted in correspondence with the thicker areas (Fig. 8).

Fig. 8. Sink marks in thicker areas (upper case on the left, lower case on the right)


Surface sinks are depressions on the surface of an injection molded plastic part. Their fundamental cause is that not enough polymer is packed into the part to compensate for shrinkage during cooling. Thicker areas are particularly prone to sink marks, because thicker sections of a part cool at slower rates than thinner sections and therefore shrink significantly more. Hence, whenever possible, the part should be designed with uniform wall thickness. After the areas of non-uniform thickness had been identified, the design of the case has been modified with the aim of reducing sink mark problems. A second run of the simulation has then been carried out, with better results (Fig. 9).

Fig. 9. Sink marks have been reduced by making the thickness uniform

4 Discussion and Conclusions

The proposed integrated design method has proven effective for the evaluation of a novel product conceptual design from a user-centered viewpoint in the early design stages. The use of the PuCC method in the second (decisional) step is efficient for an easy evaluation of the most suitable alternatives in a group decision-making context. Nevertheless, as for most decisional methods, some limitations should be considered. First, PuCC is based on a group discussion to assign a unique judgment to the alternatives, with respect to each criterion, in comparison to a datum concept. This necessarily leads to neglecting the opinions of some decision makers in favour of the judgment of the majority. Moreover, the PuCC method is more efficient if more alternatives are included, because novel alternatives can be generated after the application of the PuCC method. This is clearly explained by Frey et al. [16], who suggest that the strength of the method lies in the "controlled" convergence of the set of design concepts towards the most suitable one. Other limitations are related to errors due to pairwise comparison, as explained in [13] and [16]. In particular, it is highlighted that pairwise comparisons can generate misleading conclusions by introducing significant errors into decision making. This has been claimed extensively by Saari and Sieberg since 2004 [17]. They provide an interesting demonstration related to a problem in pairwise comparison, stating that: "suppose the A > B > C ranking holds over all criteria…. If we just rely on the pairwise outcomes, this tally suggests that the A > B and A > C rankings have the same intensity…. It is this useful intensity information that pairwise comparisons lose…". This raises a fundamental point related to the intensity of feelings. Frey et al. [16] conclude that, according to this viewpoint, it is not enough for an engineering method to lead to the selection of a good concept. As a conclusion, the integrated process design and simulation method proved to be effective in a product design context centered on the user. In an engineering field where the usability of a product is fundamental for the success of the final product, it is essential to integrate usability aspects into the design process from the early design stages. Moreover, in the medical field it is essential to control the design steps so as not to neglect the numerous constraints linked to quality and to the satisfaction of safety and usability requirements for the end user. Therefore, integrating the simulation of the assembly and manufacturing processes into the design method can be a plus for a product intended for complex fields such as medicine.

References
1. Renzi, C., Di Angelo, L., Leali, F.: Automotive design engineering: material and processes selection problems. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 373–384. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4_32
2. Boothroyd, G.: Design for manufacture and assembly: the Boothroyd-Dewhurst experience. In: Huang, G.Q. (ed.) Design for X, pp. 19–40. Springer, Dordrecht (1996). https://doi.org/10.1007/978-94-011-3985-4_2
3. Panari, D., Renzi, C., Vergnano, A., Bonazzi, E., Leali, F.: Integrated design method for optimal tolerance stack evaluation for top class automotive chassis. In: Eynard, B., Nigrelli, V., Oliveri, S., Peris-Fajarnes, G., Rizzuti, S. (eds.) Advances on Mechanics, Design Engineering and Manufacturing. LNME, pp. 1013–1022. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-45781-9_101
4. Renzi, C., Leali, F.: An integrated modular design and group decision-making method in complex design environments. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 581–592. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_58
5. Renzi, C., Leali, F.: Multicriteria decision making methods in engineering design: an application on medical devices. In: Roucoules, L., Paredes, M., Eynard, B., Morer Camo, P., Rizzi, C. (eds.) JCM 2020. LNME, pp. 406–411. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-70566-4_64
6. Alexander, K., Clarkson, P.J.: A validation model for the medical devices industry. J. Eng. Des. 13(3), 197–204 (2002)
7. Alexander, K., Clarkson, P.J.: Good design practice for medical devices and equipment, Part II: design for validation. J. Med. Eng. Technol. 24(2), 53–62 (2000)
8. Druzgalski, C.: Bioengineering and biotechnology - systems on a chip/integrated design. In: Wescon Conference Record, pp. 292–296 (1998)
9. Abras, C., Maloney-Krichmar, D., Preece, J.: User-centered design. In: Bainbridge, W. (ed.) Encyclopedia of Human-Computer Interaction, vol. 37, no. 4, pp. 445–456. Sage Publications, Thousand Oaks (2004)
10. Göttgens, I., Oertelt-Prigione, S.: The application of human-centered design approaches in health research and innovation: a narrative review of current practices. JMIR Mhealth Uhealth 9(12), e28102 (2021)
11. Beitz, W., Pahl, G., Grote, K.: Engineering design: a systematic approach. MRS Bull. 71 (1996)
12. Renzi, C., Leali, F., Pellicciari, M., Andrisano, A.O., Berselli, G.: Selecting alternatives in the conceptual design phase: an application of fuzzy-AHP and Pugh's controlled convergence. Int. J. Interact. Des. Manuf. (IJIDeM) 9(1), 1–17 (2013). https://doi.org/10.1007/s12008-013-0187-y
13. BASF: Snap-Fit Design Manual. BASF Corporation, New Jersey (2017)
14. http://www.lehighvalleyplastics.com/wp-content/uploads/2013/09/Acrylic_Properties.pdf
15. Rusli, L., Luscher, A., Sommerich, C.: Force and tactile feedback in preloaded cantilever snap-fits under manual assembly. Int. J. Ind. Ergon. 40(6), 618–628 (2010)
16. Frey, D.D., et al.: The Pugh controlled convergence method: model-based evaluation and implications for design theory. Res. Eng. Des. 20, 41–58 (2009)
17. Saari, D.G., Sieberg, K.K.: Are partwise comparisons reliable? Res. Eng. Des. 15, 62–71 (2004)

A Medical Device for Correctly Positioning Custom-Made Implants for Fractured Orbital Cavities

Manila Caragiuli1(B), Marco Mandolini1, Agnese Brunzini1, Alida Mazzoli2, and Mario Pagnoni3

1 Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
[email protected]
2 Department of Materials, Environmental Sciences and Urban Planning, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
3 yourFACE, Clinica Ars Biomedica, Via Luigi Bodio 58, 00191 Roma, Italy

Abstract. The orbital walls and floor are common sites of facial bone fracture and may cause severe functional impairment. The complex geometry of the bony orbit makes anatomical reconstruction extremely challenging, with main issues related to the implant’s correct shaping, positioning, and orientation inside the orbital cavity. This study proposes an innovative medical device to place patient-specific implants in fractured eye sockets properly. The device must be used with the developed improved version of a tailored implant shaping mould. The design of the orbital implant positioner followed specific clinical and technical requirements and specifications investigated through the Quality Function Deployment method. The device has been conceived to be simple, economical, capable of managing deantigenated bones or titanium meshes for orbital floor and wall, and reusable multiple times. The positioner consists of two handles hinged together and adequately coupled by a spring to allow the grasping and placing of the implant. Positioner and mould have been manufactured in polyamide using the Selective Laser Sintering technique. The system accuracy assessment resulted in promising outcomes. The mould can precisely shape the implant with a lower than 0.1 mm deviation. The implant positioner can place the implant with a rotation angle around the orbital rim of barely 7.1° and 1.2 mm deviation in the mediolateral direction (no deviations in the anteroposterior and superior-inferior directions occur). Keywords: Rapid prototyping · Craniomaxillofacial surgery · Computer-aided design · Medical devices · Implant design

1 Introduction

The orbital wall and floor are common sites of facial bone fracture and may cause severe functional impairment. These fractures are very frequently associated with damage to the surrounding soft tissue and orbital cavity contents, driving several adverse conditions such as diplopia, enophthalmos, dystopia, vertical displacement of the globe, loss in visual acuity, ocular muscle entrapment, and eye movement limitation [1]. An orbital reconstruction aims to restore the skeletal cavity, orbital volume, functionality, and facial symmetry [2]. The success of reconstructive orbital surgery depends on diverse aspects of the preoperative evaluation of the defect, the implant's design and manufacturing, and the surgery [3]. The starting point for an efficient orbital reconstruction is an accurate choice of the implant. In particular, the implant material should guarantee proper structural support [4] and satisfy features such as biocompatibility, availability, ease of shaping, osteoinduction, and bio-absorbability [5]. Despite autogenous bone grafts being considered the gold standard for their higher biocompatibility, the donor site availability and the additional patient morbidity have led to the diffusion of allogeneic grafts (such as irradiated bone, lyophilised cartilage, and dura), xenografts, and non-resorbable and resorbable synthetic alloplastic materials (e.g., titanium mesh, high-density porous polyethene, porous polyethene with embedded titanium, nylon foil, silicone, hydroxyapatite, polytetrafluoroethylene) [6]. Currently, titanium mesh and high-density porous polyethene implants are the most used non-resorbable synthetic alloplastic materials for orbital floor reconstructions, since they are easier to handle and offer the possibility of obtaining a precise three-dimensional (3D) reconstruction [7]. Moreover, alloplastic materials can be easily customized, and they are widely available on the market. However, the choice of the implant type is at the surgeon's discretion, as is the approach of the surgical reconstruction. In the past decades, surgeons relied on their expertise to correctly shape and place the implant within a fractured site, resulting in long surgical times and trauma to periorbital tissues caused by the unavoidable repetitive trial fitting of the implant. Preformed implants became a viable way to reconstruct an orbital wall defect, sparing the surgical time required for implant shaping. However, a critical issue when performing orbital reconstruction is associated with the implant positioning and alignment needed to provide an accurate aesthetic and functional restoration. Computer-aided design and additive manufacturing techniques have been proven to provide advantages in the reconstruction of craniofacial defects, due to the possibility of developing patient-specific implants for an accurate restoration of the defect [8]. The fabrication of a patient-specific mould able to deform a template under a pressure mechanism provides an alternative for accurate intraoperative shaping of the implant [3, 9]. Despite the benefits associated with reduced surgical complexity, a decrease of operative times, minimisation of the risk of contamination, and improvement of cosmesis and patient satisfaction, these techniques do not solve the issues related to implant positioning. Recent technological developments such as Computer Assisted Surgery (CAS) have improved the safety and outcome of existing surgical procedures for orbital reconstruction [10]. CAS allows intraoperative visualization of the target location through a navigation pointer, to compare the implant position with the preoperative virtual reconstruction.
Since electromagnetic navigation systems are prone to interferences with ferromagnetic instruments, an optoelectronic approach is preferable despite being expensive and sensitive to line-of-sight interference [11].


Scheurs et al. [12] developed an instrument for orbital implant positioning to provide feedback on the implant’s position through an implant-oriented navigation system. The tool consists of a coupling device, a spring-loaded retractor, and an insertion handle. The coupling device hook mechanism guarantees a firm implant positioning before removing the instrument. The polyethene handle is equipped with reflective markers for motion tracking. The tool is currently not approved for clinical use. However, the orbital reconstruction on ten human cadavers reveals a reduction in the fitting trial of the implant since real-time feedback on implant positioning is provided during the implant insertion, contrary to a marker-based approach that evaluates the implant positioning after implant insertion. Several techniques have been adopted to improve the accuracy of orbital implant placement. Intraoperative imaging is advantageous in orbital reconstruction since it enables checking the implant position during the surgery via CT imaging. Non-invasive intraoperative navigation can provide additional benefits through real-time feedback during implant manipulation. However, the expensive equipment limits its availability. Thus, a freehand approach is still commonly practised relying on the surgeon’s expertise in orbital reconstruction [13]. Authors have already developed a new, tailored implant shaping mould [3]. Still, it does not consider the implant positioning in the orbital cavity. However, its placement is crucial to successfully restore the pre-injury symmetry and shape of the fractured floor. The research gap is about taking the implant from the mould and correctly positioning and orienting it inside the orbital cavity. Szymor and Kozakiewicz [14] attempted to reconstruct an orbital wall using a modified orbital retractor to position a titanium mesh. Drilled holes near the retractor’s edges enabled the drag of a bioresorbable thread through them and the conical part of the pre-bent mesh until it reached the correct position. In this context, the current work aims to improve the functionality of the abovementioned tailored implant shaping mould and develop a device for the positioning and the orientation of the patient-specific implant onto the fractured orbital floor. The implant positioner is designed to improve the accuracy of the implant placement, simplifying the surgical procedure. A correct implant placement reduces the failure and reoperation rates and lowers patients’ discomfort. A product development process has been carried out to conceptualize different alternatives and develop a final physical prototype to achieve this goal. The design and development of the innovative orbital implant positioner are described in Sect. 2. The results and discussion about implant shaping, orienting, and positioning are reported in Sect. 3.

2 Design and Development of the Orbital Implant Positioner This section presents the innovative medical device, called orbital implant positioner, to exactly place and orient a custom-made implant within the orbital cavity. The system can manage implants obtained through a tailor-made shaping mould, presented in [3]. The methodology for designing and testing the device is given hereunder.


• Implant design: design of the custom-made implant through the mirroring imaging technique, suitable for unilateral defects such as orbital fractures [15]. The procedure for designing an orbital implant is described in [3]. Compared to [3], the implant accounts for extra material for being grasped by the positioning device.
• Mould design: design of the equipment for shaping the custom-made implant during surgery. The mould consists of a die and a punch coupled in a pressure mechanism that shapes a prefabricated material [3, 8]. Compared to the concept developed in [3], the mould is characterized by two small pins that fix and constrain the implant during the deformation (Fig. 3.a). The shape of the implant determines the diameter and distance between the pins. This trick is essential for placing and orienting the implant within the cavity.
• Positioner design: design of the device for placing and orienting the implant within the cavity. This system is the main contribution and novelty of this paper. The following sections present in detail the approach adopted for the development.
• Physical prototyping: realization of the devices (mould and positioner) required for a complete surgery. Section 2.5 shows the prototypes developed in this work.
• Validation: laboratory tests for evaluating the accuracy of the mould and positioner. Section 3 presents further details about the verification and results.

The following sections present the design process of the orbital implant positioner. Three design teams have been established, each one consisting of three master's degree students in Biomedical Engineering. The authors have supervised and supported the three groups and evaluated the concepts. The groups cooperated until the definition of the technical specifications; from the conceptualisation onwards, they worked separately.

2.1 Requirements and Functions

The identification of requirements is the first crucial step for designing the device. A questionnaire to identify surgeon needs was spread among 23 practitioners. It has been built starting from the following user need: a device for easily, safely and precisely placing and orienting the implant (deantigenated bone or titanium), realised through a customised mould, within the orbital cavity (floor or wall), with low cost and a friendly aesthetic, and reusable multiple times. The intention was to get feedback for prioritising the following requirements: Cost, Reusability, Ergonomics, Lightweight, Ease of use, Accuracy, and Invasiveness. The other requirements (Biocompatibility, Sterilizability, Safeness, Corrosion resistant, Usable for deantigenated and titanium implants, Usable for customised implants, Usable with implants realised through a custom-made mould, Suitable for orbital floors and walls) were not included in the questionnaire because they are mandatory for this medical device. The involved participants were medicine or health professions students who had completed a training period, clinicians, and other health professionals familiar with medical instrumentation. Three of the participants were doctors completing their specialisation in ophthalmology. Their answers have been reviewed in a one-to-one interview. The questionnaire consisted of eight closed-ended questions and one open-ended question. The requirements have been prioritised using the MoSCoW method (Table 1). The mandatory requirements got the highest score.

According to the questionnaire results, the others have been evaluated using the following mapping: the answer "High" means MUST, "Intermediate" SHOULD and "Low" COULD. According to the user needs previously defined, the functions established for this device are (i) grasping the implant, (ii) ungrasping the implant, (iii) placing the implant and (iv) orienting the implant.

Table 1. Prioritized user requirements.

| Requirement | Priority | Weight
1. | Biocompatibility | MUST | 10
2. | Cost | MUST | 10
3. | Sterilizability | MUST | 10
4. | Reusability | COULD | 4
5. | Safeness | MUST | 10
6. | Ergonomics | COULD | 4
7. | Lightweight | COULD | 4
8. | Ease of use | SHOULD | 4
9. | Corrosion-resistant | MUST | 10
10. | Accuracy | SHOULD | 7
11. | Invasiveness | SHOULD | 7
12. | Usable for deantigenated and titanium implants | MUST | 10
13. | Usable for customised implants | MUST | 10
14. | Usable with implants realised through a custom-made mould | MUST | 10
15. | Suitable for orbital floors and walls | MUST | 10

2.2 Technical Specifications

Technical specifications have been defined by analysing the questionnaire results and the medical device regulations (Table 2). Other specifications (i.e., dimensions, grasping force, weight and ergonomic shape) have been defined by meeting the stakeholders involved in the questionnaire. The assembly process, namely the time required for assembling the parts, has been set by the authors to 60 s to allow the device assembly directly within the surgery room. Roughness has been set to 0.8 µm (after polishing) to ease cleaning and sterilization [16].

Table 2. Technical specifications (columns: Technical specification, Value, UoM, Source)
The relation triples are determined from the adjacency matrices ADs and ADt, which express the structure, i.e., the oriented relations between the entities in the source and target graphs.

Flattened Entities. To express the subtleties of the structure of the graphs, we have chosen to represent each entity by its flattened shape. A flattened representation consists of capturing the internal (entity attributes) and external (inherited, i.e., relationships with adjacent entities) characteristics of an entity. We have therefore defined the flattened shape of an entity as follows: Fe = (Dea, Ter), where Dea corresponds to the attribute doubles of the entity e and Ter corresponds to the relation triples of the entity e. Thanks to flattened representations, the context of each entity is fully expressed, and the structure of the graph is fully captured in a flattened form. Fig. 4 presents the flattened entities obtained for the metamodel Family and the metamodel Person, based on the instances of the source and target models.

Fig. 4. Flattened representation of source (in blue) and target (in green) graph entities

7 Neo4j Python Driver 4.4.


Attribute Triples. idTsa and idTta, such that idTa = {id(e, a, v) | e ∈ E, a ∈ A, v ∈ V}, where id is a unique identifier associated with each entity (e.g., the entity with id 5 has the attribute triple <CHILD, FirstName, Julie>). In the case where an entity e has several attributes a, the id makes it possible to associate all the values v with a single entity.
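For illustration, the structures above can be represented as plain Python objects. The sketch below is not the authors' implementation; the entity and attribute names are taken from the Family/Person example (Fig. 4), while the relation name "children" and the identifier values are assumptions.

```python
from dataclasses import dataclass

# Identified attribute triples idTa: (id, entity, attribute, value)
id_Ta_source = {
    (5, "CHILD", "FirstName", "Julie"),   # entity #5 is a CHILD whose FirstName is Julie
}

@dataclass(frozen=True)
class FlattenedEntity:
    """Fe = (Dea, Ter): attribute doubles plus relation triples of one entity."""
    name: str
    attribute_doubles: frozenset   # Dea, e.g. {("CHILD", "FirstName")}
    relation_triples: frozenset    # Ter, e.g. {("FAMILY", "children", "CHILD")}

child = FlattenedEntity(
    name="CHILD",
    attribute_doubles=frozenset({("CHILD", "FirstName")}),
    relation_triples=frozenset({("FAMILY", "children", "CHILD")}),
)
print(child.attribute_doubles, child.relation_triples)
```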

4.3 Reinforcement Learning of Transformation Rules

Among the different reinforcement learning algorithms, Q-Learning is chosen. The experience and knowledge accumulated by the agent are stored in a table, called the Q-table, whose rows characterize all the possible states and whose columns represent all the possible actions. Thus, for each state s, it allows determining the expectation of the rewards for each action a. Fig. 5 presents the main characteristic elements of reinforcement learning, namely the states, the actions, the environment, and the reward function.

Fig. 5. Reinforcement learning applied to learning transformation rules

The Environment. It includes the instances of the source model and target model in the form of attribute triples Tsa and Tta, as well as their adjacency matrices ADs and ADt.

The States. They are defined as the source attribute patterns, in the form of doublets Dsa, and as flattened entities Fes, to express relationships between entities.

The Actions. Performing an action results in the selection of a target pattern. In other words, in the case of a model transformation, carrying out an action corresponds to choosing the target pattern which corresponds to a source pattern. The target attribute patterns are defined by doublets Dta, while the flattened entities Fet express relationships between entities.


Thus, for a source attribute pattern there exists, potentially, a target attribute pattern to match with (and similarly for source and target flattened entities). Choosing an action follows the ε-greedy policy, which means that the intelligent agent can either explore its environment by executing a random action, or act by exploiting the knowledge acquired and stored in the Q-table. The more ε decreases, the more the agent is incited to exploit its knowledge.

The Reward. The reward r depends on the result obtained after applying a certain action dt ∈ Dta and fet ∈ Fet for a state ds ∈ Dsa and fes ∈ Fes. In our case, it is a question of evaluating whether the result of the transformation of the source pattern into a target pattern is correct. Therefore, we have defined the reward as the F-measure, which considers the precision P, i.e., the ratio between the number of expected items produced and the total number of items produced, and the recall R, i.e., the ratio between the number of expected items produced and the total number of expected items:

F = 2 · (P · R) / (P + R)    (1)

To evaluate whether the result of the transformation conforms to the expected result, we apply the transformation to the elements of the source instance diagram corresponding to the source pattern. The result of the transformation is then compared to the elements of the target instance diagram corresponding to the chosen target pattern.

Optimization and Learning Process. When learning starts, all the state-action pairs of the Q-tables are initialized to zero. The value for a state-action couple, called the Q-value, is updated according to the interactions conducted by the agent in its environment and is calculated by the Bellman equation:

Qt+1(st, at) = Qt(st, at) + α · (r + γ · max_{at+1} Qt(st+1, at+1) − Qt(st, at))    (2)

The term γ · max_{at+1} Qt(st+1, at+1) allows inferring the Q-value for a state st and action at based on the best action at+1 of the next state st+1. In the case of model transformation, this term is used to measure the interdependencies with the actions conducted before. In other words, this coefficient makes it possible to assess whether the action that has just been conducted is consistent with past actions.
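As a minimal sketch of the mechanism described above (Python; the state/action encodings, the pattern spaces and the comparison with the target instance diagram are left abstract and are not the authors' code), the ε-greedy choice, the F-measure reward of Eq. 1 and the update of Eq. 2 can be written as:

```python
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 1.0     # learning rate, discount factor, exploration rate
Q = defaultdict(float)                     # Q-table: (state, action) -> Q-value, initialized to 0

def f_measure(expected, produced):
    """Reward (Eq. 1): F = 2*P*R/(P+R) computed on sets of produced/expected target elements."""
    hits = len(expected & produced)
    if hits == 0:
        return 0.0
    precision = hits / len(produced)
    recall = hits / len(expected)
    return 2 * precision * recall / (precision + recall)

def choose_action(state, actions):
    """ε-greedy policy: explore with probability ε, otherwise exploit the Q-table."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next, actions_next):
    """Eq. 2: Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q[(s_next, a2)] for a2 in actions_next), default=0.0)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```

Over the episodes, ε is progressively decreased so that the agent increasingly exploits the knowledge accumulated in the Q-table, as described in the text.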

5 Experiment

To measure the performance of the approach, the evaluation is based on the two case studies presented in Table 2.


Table 2. Training dataset

Transformations | MMs | MMt | Ms | Mt | iMs | iMt
Family2Person | (3)(4)(3) | (2)(0)(4) | - | - | (1) (6)(5)(6) | (1) (5)(0)(10)
Family2Person_extended | (6)(7)(6) | (5)(3)(9) | - | - | (1) (9)(14)(9) | (1) (6)(5)(11)

Table reading instructions: for the Family2Person transformation, the source metamodel MMs is composed of 3 classes, 4 relationships and 3 attributes (second line); the symbol '-' (first line) means that the source metamodel is not given as input to the algorithm. The iM annotation corresponds to the instance models, which are the only data given as input to the system.

Table 3. Results of experiments

Approach | Transformations | Input data | Training time (s) | Training P | Training R | Training F | Prediction F
Q-Learning | Family2Person | iMs, iMt | 11.40 | 1.0 | 1.0 | 1.0 | 1.0
Q-Learning | Family2Person_extended | iMs, iMt | 32.2 | 0.74 | 1.0 | 0.85 | 1.0

Fig. 6 shows the evolution of the F-measure, recall and precision over the episodes during the training phase of the Family2Person_extended transformation. As can be seen, the performance metrics increase with the learning and peak at 1 for the recall, 0.74 for the precision and 0.85 for the F-measure (see Table 3). This increase is due to the decrease of ε, which means that the intelligent agent exploits more and more the acquired knowledge during the learning phase.

Fig. 6. Learning performance for the Family2Person_extended transformation

A recall of 1 means that all the expected elements (classes, relationships, and attributes) were perfectly inferred, without loss. However, the precision is not perfect. This is explained by the fact that the [TOWN HALL] class appears in each target flattened entity (see Fig. 4). Therefore, each time a source flattened entity is associated with a target flattened entity, a [TOWN HALL] class is created.


Regarding the training duration, it can be significantly reduced by (1) adjusting the number of episodes necessary during the learning phase (in our case, 1000 episodes were used); (2) improving the source code of the reward function, which checks the overall consistency of the actions taken, in order to reduce the calculation time. Finally, concerning the prediction of a target model from a completely different source instance diagram (conforming to the learned source metamodel), the reuse of the Q-tables (in attributes and in relations, thanks to the flattened entities) makes it possible to infer the entire target instance diagram from a new source instance diagram. This means that all the transformation rules (see Table 1) between the source metamodel and the target metamodel have been correctly learned.

6 Conclusion

In this article, a concrete solution to model interoperability problems was provided to guarantee digital continuity. A model transformation learning approach was proposed to infer structural and semantic relationships between models. More particularly, reinforcement learning, through Q-learning, was exploited to infer the transformation rules between two metamodels. The results obtained from the two case studies are very promising, since the learning is done within a reasonable time with only one source instance diagram and one target instance diagram. Besides, the learning phase makes it possible to extract all the rules which link two metamodels. In future work, the difficulty of the datasets will be increased by adding more restrictive transformation conditions to learn (conditions on attributes, and modification of attribute values, for instance). Another objective is to set up a complete benchmark allowing the comparison of the performance of the proposed approach with the latest approaches using machine learning techniques. Finally, the association of Deep Learning methods with reinforcement learning will be exploited to improve the expected performance.

Acknowledgments. The authors would like to thank the Centre National d'Etudes Spatiales (CNES) and Thales Alenia Space for their financial, material and above all human support. A proof of concept within Thales Alenia Space is already under study to demonstrate the performance of the approach in a real industrial context.

References
1. Soley, R.: Model driven architecture. Object Manag. Group, 12 (2000)
2. Warmer, J.B., Kleppe, A.G.: The object constraint language: getting your models ready for MDA. Addison-Wesley Professional (2003)
3. Wimmer, M., Kappel, G., Kusel, A., Retschitzegger, W., Schoenboeck, J., Schwinger, W.: From the heterogeneity jungle to systematic benchmarking. In: Dingel, J., Solberg, A. (eds.) MODELS 2010. LNCS, vol. 6627, pp. 150–164. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21210-9_15
4. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction (2016)
5. Barriga, A., Rutle, A., Heldal, R.: Personalized and automatic model repairing using reinforcement learning. In: Proceedings of the 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), Munich, Germany, pp. 175–181 (2019). https://doi.org/10.1109/MODELS-C.2019.00030
6. Eisenberg, M., Pichler, H.-P., Garmendia, A., Wimmer, M.: Towards reinforcement learning for in-place model transformations. In: Proceedings of the 2021 ACM/IEEE 24th International Conference on Model Driven Engineering Languages and Systems (MODELS), Fukuoka, Japan, pp. 82–88 (2021). https://doi.org/10.1109/MODELS50736.2021.00017
7. Jouault, F., Kurtev, I.: Transforming models with ATL. In: Bruel, J.-M. (ed.) MODELS 2005. LNCS, vol. 3844, pp. 128–138. Springer, Heidelberg (2006). https://doi.org/10.1007/11663430_14
8. Varró, D.: Model transformation by example. In: Nierstrasz, O., Whittle, J., Harel, D., Reggio, G. (eds.) MODELS 2006. LNCS, vol. 4199, pp. 410–424. Springer, Heidelberg (2006). https://doi.org/10.1007/11880240_29
9. Balogh, Z., Varró, D.: Model transformation by example using inductive logic programming. Softw. Syst. Model. 8, 347–364 (2009)
10. Wimmer, M., Strommer, M., Kargl, H., Kramler, G.: Towards model transformation generation by-example. In: Proceedings of the 2007 40th Annual Hawaii International Conference on System Sciences (HICSS 2007), Waikoloa, HI, pp. 285b–285b (2007). https://doi.org/10.1109/HICSS.2007.572
11. Dolques, X., Huchard, M., Nebut, C., Reitz, P.: Learning transformation rules from transformation examples: an approach based on relational concept analysis. In: Proceedings of the 14th IEEE International Enterprise Distributed Object Computing Conference Workshops, p. 7 (2010)
12. Saada, H., Dolques, X., Huchard, M., Nebut, C., Sahraoui, H.: Generation of operational transformation rules from examples of model transformations. In: France, R.B., Kazmeier, J., Breu, R., Atkinson, C. (eds.) MODELS 2012. LNCS, vol. 7590, pp. 546–561. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33666-9_35
13. Baki, I., Sahraoui, H.: Multi-step learning and adaptive search for learning complex model transformations from examples. ACM Trans. Softw. Eng. Methodol. 25(3), 1–37 (2016). https://doi.org/10.1145/2904904
14. Burgueño, L., Cabot, J., Li, S., Gérard, S.: A generic LSTM neural network architecture to infer heterogeneous model transformations. Softw. Syst. Model. 21, 139–156 (2021). https://doi.org/10.1007/s10270-021-00893-y
15. Ali, A., Nordin, A., Alzeber, M., Zaid, A.: A survey of schema matching research using database schemas and instances. Int. J. Adv. Comput. Sci. Appl. 8(10) (2017). https://doi.org/10.14569/IJACSA.2017.081014
16. Daniel, G., Sunyé, G., Cabot, J.: UMLtoGraphDB: mapping conceptual schemas to graph databases. In: Comyn-Wattiau, I., Tanaka, K., Song, I.-Y., Yamamoto, S., Saeki, M. (eds.) ER 2016. LNCS, vol. 9974, pp. 430–444. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46397-1_33

Enhancing Human-Robot Collaboration in the Industry 5.0 Context: Workplace Layout Prototyping

Castrese Di Marino, Andrea Rega(B), Ferdinando Vitolo, and Stanislao Patalano

Fraunhofer J-Lab IDEAS, Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
[email protected]

Abstract. According to the Industry 5.0 framework, the smart factory should combine digitalization and prediction activities with greater sustainability and human centrality within working processes. Indeed, the optimization and improvement of manufacturing processes have to meet cost criteria related to energy consumption, safety, and the implementation of new technologies. The development of better and more advanced technologies boosts Human-Robot Interaction (HRI) in manufacturing processes. However, due to the high number of safety standards about collaborative robotics and the absence of tools and specific design approaches, collaborative robots (cobots) are still widely adopted as traditional industrial robots, wasting the huge potential of a proper Human-Robot Collaboration (HRC). Layout design is a crucial activity in achieving a proper and effective HRC. This work illustrates how to transfer standard-compliant layout solutions to an interactive three-dimensional (3D) visualization environment in order to enable the digital prototyping of HRC workplaces. The possibility to automatically generate and visualize multiple layout solutions of collaborative workplaces, and then to simulate the interaction between human and robot, represents one of the most significant tasks during the design process. The HRC workplace layout is designed according to an optimization criterion, by using the full integration of a numerical computing platform with an interactive 3D visualization environment. Then, by means of the variation of the input parameters, the visualization of new layout solutions is enabled in a fast and effective way.

Keywords: Industry 5.0 · Human-robot collaboration · Workplace layout · Interactive prototyping

1 Introduction

The Industry 5.0 paradigm [1] – from mass production to mass customization – is boosting companies in rapidly developing new industrial models, based on high flexibility and adaptability of the production chain. According to that, a great number of production applications need to be performed with the involvement of human beings and machines to achieve higher productive levels while still maintaining flexibility and adaptability.


The large amount of data collected from different factories during multiple industrial applications is reused to improve many aspects of the production process, leading to increased product quality, system productivity and operator safety, while reducing time and costs through the adoption of dynamic advanced digital management systems [2]. Alongside the new tools to facilitate the interconnection between machines, great interest has been focused on the possibilities of collaboration between human beings and robots [3]. These applications used to be performed with industrial robots and human operators physically divided by safety fences, in order to ensure the safety of human workers [4]. However, due to the high number of safety regulations governing collaborative robotics applications and the absence of tools to facilitate the design of Human-Robot Collaboration (HRC) workplace layouts [5], collaborative robots (cobots) are often used as traditional industrial robots without exploiting their full potential. Indeed, HRC is characterized by a high degree of complexity: several technical issues, strictly interconnected, have to be faced to reach an efficient collaboration between human beings and cobots [6]. HRC and its production system [7] have been extensively studied over the years in order to increase the productivity of processes and ensure flexibility while guaranteeing a high degree of safety for the operator. Several studies [8–10] focused on minimizing the overall cycle time and costs, struggling with the efforts required to meet the safety requirements set by the published standards. Most of these studies have dealt with workplace layout optimization using commercial simulation software to display the results of the optimization phase in a 3D virtual space, even though such software is of very limited use during the HRC workplace design process. In fact, the proposed layout optimization methodologies are not always correlated with a method for virtual prototyping of the obtained workplace. Often, designers have to manually place the workplace elements into the visualization environment; this manual process increases the possibility of error during the layout generation phase. Furthermore, the visualization software is often totally disjoint from the software used to perform the optimization. This paper aims to provide an approach and a fully integrated tool between the numeric computing platform and the interactive visualization environment. Once the optimization criterion is chosen, the variation of some input parameters allows visualizing a new layout solution in a fast and interactive way, by means of the direct connection of the simulation and visualization environments. This allows the designer to immediately see how the changes affect the workplace under development. The use of the proposed tool allows: (i) computing efficient and standard-compliant work environments; (ii) minimizing the non-value-added activities that are usually performed when using non-integrated software. The article is organized as follows: Sect. 2 presents the state of the art, while Sect. 3 compares the enabling software. Section 4 presents the proposed approach and Sect. 5 illustrates the case study. Finally, conclusions and future works are reported in Sect. 6.

2 Previous Works and Purpose of the Current Work


Most of the research works addressing the design of the HRC workplace relegate the layout representation to a stage after an optimization process, using this phase merely to visualize the obtained results and not to dynamically improve the layout itself. The lack of integration between these two phases leads to an absence of suitable tools for effective and efficient digital prototyping of HRC workplaces. Some papers that place more emphasis on the visualization of the scene are presented below:

1. F. Ore et al. [8] – this research focuses on the optimization in the early phases of collaborative workplace design by means of simulation tools. It is based on the Pahl and Beitz design method, and it is composed of four iterative stages. A simulation tool is used to visualize all the obtainable layouts. The authors propose a systematic procedure starting from the identification of requirements, variables, and constraints. After the input setting, a preliminary design of the workplace is generated through a virtual simulation tool. However, the optimization and visualization phases are decoupled.
2. H. Lee et al. [9] – this research work focuses on the design of the HRC workplace for small-batch assembling. The work provides a process model-based HRC (PM-HRC) system to develop collaborative workplaces and easily modify them according to the customer needs. A simulation tool, i.e., Visual Components, is used to evaluate the performance of the proposed PM-HRC system. Therefore, the visualization within Visual Components is unrelated to the layout definition phase. This increases the time required to carry out an effective analysis of the solution.
3. J. Saenz et al. [10] – this research work focuses on safety-related applications in the design stages of a collaborative workplace. The paper proposes a methodology aimed at modeling the safety aspects of HRC applications related to speed and separation monitoring (SSM), in order to save time and space during the design process. The proposed method builds upon the use of Computer-Aided Design (CAD)/simulation software tools and a proposed Computer-Aided Safety (CAS) tool. Hence, this method, through the CAS tool which is directly accessible in the CAD/simulation environment, supports the designer by making safety information available before final commissioning. However, the phases of hazard identification and risk evaluation require the use of documentation and software outside of the robot visualization and simulation tool.

A comparison among the reference works above and the current work, presented in the following, is summarized in Table 1. It emerges that, in the presented research works, the tools adopted for optimization, visualization and safety assessment are disjointed. The dynamic interoperability between optimization and visualization is considered as a minor target. On the other hand, this pushes analysts and designers to perform non-value-added activities, such as exporting and importing models, producing a waste of time when switching between the required software tools. Furthermore, a preliminary virtual verification is an important goal in a very intensive productive context such as the manufacturing field. Therefore, to better perform the whole process, analysts and designers require a dynamic interaction between optimization and representation of results.


Regardless of the software tools adopted and the optimization process applied, an integration that points to a high-level approach can support the designer's work, leading the whole process in an integrated way. Finally, the adoption of a user interface or a remote API provides the possibility of automating the design process. For all these reasons, the current work addresses this lack with an approach that combines the optimization and visualization stages, offering the possibility to perform a preliminary safety assessment by means of a dynamic and interactive 3D environment.

Table 1. Comparison among the proposed approach and the state of the art.

Research works | Flexibility | Layout modification process | Integration between optimization and visualization
F. Ore et al. | Applicable to different industrial applications | No information provided | Optimization and visualization are disjointed
H. Lee et al. | Applicable only to assembly applications | Layout modification is complex and time-consuming | Optimization and visualization are disjointed
J. Saenz et al. | Applicable to different industrial applications | No information provided | Optimization and visualization are disjointed
Current work | Applicable to different industrial applications | Layout modification is quick and easy | Optimization and visualization are integrated

3 Suitable Tools for 3D Visualization

The goal of the current work is the integration of a 3D visualization tool with a numeric computing environment, i.e., MATLAB, which is used for the spatial optimization of HRC workplace layouts. For this reason, a comparison among open-source and licensed tools is carried out. The main considered aspects are: (i) data-exchange proficiency with MATLAB; (ii) the capability of editing and visualizing the layout and navigating within it; (iii) the possibility to export results; (iv) the variety and complexity of the programming languages. The following three software tools are considered:

• Simulink 3D World Editor – It is fully integrated into the MATLAB environment and provides an efficient data exchange. It allows building 3D virtual workplaces with primitive shapes and imported CAD files. However, it has a low practicality of editing and visualizing due to the large number of associated nodes. Finally, the navigation is not user-friendly.


• Blender – It is an open-source and versatile 3D environment that allows realizing animations and simulations. It presents a dedicated Application Programming Interface (API) for Python and realistic rendering capabilities. On the other hand, the high customization opportunities lead to high complexity and low usability. Moreover, it uses a non-optimized protocol with MATLAB that leads to an unstable, time-consuming data exchange, since it cannot be started by a MATLAB script. Finally, the readability of the results is not easy and there is no integrated library of commercial robots.
• CoppeliaSim – It is a software for layout prototyping, simulation, and robot-related applications. It offers the opportunity to develop code using several languages and has an optimized remote API that allows a safe and stable data exchange with MATLAB. Thus, the simulation can be driven by means of MATLAB. It is user-friendly and allows a wide customization. Finally, although it has a library of commercial robots, furniture and human models, external models can be imported and modified.

The comparison among the software tools is summarized in Table 2. CoppeliaSim appears to be the most suitable software for the purpose; therefore, it is adopted as the visualization environment. Finally, a further consideration related to importing CAD models is reported in this section. Indeed, the selected software tools allow representing three-dimensional scenes even though they do not support modelling 3D objects. Thus, it is still necessary to import all the active and passive resources that belong to a collaborative workplace. Adopting this approach, the modelling process is decoupled from the visualization, making the design more general. All the presented tools can import CAD models from the majority of the most widespread CAD software by means of exchange formats.

Table 2. Comparison among the suitable tools for 3D visualization.

Tool | MATLAB connection | Readability of output values of the simulation | Programming language(s) | Visualization and navigation inside the 3D scene
3D World Editor | Optimized | Easily readable | MATLAB | Not user-friendly
Blender | Not optimized | Hardly readable | Python | User-friendly
CoppeliaSim | Optimized | Easily readable | Lua, C++, Python, MATLAB, Java, Octave | User-friendly


4 Integration of Software Tools Aiming at Digital Prototyping of HRC Workplace Layout

The proposed approach supports HRC workplace layout optimization by reducing the convex-hull area of the workplace elements while meeting the regulatory requirements and constraints; the resulting layout is then transferred to a three-dimensional interactive environment to enable the digital prototyping of the whole workplace. Figure 1 represents the workflow, depicting the main actions organized in four stages. The first stage, named INPUTS, collects two kinds of information, coming from technical or regulatory constraints and from task allocation. Prescriptions coming from standards and the dimensions of the passive resources provide the context within which designer and analyst can take decisions. These inputs are processed in a second stage named SETUP, in which the designer's actions are set: CAD models, scripts and the optimization algorithm are fixed in order to execute the numerical optimization. A semi-automatic 3D layout generation is enabled in the third stage by means of model positioning inside the scene; the dimensions, positions and orientations of the passive resources are the main inputs coming from the previous stage. Finally, the digital prototyping is achieved thanks to the workplace visualization.

Fig. 1. The workflow related to the proposed approach.


The layout previously generated, together with the movements and constraints, leads to the result. If the best solution is not achieved, two recursive paths allow updating the process: once the CAD models are fixed, only the optimization algorithm can be modified at the second stage to achieve a different result; furthermore, if the solution is still not acceptable, all the inputs need to be revised and the process starts again. As said above, in this work MATLAB and CoppeliaSim are adopted and connected to each other to automatically generate and visualize multiple layout solutions of collaborative workplaces and to simulate the interaction between human and robot. MATLAB is the numerical computing platform employed to solve the optimization problem. An ad-hoc algorithm was developed to process the inputs, i.e., task sequence, task assignment, number and typology of involved resources, available space, and material flow strategy, in order to arrange the resources within the workplace according to an optimization criterion. For the purpose of the current work, the authors adopt the same criterion they already proposed and used in [11] to design the layout of a collaborative workplace for quality inspection purposes: the optimization problem is modelled as a nonlinear programming problem (NLP), which consists in finding the local minimum of a nonlinear objective function subject to nonlinear constraints. The objective function is based on the convex-hull area and aims at minimizing the workplace size and the distances covered by the human operator. The nonlinear constraints express the requirements and the constraints stated by the reference standards. The output of the optimization process performed by means of MATLAB is the position and the orientation of the workplace resources in a global reference frame. By using CoppeliaSim, the optimized layout is represented in an interactive three-dimensional environment. The generation of the 3D layout is carried out in a semi-automatic way, since the two software tools are connected via the UDP protocol. Figure 2 schematizes the information exchange between MATLAB and CoppeliaSim. By means of the CoppeliaSim API, through the API client, the implemented process helps the user to interactively operate with the software. The remote API is developed by Coppelia Robotics. The operation can be asynchronous, when the simulation is executed without the remote API client, or synchronous, using the remote API client. Once the connection is established and the MATLAB script is running, the positions and the orientations of the resources are computed, and the layout is automatically built into the CoppeliaSim environment. The script is built to load the required model from the CoppeliaSim library and place it inside the workplace according to the optimized coordinates. The designer can quickly edit the generated layout solution by changing one or more inputs in MATLAB or directly by means of CoppeliaSim, and then exchange the data with MATLAB via the dedicated API.

Fig. 2. Schematization of the communication protocol between the software.
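To make the optimization criterion described above concrete, the sketch below reproduces its core idea in Python with NumPy/SciPy (the authors implement the NLP in MATLAB; the number of resources, the initial layout and the simplification of each resource to a reference point are assumptions of this sketch). The objective is the convex-hull area of the layout, with the minimum mutual distance between resources imposed as a nonlinear constraint.

```python
import numpy as np
from itertools import combinations
from scipy.spatial import ConvexHull
from scipy.optimize import minimize

N_RES = 4        # e.g. inspection jig, cobot base, input buffer, output buffer
D_MIN = 0.5      # minimum distance between resources [m] (ISO 13854 value used in Sect. 5)

def hull_area(x):
    """Objective: area of the convex hull of the resource reference points."""
    pts = x.reshape(N_RES, 2)
    return ConvexHull(pts, qhull_options="QJ").volume   # in 2D, .volume is the enclosed area

def min_distance_margin(x):
    """Inequality constraints (>= 0): pairwise distances minus the minimum distance."""
    pts = x.reshape(N_RES, 2)
    return np.array([np.linalg.norm(pts[i] - pts[j]) - D_MIN
                     for i, j in combinations(range(N_RES), 2)])

x0 = np.random.default_rng(0).uniform(0.0, 3.0, size=2 * N_RES)   # initial guess
res = minimize(hull_area, x0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": min_distance_margin}])
print(res.x.reshape(N_RES, 2))   # optimized (x, y) position of each resource
```

The actual constraint set also encodes escape routes, reachability and the requirements of the reference standards, and the decision variables include orientations; this sketch only shows how the convex-hull objective and a minimum-distance constraint fit the NLP form.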


The added value of the process consists in the integration of optimization and visualization, and in the generation of quick feedback related to the spatial organization of workplaces. CoppeliaSim provides an extensive library of models, including industrial robots, cobots, human operators and a large number of furniture and fixture models. However, it can be very useful to import external CAD models to expand the existing library and enrich the visualization. Each model is associated with an embedded script in order to build a faithful simulation of the process carried out within the workplace. The scripting phase is crucial to maximize the customization opportunities provided by the software. Scripts are written in Lua, an extension programming language designed to support general procedural programming. Two types of scripts can be associated with each model, i.e., child scripts and customization scripts. They can run threaded or non-threaded: non-threaded scripts are activated by call-back functions from the main program, whereas threaded scripts are based on coroutines and controlled by using several "blocking functions" that allow thread switching at specific moments of the script. Scripts can be used to place the control points of both the operator and robot paths, using the positions of the passive resources as reference. It is important to associate well-written scripts with the models: a well-written script should contain generally valid relationships with other models in order to assure the reusability of the models for several layout solutions. Therefore, a certain number of scripts are necessarily written to enable the direct connection between MATLAB and CoppeliaSim, and the power of the integration lies in the adoption of a set of scripts, codes, and functions that automate all the operations. Furthermore, the CoppeliaSim environment enables on-screen verification of compliance with the regulatory constraints in the design of the HRC workplace layout. The generated three-dimensional layout allows the user to navigate within it in an immersive way and to check the fulfillment of the spatial constraints provided by the standards through colour warnings: the distance between resources is computed and represented by means of a green panel when it is greater than the minimum one needed to guarantee escape routes, or by a red panel when it is less; the minimum separation distance between humans and robots with regard to speed and separation monitoring is displayed as a green, yellow or red cylinder depending on the speed of the robot; the load carrying distance is shown in green if it meets the limit distance, in red if not. This feature is crucial to improve the usability of the integrated tool and reduce the time spent by the designer, who can carry out most of the assessments inside a 3D interactive environment.

5 Case Study

The proposed approach is applied to a selected case study concerning a collaborative workplace for the quality inspection of welding points executed by means of ultrasonic technology. A human and a cobot perform their tasks sharing the same working space. In particular, the human is required to carry and precisely position the workpiece on a special jig, whereas the cobot executes the quality inspection following a precise path passing through defined points. Furthermore, the human has the role of supervising the cobot operation.


The first step of the process presented in Sect. 4 concerns the arrangement of resources and the spatial optimization performed by means of MATLAB. A few considerations are made: the minimum distance to guarantee among the resources is 500 mm according to ISO 13854:2017, the ergonomic limit for the human during transportation refers to ISO 11228-1:2021, and the optimization aims at minimizing the occupied workplace footprint (convex-hull area). For this purpose, a set of scripts in MATLAB and Lua are arranged as part of the proposed outline, and a set of CAD models are imported to enrich the visualization with the correct models. In Fig. 3, the CAD models used are reported.

Fig. 3. CAD models of the involved resources.

Finally, the visualization and representation of the collaborative workplace is enabled by means of the layout generation. In the following, Fig. 4 to Fig. 6 depict three different visualizations of the optimized workplace layout, while Fig. 7 depicts the human navigation path during his task execution. In Fig. 4 the safety area is displayed according to the formula provided in ISO/TS 15066:2016 related to the condition Speed and Separation Monitoring (SSM) [12]:

Sp(t) = vh · (Tr + Ts) + vr(t) · Tr + Ss + C + Zd + Zr    (1)

The SSM condition evolution is presented by means of three situations. A cylindrical volume is placed in correspondence of the inspection jig, in accordance with the solution proposed by Saenz et al. [10]. According to the relative position of the operator, the cobot changes its speed:

Fig. 4. Separation distance between human and cobot represented by means of a: a) green cylinder when the robot works at its maximum speed; b) yellow cylinder when the robot works at 50% of the maximum speed; c) red cylinder when the robot works at 10% of the maximum speed.

vr-green = vr-MAX,  vr-yellow = 0.5 · vr-MAX,  vr-red = 0.1 · vr-MAX    (2)

where vr-MAX is the maximum robot speed. The different colors represent different separation distances characterized by different speed and risk conditions. In Fig. 5, the distance covered by the operator transporting the workpiece is highlighted. According to ISO 11228-1:2021, the limit on the transported cumulative mass is considered. The green line indicates a compliant condition (Fig. 5.a), whereas the red line indicates a situation dangerous to health (Fig. 5.b). The lines connect the centres of the passive resources among which the operator moves.

Fig. 5. Carrying distance visualization: a) the computed value of the carrying distance meets the threshold limit; b) the computed value of the carrying distance doesn’t meet the threshold limit.

Finally, Fig. 6 shows the minimum distance between two generic passive resources. A green or red panel appears depending on whether the distance is greater or less than the minimum distance provided by the standards.

Fig. 6. Escape routes visualization: a) green panels indicating that the minimum distance between resources is guaranteed; b) red panel showing that the resource placement doesn’t meet the required minimum distance.

Furthermore, Fig. 7 presents an example of the user interface, together with the path of the human. It is possible to simulate the movement and the path inside a scene.
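As an illustration of the on-screen checks described in Sect. 4, the following Python sketch evaluates the protective separation distance of Eq. (1) and the speed-scaling rule of Eq. (2) to decide the colour of the separation cylinder. The parameter values and the exact decision logic are assumptions made for the example, not values taken from the paper.

```python
def protective_separation(v_h, v_r, T_r, T_s, S_s, C, Z_d, Z_r):
    """Protective separation distance Sp(t) as in Eq. (1) (ISO/TS 15066 SSM condition)."""
    return v_h * (T_r + T_s) + v_r * T_r + S_s + C + Z_d + Z_r

def robot_speed_and_colour(d_hr, v_r_max, **ssm_params):
    """Scale the robot speed (Eq. 2) according to the current human-robot distance d_hr."""
    for colour, factor in (("green", 1.0), ("yellow", 0.5), ("red", 0.1)):
        v_r = factor * v_r_max
        if d_hr >= protective_separation(v_r=v_r, **ssm_params):
            return colour, v_r          # first speed level whose Sp fits the available distance
    return "red", 0.1 * v_r_max         # closer than every threshold: slowest mode

# Illustrative values only: human speed 1.6 m/s, reaction/stopping times in s, uncertainties in m.
params = dict(v_h=1.6, T_r=0.1, T_s=0.3, S_s=0.2, C=0.1, Z_d=0.05, Z_r=0.05)
print(robot_speed_and_colour(d_hr=1.2, v_r_max=1.0, **params))   # e.g. ('green', 1.0)
```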


Fig. 7. User interface.

6 Conclusions and Future Works

The paper presented a workflow and an integration of software tools for workplace layout optimization and three-dimensional visualization, allowing the virtual prototyping of human-robot collaborative workplaces. By using ad-hoc algorithms, the optimized solutions are always compliant with the reference standards about both safety and ergonomics; moreover, the integration with a visualization tool allows the designer to check the validity of the obtained results and saves the time otherwise wasted in import/export operations. A workflow aiming at integrating the numerical computing platform and a visualization tool is proposed and detailed. Based on the proposed workflow, the interactive visualization of an optimized layout allows a critical analysis of the results and the identification of risky situations in the execution of the planned tasks. Future work involves the improvement and extension of the integration with specific ergonomic and cycle-time analyses. A further extension requires sending information back to the numerical computing platform to improve the optimization.

Acknowledgment. This study was developed with the economic support of MUR (Italian Ministry of Universities and Research) under the remit of project ARS01_00861, “Integrated collaborative systems for smart factory – ICOSAF”.

References 1. Maddikunta, P.K.R., et al.: Industry 5.0: A survey on enabling technologies and potential applications. J. Ind. Inf. Integr. 26, 100257 (2021) 2. Malik, A.A., Brem, A.: Digital twins for collaborative robots: A case study in human-robot interaction. Robot. Comput.-Integr. Manuf. 68, 102092 (2021)


3. Villani, V., Pini, F., Leali, F., Secchi, C.: Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 55, 248–266 (2018) 4. Matheson, E., Minto, R., Zampieri, E.G.G., Faccio, M., Rosati, G.: Human-robot collaboration in manufacturing applications: A review. Robotics 8, 100 (2019) 5. Di Marino, C., Rega, A., Fruggiero, F., Pasquariello, A., Vitolo, F., Patalano, S.: A graphbased multi-level framework to support the designing of collaborative workplaces. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 641–649. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_64 6. Hashemi-Petroodi, S.E., Thevenin, S., Kovalev, S., Dolgui, A.: Operations management issues in design and control of hybrid human-robot collaborative manufacturing systems: A survey. Annu. Rev. Control. 49, 264–276 (2020) 7. Rega, A., Di Marino, C., Vitolo, F., Patalano, S., Lanzotti, A.: Towards the upscaling of biomanufacturing process enhanced by human-robot collaboration. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 615–622. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_61 8. Ore, F., Sánchez, J.L.J., Wiktorsson, M., Hanson, L.: Design method of human–industrial robot collaborative workstation with industrial application. Int. J. Comput. Integr. Manuf. 33, 911–924 (2020) 9. Lee, H., Liau, Y.Y., Kim, S., Ryu, K.: Model-based human robot collaboration system for small batch assembly with a virtual fence. Int. J. Precis. Eng. Manuf.-Green Technol. 7(3), 609–623 (2020). https://doi.org/10.1007/s40684-020-00214-6 10. Saenz, J., et al.: Methods for considering safety in design of robotics applications featuring human-robot collaboration. Int. J. Adv. Manuf. Technol. 107(5–6), 2313–2331 (2020). https:// doi.org/10.1007/s00170-020-05076-5 11. Rega, A., et al.: Collaborative workplace design: A knowledge-based approach to promote human-robot collaboration and multi-objective layout optimization. Appl. Sci. 11, 12147 (2021) 12. International Organization for Standardization ISO/TS 15066:2016 (2016)

Requirements Management in Master Model Development: A Case Study in Fusion Engineering

Francesca Giovanna Lanzotti1,2,4(B), Domenico Marzullo3,4, Vito Imbriani4, Giuseppe Mazzone5, Jeong-Ha You6, and Giuseppe Di Gironimo1,4

1 University of Naples Federico II, DII, P.le Tecchio 80, 80125 Napoli, Italy

[email protected] 2 University of Padua, 35122 Padova, Italy 3 Department of Engineering and Architecture, University of Trieste, Via Alfonso Valerio, 6/1, 34127 Trieste, Italy 4 Consorzio CREATE, Via Claudio 21, 80125 Napoli, Italy 5 ENEA, Fusion and Technology for Nuclear Safety and Security Department-ENEA C. R. Frascati, via E. Fermi 45, 00044 Frascati, Italy 6 Max Planck Institute for Plasma Physics, Boltzmann Str. 2, 85748 Garching, Germany

Abstract. This research focuses on the requirements management phase in the conceptual stage, following a Systems Engineering approach. The development of a parametric associative master model is useful to implement requirements and available knowledge in the CAD model. The vertical decomposition from higher-level requirements to lower-level requirements is carried out. The decomposition of design parameters follows the mapping process according to Axiomatic Design principles. The relations between functional requirements and design parameters enable the development of the parametric associative master model. Modifications related to requirements can be automatically propagated to the downstream geometries, maintaining the relationships among geometrical features in the following design steps in order to choose the optimal candidate. The case study deals with the mechanical design of nuclear fusion devices, focusing on the improvement of the concept design of the neutron shielding plates, a divertor subsystem added to satisfy a high-level requirement on the divertor shielding performance towards the vacuum vessel. Among several variants, a few feasible configurations are generated. Keywords: Systems Engineering · Requirements management · DEMO · Divertor · Nuclear fusion engineering

1 Introduction

Concept and engineering design of large systems is a huge challenge due to their size and complexity [1]. Proper tools and methods are needed to enable the activities of geographically dispersed and specialized design teams. During the design of such systems, global cooperation allows project goals to be achieved successfully.


Systems Engineering (SE) principles are consistently adopted together with specific design methods, allowing for a systematic approach to design from the early phases of product development [2] as an effective way to manage complexity and change. The large number of conflicting and interrelated requirements of a robust system has to be balanced and formalized [5]. There is unanimous agreement on the relationship between the satisfaction of stakeholder requirements and the success of a project. Requirements engineering, a branch of SE, is becoming widely and increasingly practiced in mechanical design. Requirements management is not considered only as the first phase to carry out and complete at the outset of the system development, but it is connected to the whole product lifecycle for the achievement of project goals [3, 4]. The Axiomatic Design (AD) methodology provides designers with a logical method to structure their thoughts in the early design stage [6]. The design purpose is always stated in the functional domain, while the physical solution is generated in the physical domain. The mapping process consists of linking these two independent domains at every hierarchical level of the design process to provide design parameter specifications from the higher qualitative level to the lower quantitative level. The decomposition process helps the design documentation, the information traceability, the identification of the impact of changes and the achievement of design objectives. Recent research has underlined that the principal cause of troubled projects concerns requirements that are often unclear, ambiguous, imprecise and contradictory in the early design stage. When a requirement changes, it should be clearly linked to the corresponding designed feature, ensuring that the final product contributes effectively to the customer objectives. After having identified the initial requirements and then the product functions and architectures, it is necessary to identify 3D shapes and dimensions in order to verify interfacing requirements with subsystems and to carry out preliminary analyses. Concept design starts with high-level requirements and then continues with a high-level description of a conceptual solution. The master model definition provides a workflow that leads the designer from the requirements definition to the generation of product variants. According to this design methodology, the associativity between Computer Aided Design (CAD) and Computer Aided Engineering (CAE) environments is kept during the concept design [7, 8]. However, the requirements management phase is not sufficiently emphasized and described in detail. The Iterative and Participative Axiomatic Design Process (IPADeP) provides systematic thinking and a CAD-centric design approach to support design activities in the early conceptual design stage, improved by a Parametric Associative (PA) model [9, 10]. A PA model is a computer-based description of a geometrical model that depends on non-geometrical parameters. IPADeP is an iterative process for project optimization that avoids the traditional DAER (Design-Analysis-Evaluation-Redesign) model [11]. This design methodology is driven by requirements without considering the relationship between the design parameters defined by AD and the parameters of the geometrical model. This research work focuses on recent improvements implemented in the master model workflow, adopting Axiomatic Design principles to define and manage system requirements. The case study follows the new procedure, dealing with the mechanical design of a nuclear fusion device. The conceptual design of a divertor subsystem, added to satisfy a high-level requirement of the whole system, is improved and discussed.


2 Master Model Procedure

The development of a master model concept using a top-down logic for the design of large and complex products should follow the new workflow shown in Fig. 1.

Fig. 1. New master model concept definition workflow and tools.

The master model workflow consists of several phases, each characterized by a proper tool to allow the design and development activities:

I. Requirements definition;
II. Choice of design parameters;
III. Parameterization;
IV. Development of a parametric 3D model for each solution;
V. Generation of geometrical variants;
VI. Multiphysics simulations.

Major relevance is given to the requirements definition phase by adopting Axiomatic Design principles to improve the previous Master Model definition workflow [7]. The parameterization phase is also deepened in order to build a parametric associative model.

2.1 Requirements and Design Parameters Identification

Requirements definition (first phase in Fig. 1) consists in identifying functional requirements and input constraints provided by the specialized design teams involved in the design of each subsystem and the related interfacing systems. According to Axiomatic Design principles, functional requirements are formally defined as the minimum set of independent requirements that completely characterizes the design objective for a specific need. Starting with stakeholder requirements, the vertical decomposition proceeds with system and subsystem requirements until the definition of component requirements. The relationships among requirements are maintained by tracing them across the different levels.


Fig. 2. Vertical decomposition and mapping process.

Requirements management and the concept development process are strictly interrelated in the early stage of the design development. The vertical decomposition, including horizontal domains, vertical hierarchies, zigzagging and mapping, is carried out for a specific system architecture (Fig. 2). The design process is represented as a mapping operation, moving from a higher level to a lower level of FRs and DPs. The nature of the mapping between a given FR vector and a DP vector with design matrix [A] is given by the design equation (Eq. 1):

{FRs} = [A]{DPs}    (1)

The definition of the DP matrix and the mapping process onto the FRs clearly show the DPs and FRs of the system, allowing their identification. Depending on the design matrix and on the number of FR components (nk) and DPs (mk) at level k, the project can be coupled, uncoupled, decoupled or even redundant [6]. Indeed, this process helps to identify which DPs might be optimized to satisfy the FRs. The development of a complete solution to a given problem proceeds by mapping the FRs from the functional domain to the DPs in the solution domain. The DPs of alternative concepts are defined and documented in order to create a physical solution that satisfies the FRs.

2.2 Parametric Digital Model Development

A model specified in terms of DPs is developed; a proper small set of parameters driving the 3D geometry (namely, dimensions or properties that are most likely to be changed during the design process) has been identified in the conceptual design stage (second phase in Fig. 1). Then, the relationship between design parameters and CAD model parameters has been highlighted in order to show how they are related to the DPs defined by Suh [6]. A parametric high-level CAD solution has been developed (third and fourth phases in Fig. 1). Parameter modifications are automatically propagated to downstream applications and geometries. In the concept design, parameterization is very useful to create relationships between different dimensions. When the value of a parameter is modified, changes are automatically propagated to the other dimensions that have a connection with the one previously modified. In this way, relationships are kept among geometrical objects and features throughout the design process steps. The digital model is easy to maintain and change, which matters given the complexity of large projects that require computer-aided applications for both modelling and structural assessments.


2.3 Geometrical Variants Generation and Verification

The generation, comparison and evaluation of different plausible solutions play an essential role in satisfying functional requirements with minimum information, with the aim of finding an acceptable solution (fifth phase in Fig. 1). A parametric associative master model has to be well-structured, keeping a strong connection, the so-called associativity, with the Finite Element Method (FEM) environment. The verification analyses represent a crucial step for communication among design teams and for understanding concept problems, feasibility issues, and the identification of possible interfaces (sixth phase in Fig. 1). These aspects are strictly connected because the associativity between the CAD and CAE environments makes the simulations quicker, easier and smarter than in the past. When the CAD model changes, the same loads and boundary conditions can be applied to different variants, without rebuilding the entire FEM simulation model. An idealization process, involving detail suppression and geometrical adaptations, is often necessary, so two different models are maintained for the same product, wasting time and effort. Advanced Product Lifecycle Management (PLM) platforms provide integrated CAD-CAE tools to aid in the design, analysis, and manufacture of products, becoming suitable for complex designs that concurrently involve many different aspects.

3 Case Study: Design Progress of a DEMO Divertor Subsystem

The case study deals with the mechanical design of nuclear fusion devices. DEMO, ITER's successor, is the tokamak that represents the key step to demonstrating the feasibility of energy production from nuclear fusion reactions by supplying electricity to the grid (Fig. 3). The work focuses on the improvement of the conceptual design of a divertor subsystem following the new master model definition workflow. It is included within the framework of the conceptual design activities of the DEMO divertor (Fig. 4), following the results of the pre-conceptual design stage concluded in 2020 [12, 13]. The divertor is one of the most challenging in-vessel components, whose design optimization requires an advanced multiphysics approach [14]. Among its main functions, this system (Fig. 5) has to withstand a huge amount of thermal power from the plasma and shield the vacuum vessel and magnets against nuclear loads [15]. The neutron shielding plates (Fig. 5) have been added to satisfy a high-level requirement on the divertor shielding performance towards the vacuum vessel. The design of the neutron shielding plates has been carried out starting from two issues:

1. The subsystem is a container under internal pressure and, according to nuclear rules, it should pass the leak test under each type of load and load combination.
2. The subsystem must be integrated into the cassette body with full-penetration and continuous welding.

Following the prescriptions [16], the welded joints at the boundary between water and primary vacuum shall be performed from the primary vacuum side. The previous design of the neutron shielding plates has to fulfil this type of welding in order to be integrated in the divertor cassette body. It is impossible to weld the pipes to the cassette due to the small


Fig. 3. DEMO 3D CAD model

Fig. 4. Lower port base line with Divertor CAD model.

Fig. 5. DEMO Divertor 2020 (single-circuit option) with neutron shielding plates.

distance between two pipes, the welding bead cannot be continuous because the welding edge is not accessible. Each of the twenty holes along the transversal section needs its own inlet or outlet pipe. The new master model workflow (Fig. 1) has then been followed step by step. The first phase starts with defining the high-level initial Functional Requirements (FRis) and Input Constraints (ICs), as shown in Table 1.

Table 1. Neutron Shield (NS) - Initial functional requirements and input constraints.

FRi ID | FRi Description
FRi 1 | Limit irradiation damage to the Vacuum Vessel stainless steel below acceptable levels
FRi 2 | Allow for vacuum pumping performance
FRi 3 | Show the properties of a robust system - withstand thermal and mechanical loads during normal and off-normal events

IC ID | IC Description
IC 1.1 | NS shall be compatible with vacuum hole and cassette radial dimension
IC 1.2 | NS shall have the same operational life of DEMO divertor

Then vertical decomposition of high level Functional Requirements (FRs) is applied according to Axiomatic Design.

Table 2. First level of vertical decomposition and mapping. Neutron shielding plates.

ID | FR | DP | DP type
1 | The divertor system must reduce the neutron flux if the irradiation damage at the Vacuum Vessel behind the divertor is greater than 2.75 dpa during the whole DEMO operational life | Nuclear shielding performance | I
1.1 | The subsystem shall be easy to assembly | Simple shape | II
1.2 | The subsystem shall be cooled down | Serial coolant circuit | III
1.3 | The subsystem shall assure structural integrity | Minimum thickness | III
1.4 | The subsystem shall allow for vacuum pumping | (a) Vacuum pumping performance (b) Overall dimensions | III

The mapping process is shown in Table 2, where the first level Design Parameters (DPs) are identified, as the second phase of the Master Model procedure shows. The design matrix (Eq. 2) is useful to verify the goodness of the solution and the independence among FRs and DPs.

{FR1.1}   [X 0 0 0] {DP1.1}
{FR1.2} = [0 X 0 0] {DP1.2}    (2)
{FR1.3}   [0 0 X 0] {DP1.3}
{FR1.4}   [0 0 0 X] {DP1.4}
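As a small aside, such a design matrix can be checked automatically. The Python sketch below (not part of the paper's toolchain) classifies a boolean design matrix as uncoupled, decoupled or coupled, and is applied here to the diagonal matrix of Eq. (2) and to the coupled matrix reported later in Eq. (3).

```python
import numpy as np

def classify_design_matrix(A):
    """Classify a square boolean design matrix per Axiomatic Design:
    uncoupled (diagonal), decoupled (lower or upper triangular), else coupled."""
    A = np.asarray(A, dtype=bool)
    if A.shape[0] != A.shape[1]:
        return "redundant or incomplete (non-square)"
    if np.array_equal(A, np.diag(np.diag(A))):
        return "uncoupled"
    if np.array_equal(A, np.tril(A)) or np.array_equal(A, np.triu(A)):
        return "decoupled"
    return "coupled"

# Design matrix of Eq. (2): each FR depends on exactly one DP.
A_eq2 = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
print(classify_design_matrix(A_eq2))   # -> "uncoupled"

# Design matrix of Eq. (3): off-diagonal couplings make it coupled.
A_eq3 = [[1, 0, 0, 0],
         [0, 1, 1, 1],
         [0, 1, 1, 1],
         [0, 0, 1, 1]]
print(classify_design_matrix(A_eq3))   # -> "coupled"
```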


The second level of vertical decomposition of FR 1.1 and FR 1.2 is shown in Table 3.

Table 3. Second level of vertical decomposition and mapping. Neutron shielding plates.

ID | FR | DP | DP type
1.1 | The subsystem shall be easy to assembly | Simple shape | II
1.1.1 | The plates shall be in minimum number to reduce material | (a) Overall sizes (b) Position inside the Cassette Body | III
1.1.2 | The plates shall shield the Vacuum Vessel from neutron damage for at least 2.75 dpa | Percentage of Steel (Eurofer97) | IV
1.2 | The subsystem shall be cooled down | Pressure drop | III
1.2.1 | The plates shall be able to exhaust thermal power | Percentage of water | IV
1.2.2 | The manifolds shall convey the water through the holes | (a) Diameter of holes (b) Number of holes (c) One single inlet and one single outlet per neutron shield | IV

The design matrix (Eq. 3) is not diagonal at this level of decomposition. The Independence Axiom is not satisfied, while the Information Axiom has to be considered for the optimal concept selection [5]. Hence, FRs and DPs are not independent, and this is an important hint for the parameterization phase.

{FR1.1.1}   [X 0 0 0] {DP1.1.1}
{FR1.1.2} = [0 X X X] {DP1.1.2}    (3)
{FR1.2.1}   [0 X X X] {DP1.2.1}
{FR1.2.2}   [0 0 X X] {DP1.2.2}

The purpose is to generate the geometrical model of the concept that satisfies the FRs and ICs. The position and the diameter of the holes have to be defined carefully, considering that they cannot be positioned symmetrically over the total area of the rectangular sector. Each plate is radially divided into two symmetrical parts, each of them containing the same number of holes. The first constraint is the dimension of the two symmetrical manifolds in both halves of the plate. The manifold at the beginning of the plate has the role of collecting the coolant coming from the inlet tubes and conveying it to the holes, while the manifold at the end of the plate collects the coolant from the holes and conveys it to the outlet tubes. The manifolds are characterized by the same dimensions, split in two symmetrical halves. The holes cannot be positioned in the middle of the transversal sector, but they can be equidistributed considering the perimeter of each manifold. According to nuclear analyses, recently carried out by the WPDIV team,


in order to attain nuclear shielding, the percentage by volume of Eurofer and water to be respected is 70% H2O and 30% Eurofer. The radius and the diameter have to be chosen accordingly, and this is one more constraint to be considered. The total volume of the two channels, considered as two cylinders, divided by the volume of the parallelepiped is around 0.70 in the optimal case. Since the third dimension is constant for both numerator and denominator, the length has been simplified and the considerations have been made in the transversal section. To generate and evaluate the product concepts, a new solution is designed using a top-down modelling approach. Starting from a set of geometrical references of the product, the subsystem is designed with respect to the whole assembly, considering the relationships between the parts, in order to achieve the maximum degree of freedom for making changes in further steps of the design process. In order to choose the best configuration of the holes, the ratio in Eq. 4 is calculated considering several values of the diameter, where nH is the number of holes:

ratio = nH · (π · D² / 4) / (b · h)    (4)

Moreover, the six parameters that characterize the third step of the master model workflow are set in order to place the initial sketch in the right position and to realize the rectangular matrix of holes. Two of them are the vertical and horizontal distances between two holes, or between the edge of the manifold and a hole (Eqs. 5 and 6):

sV = (hM − nr · D) / (nr + 1)    (5)

sH = (bM − nc · D) / (nc + 1)    (6)

where hM is the height of the manifold, bM is the base of the manifold, nr is the number of rows, nc is the number of columns and D is the diameter. The position of the centre of the circular sketch is calculated in Eqs. 7 and 8, adding the horizontal or vertical space (respectively sH and sV) to the radius r. The centre-to-centre distance between two holes in the horizontal and vertical directions is evaluated in Eqs. 9 and 10.

xH = sH + r    (7)

yV = sV + r    (8)

Interaxis_c = 2 · r + sH    (9)

Interaxis_r = 2 · r + sV    (10)
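The following Python sketch shows how Eqs. (4)-(10) can be evaluated for a candidate hole configuration. It is illustrative only: the manifold dimensions and the hole grid used in the example are assumed values, not taken from the paper, while the section dimensions reproduce those used in Eq. (11).

```python
import math

def hole_layout(n_holes, D, hM, bM, n_rows, n_cols, b, h):
    """Evaluate a candidate hole configuration using Eqs. (4)-(10).

    n_holes : total number of holes, D : hole diameter,
    hM, bM  : manifold height and base, n_rows, n_cols : hole grid,
    b, h    : transversal-section base and height (for the ratio of Eq. 4)."""
    r = D / 2.0
    ratio = n_holes * math.pi * D**2 / 4.0 / (b * h)        # Eq. (4)
    sV = (hM - n_rows * D) / (n_rows + 1)                   # Eq. (5)
    sH = (bM - n_cols * D) / (n_cols + 1)                   # Eq. (6)
    xH, yV = sH + r, sV + r                                 # Eqs. (7)-(8)
    interaxis_c, interaxis_r = 2 * r + sH, 2 * r + sV       # Eqs. (9)-(10)
    feasible = sV > 0 and sH > 0                            # holes must fit inside the manifold area
    return dict(ratio=ratio, sV=sV, sH=sH, xH=xH, yV=yV,
                interaxis_c=interaxis_c, interaxis_r=interaxis_r, feasible=feasible)

# Previous configuration of Eq. (11): 20 holes of 36 mm diameter in a 1000 x 50 mm section
# (hM, bM and the hole grid are assumed here for the sake of the example).
print(hole_layout(n_holes=20, D=36.0, hM=50.0, bM=450.0,
                  n_rows=1, n_cols=10, b=1000.0, h=50.0)["ratio"])   # ~0.41, as in Eq. (11)
```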

When those values and, possibly, also the percentage of H2O/SS change, the radius, the thicknesses and the two interaxes change simultaneously. Using the parametric model, it is possible to evaluate the diameter of each configuration and assess its feasibility without manually calculating any value. In order to compare different configurations


and choose the best in terms of shielding performance, some options need to be considered. Firstly, the percentage of H2O/SS in the previous configuration is evaluated in Eq. 11, excluding the space of the central manifolds:

(20 · π · r²) / (b · h) = (20 · π · 18²) / (1000 · 50) = 0.41    (11)

For that configuration, neutronic, fluid-dynamic, thermal and mechanical analyses have already been carried out, which is the reason why it was decided to start the simulations with the same percentage of H2O/SS and then modify this value to maximize it. Therefore, a few feasible configurations are the candidates for the next comparison. In the second comparison the percentage of H2O/SS is set to 0.5. Hence, only two configurations are selected among all the initial candidates. The number of holes, the value of the diameter and the percentage of H2O/SS are reported for each configuration in Table 4. Moreover, the vertical distance between two holes, which is the same as between the manifold edge and the closest hole, and the horizontal distance between two holes, which is the same as between the manifold edge and the closest hole, are compared. Finally, the configurations in Table 4 have been selected as the extreme cases and the 3D CAD models (Figs. 6 and 7) have been developed, ready for further Multiphysics analyses (fourth phase).

Table 4. Two best configurations.

Configuration | Holes number | Diameter (mm) | H2O/SS (%) | Vertical Distance (mm) | Horizontal Distance (mm)
I | 4 | 47.7 | 41 | 1.2 | 11.6
II | 20 | 23.5 | 50 | 0.98 | 2.1

Fig. 6. Configuration I, 4 holes. 3D model - preliminary drawing.

A parametric master model has been developed adopting a CAD-centric design approach starting with design requirements and constraints and following step by step the master model workflow. Among several variants two feasible configurations, the worst


Fig. 7. Configuration II, 20 holes. 3D model – preliminary drawing.

cases, have been chosen to be analysed and compared in the future. The solutions have been designed with the aid of parametric CAD software from Dassault Systemes, CATIA V5. This work has been carried out at DII, the Department of Industrial Engineering of the University of Naples Federico II, member of the CREATE consortium, in close collaboration with the ENEA Research Centre of Frascati and the Max Planck Institute for Plasma Physics, within the EUROfusion Horizon Europe research Framework Programme FP9 (2021–2027).

4 Conclusions and Future Works

The work discusses improvements in the development of a master model definition workflow as a systematic process in concept design. The described workflow seems to be suitable for the design of complex and large systems. The requirements definition includes the vertical decomposition and the mapping process between functional requirements and design parameters, according to Axiomatic Design principles. The design parameters, as defined by Suh, help to define the parameters of the digital model. A well-parameterized model allows the optimization of design parameters defined at every hierarchical level. The master model helps to find a correlation between a design method and an efficient CAD model. It has been adopted in the conceptual design activities of the DEMO divertor subsystem, from a few high-level requirements to some high-level conceptual solutions. The main high-level requirements of the neutron shielding plates have been investigated, underlining their importance in the cassette body. Starting with the requirements management of this subsystem, the vertical decomposition of functional requirements and the identification of the design parameters have shown how the lower-level requirements are interrelated with each other. This means that from a lower design parameter it is possible to go back to more than one functional requirement. This is a key step towards the development of a master model concept of the neutron shielding plates because a single design parameter can be connected to many functional requirements. Moving from the most recent model of the plates, design issues have been investigated with the aim of improving the solution. A 3D CAD model with a set of parameters has been developed. Among a large number of possible variants, only two configurations have been selected as the extreme cases to be ready for Multiphysics analyses. Both feasible variants are


potentially able to satisfy the nuclear and vacuum pumping performances and to solve the newly highlighted design issues. The geometrical model will allow sequential Multiphysics analyses to be carried out, such as fluid-dynamic, neutronic, electromagnetic transient, thermal and structural analyses. 3Dexperience by Dassault Systemes, a PLM platform that represents a step forward in this domain, is a possible candidate to implement this approach, to shorten the time needed to complete simulations during the verification and validation phases, and to share design progress in real time.

Acknowledgement. This work has been carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 — EUROfusion). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. The authors thank the anonymous reviewers for the helpful comments.

References 1. Di Gironimo, G., Lanzotti, A.: Designing in VR. Int. J. Interact. Des. Manuf. 3(2), 51–53 (2009) 2. Marzullo, D., Bachmann, C., Coccorese, D., Di Gironimo, G., Mazzone, G., You, J.H.: Systems engineering approach for pre-conceptual design of DEMO divertor cassette. Fusion Eng. Des. 124, 649–654 (2017) 3. Hull, E., Jackson, K., Dick, H.: Requirements Engineering, 4th edn. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-61073-3 4. Haskins, C., et al.: Systems Engineering Handbook, vol. 9. INCOSE, San Diego (2006) 5. Marzullo, D., Di Gironimo, G., Lanzotti, A., Mozzillo, R., Tarallo, A.: Requirements engineering in complex systems design. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 658–667. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_66 6. Suh, N.P.: The Principles of Design. Oxford University Press, New York (1990) 7. Mozzillo, R., et al.: Development of a master model concept for DEMO vacuum vessel. Fusion Eng. Des. 112, 497–504 (2016) 8. Hoffman, C.M., Joan-Arinyo, R.: CAD and the product master model. CAD Comput. Aided Des. 30(11), 905–918 (1998) 9. Di Gironimo, G., Lanzotti, A., Marzullo, D., Esposito, G., Carfora, D., Siuko, M.: Iterative and participative axiomatic design process in complex mechanical assemblies: Case study on fusion engineering. Int. J. Interact. Des. Manuf. (IJIDeM) 9(4), 325–338 (2015). https://doi. org/10.1007/s12008-015-0270-7 10. Di Gironimo, G., et al.: Concept design of the DEMO divertor cassette-to-vacuum vessel locking system adopting a systems engineering approach. Fusion Eng. Des. 94, 72–81 (2015) 11. Wang, L., Shen, W., Xie, H., Neelamkavil, J., Pardasani, A.: Collaborative conceptual design - State of the art and future trends. CAD Comput. Aided Des. 34, 981–996 (2002) 12. You, J.H., et al.: Conceptual design studies for the European DEMO divertor: Rationale and first results. Fusion Eng. Des. 109, 1598–1603 (2016) 13. Marzullo, D., et al.: Progress in the pre-conceptual CAD engineering of European DEMO divertor cassette. Fusion Eng. Des. 146, 942–945 (2019) 14. Mazzone, G., et al.: Eurofusion-DEMO divertor-cassette design and integration. Fusion Eng. Des. 157, 111656 (2020)


15. You, J.H., et al.: Divertor of the European DEMO: Engineering and technologies for power exhaust. Fusion Eng. Des. 175, 113010 (2022) 16. Pearce, R., Worth, L.: ITER Vacuum Handbook, ITR-19–004. ITER Organization, Creative Commons (2019)

Towards a Knowledge-Based Additive Manufacturing Approach Christelle Grandvallet(B) , Franck Pourroy, and Frédéric Vignat Univ. Grenoble Alpes, CNRS, G-SCOP Laboratory, 46 Avenue Félix Viallet, 38031 Grenoble Cedex 1, France [email protected]

Abstract. Since decades, additive manufacturing (AM) is sparking interest in industry and research laboratories. The assets of this process and its associated technologies no longer needs to be proven: part shape freedom, Buy-to-Fly ratio reduction, integration of lattice structures, etc. As these new technologies implies new expertise, new profiles have then emerged in the value chain. Among them, the CAD/CAM (Computer Aided Design & Manufacturing) engineers become a key player. However, these experts still need training as, for some technologies, they still proceed by trial and error, therefore processing rules would merit harmonization. This research strives to understand the practices of AM designers, through the analysis of their actions and their underlying selection criteria. To do so, the knowledge they produce and mobilize in a work situation is the start of this work. This article proposes then a knowledge-based AM approach to represent key concepts and to model action-oriented knowledge. After a state of the art related to knowledge elicitation, elicitation techniques applied to the AM context are explained. A knowledge model applied to Electron Beam Melting process is presented subsequently as a solution. The methodology combines procedural and conceptual knowledge, highlights action rules. It has the benefit of being a dynamic decision-making support for CAD/CAM engineering, as well as modular and easy to update. It could likewise be applicable to manufacturing activities but also to many processes. Keywords: Additive manufacturing knowledge · Relational knowledge model · Action rules

1 Introduction

1.1 Knowledge Modelling: Opportunities and Needs for Design and Manufacturing

Additive manufacturing (AM) is becoming more and more popular in industry and research laboratories for its undeniable advantages such as part shape freedom, Buy-to-Fly ratio reduction, integration of lattice structures, etc. These new practices require robust approaches so as to optimize the production deadlines, the costs and,


C. Grandvallet et al.

above all, the part quality. In this respect, the CAD/CAM (Computer Aided Design & Manufacturing) engineer plays a major part in the AM process. This research strives to understand the practices of AM designers, through the analysis of their actions and of their underlying selection criteria. To do so, the knowledge they produce and mobilize in a work situation is the starting point of this work, for the following reasons:

– Knowledge is personal and resides in the individual’s head [1]; it is distinguished from information and data [2].
– Knowledge is hard to locate as it involves many dimensions (Fig. 1). Tacit knowledge is unconsciously the most deeply rooted in individuals [3].

Fig. 1. Example of knowledge dimensions to consider.

– In companies, crucial knowledge has the most significant impact on productivity [4] and is thus critical to elicit. As a reminder, knowledge elicitation is a task that consists in collecting information from experts [5].
– Knowledge is however dynamic and therefore undergoes transformations [6]. Knowledge under construction, not yet mature, is hard to externalize for formalization [7].
– Tacit knowledge that cannot be captured or converted can emerge through social interactions [8].
– The formalization process of knowledge under construction requires the use of elicitation tools and techniques.
– In a domain with emerging technologies such as AM, experts’ knowledge can be elicited collectively and iteratively, which allows practice harmonization [9].
– A Knowledge Management (KM) model is necessary in order to store the captured information, analyze and structure it, before spreading it to the concerned people and guiding them in their practice.

KM approaches in the domain of additive manufacturing have been the focus of some research works. For instance, Dinar and Rosen [10] propose a detailed OWL/RDF ontology to represent the DFAM domain and break up knowledge into entities, object properties, and classes. Sanfilippo et al. [11] present a high-level ontology-based representation for industries using various AM machines and materials. It classifies data about technologies, machines, materials, products etc., but does not use knowledge based on experts’



practices. Lu et al. [12] have built a model that self-improves AM knowledge based on manufacturing data set analysis and prediction accuracy. Other authors try to use machine learning approaches to get large data sets for general classification [13]. Again, those approaches do not take advantage of existing expert knowledge. Moreover, they do not propose decision-making support for CAD/CAM engineering activities. Therefore, applied to AM context, to use a decision-making support method based on experts’ practice is of high interest as it can help understanding and predicting the consequences of experts’ actions. In this regard, the resulting research question emerges: which approach, based on experts’ knowledge and easy to update as soon as knowledge and technologies evolve, would enable CAD/CAM engineers to make decisions related to AM? 1.2 Objectives and Scope To assist these engineers in their decisions leads to the assumption that, thanks to the modelling of their knowledge, they should be able to: – Understand the concepts at stake at any moment of the manufacturing process – Assess and plan the consequences of their actions – Decide and act (i.e. choose the correct process parameters, design relevant part supports, etc.) The objectives of this paper are to propose a model for the representation and integration of knowledge created in the action, so as to harmonize the community practices and support CAD/CAM engineers. Based on a combination of static and dynamic elements, it has the advantages to be based on experts’ knowledge and easy to manipulate. The approach to build and use this knowledge-aided manufacturing is detailed in the next sections. First of all, a state of the art related to knowledge elicitation is presented, followed by major elicitation techniques applied to the AM context. A relational knowledge model is then proposed as a reply to the research question. It highlights the main elements as a basis for decision-making support. As a perspective, a use of this model is finally exposed in several activities of the manufacturing process.

2 Knowledge Management for AM

2.1 Knowledge Elicitation as a KM Process Step

As previously explained, knowledge can be defined as a mental process since it is produced inside one’s head through the manipulation of concepts, experience and abstraction, among others. In this article, concepts refer to mental objects and ideas but also to physical or concrete elements that constitute the AM domain. Knowledge gives rise to information as soon as it is exteriorized with the help of elicitation techniques used by a knowledge engineer [14]. Consequently, its representation presents a challenge and necessitates a structuring and knowledge management methodology. The use of Knowledge Management Systems (KMS) facilitates knowledge integration thanks to a



classification of key concepts and a codification in the form of information and data [15, 16]. With regard to AM, knowledge integration can encompass many areas; Fig. 2 shows in red the main ones discussed in this article, namely knowledge elicitation, one of the major steps in the KM process, its creation, characterization, structuring and representation.

Fig. 2. Various domains related to knowledge integration

2.2 Elicitation Techniques Tested in AM Context

In an AM context, the idea is to “capture” procedural and conceptual knowledge among individuals. In addition to these types, this work focuses on tacit knowledge, as it is the most difficult to externalize, and more complex and deeper than explicit knowledge. Several elicitation techniques taken from [17] have been tested on AM activities for which decisions are hard to take, due to the numerous parameters to select throughout the AM process (Fig. 3) [18].

Fig. 3. Example of various parameters to be considered in AM (EBM technology)



They have led to the conclusion that other tools and techniques are crucial for AM knowledge capture, namely:

– Physical parts used as intermediary objects [19], which favor discussion and arguing.
– The importance of collaborative argumentation in a problem-solving situation [20, 21].
– The development of an influence matrix [22] submitted to each expert, then reviewed during a collective session to lead to the co-construction of new knowledge.
– A knowledge classification by the identification of concept classes related to design and manufacturing [10].
– The creation of knowledge content and its dynamic storage into a structuring model, the Relational Knowledge Model (RKM) [23], which is the purpose of this paper.

In addition to process mapping and concept mapping applied to small struts [24] or to the supporting of parts [9] built by Electron Beam Melting (EBM) technology, a specific technique called iterative LIT was developed for the capture and formalization of experts’ knowledge [25]. This externalization of tacit knowledge into explicit knowledge (Fig. 4) has to be structured for better understanding and use, hence for supporting decision making.

Fig. 4. The process of knowledge transformation (Mbow et al. 2021)

Thus, classes of concepts related to this domain are identified, as well as “influence relationships” between each other and “action rules”. These elements have been formalized for EBM; their definitions and structuring are detailed as a new knowledge model in the next section. This whole model was the starting point to the mathematization of these rules, and led furthermore to the development of an AM knowledge-aided software dedicated to the orientation of parts [26].

3 Proposition of a Knowledge Model Related to EBM

3.1 RKM Definition

The proposed model, entitled RKM for Relational Knowledge Model, is based on the distinction between procedural and conceptual knowledge (Fig. 5), which makes it both dynamic and stable at the same time. It contains six major elements:



– process steps that describe the AM domain;
– actions and action rules related to the operations and rules applied to those process steps;
– concepts about the domain, characterized by a definition, whether qualitative or quantitative. Evaluation concepts are specific types of concepts that are quantifiable and connected to action rules (Sect. 3.2 details evaluation concepts further);
– relationships between concepts (depending on the knowledge status, they can be described through influences, state rules that express facts, or physical or mathematical laws);
– and lastly, justifications, which are explanations provided to the CAD/CAM engineer so they better understand these relationships.

Fig. 5. Knowledge structure for RKM
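As an illustration of how these six elements relate to each other, the following Python sketch encodes a minimal, purely illustrative version of the RKM structure; the class names and the example instances are assumptions made here, not an implementation taken from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Concept:
    """A domain concept (machine, part attribute, handle, ...)."""
    name: str
    definition: str = ""

@dataclass
class EvaluationConcept(Concept):
    """A quantifiable concept against which action rules are evaluated."""
    evaluate: Callable[..., float] = None   # how the criterion is computed

@dataclass
class ActionRule:
    """Non-sequential rule (minimize / maximize / avoid / require) tied to an evaluation concept."""
    verb: str
    evaluation_concept: EvaluationConcept
    justification: str = ""

@dataclass
class Action:
    """Sequential task performed at a given process step."""
    name: str
    rules: List[ActionRule] = field(default_factory=list)

@dataclass
class ProcessStep:
    """One step of the AM process, holding its actions (procedural knowledge)."""
    name: str
    actions: List[Action] = field(default_factory=list)

# Example: the CAM-operations step with the part-orientation action and one of its rules.
overhang = EvaluationConcept("Total of non-machined overhanging areas",
                             "sum of down-facing, non-machined facet areas")
orient = Action("Orient the part",
                [ActionRule("minimize", overhang, "avoid unproductive supporting and machining")])
cam = ProcessStep("CAM operations", [orient])
```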

The classical breakdown “procedural/conceptual” is taken from the KM literature. RKM procedural knowledge is based on AM observations and practices. Conceptual knowledge takes inspiration from the structure of ontologies, using relationships. However, the experts’ justifications and the classification of relationships take into account the experts’ conviction level [21]. The next section explains the links between these elements in the framework of additive manufacturing.

3.2 RKM Structure Illustration

The previous model has been applied to the Electron Beam Melting AM process, following more than ten elicitation sessions with different AM experts. The resulting RKM model takes into account the following knowledge objects:

– The manufacturing process: it is composed of the sequential process steps that detail the manufacturing actions to be followed (Fig. 6). This process corresponds to procedural knowledge.



– The actions: also part of procedural knowledge, these are sequential tasks to undertake at a specific process step in order to achieve a goal. Actions are formulated with a verb of action. To orient a part is for instance the first action to take at CAM operations step for EBM technology.

Fig. 6. Process and sequential action for EBM technology

– The environment: it is composed of contextual elements used in the part manufacturing, for instance the machines, tools, or material. They constitute concepts.
– The part characteristics: typically defined as concepts, they are based on the specifications, including the geometrical aspects (shape, volume, surface, edge, and vertex) and the part attributes. The latter allow geometrical items to be discriminated on the basis of information known or provided either in the upstream or downstream phase of the fabrication step.
– The handles: they are concepts on which the CAM engineer can act to affect the final cost, quality and deadlines. For instance, in the case of a CAM step, the part orientation angles in relation to the fabrication plate are values that can be modified. In WAAM (Wire Arc Additive Manufacturing), handles could be, depending on the process step, the parameters linked with the robot, the table, the part or else the welding process itself.
– The evaluation concepts: they are essential concepts that guide the engineers in their decisions, as they are quantifiable criteria that trigger action rules. For instance, in the case of the part orientation action, by varying the two angles (as handles), an AM expert assesses the impact of his/her decisions thanks to these quantifiable evaluation concepts (Fig. 7).
– The action rules (AR): typically part of procedural knowledge, these non-sequential rules act as decision support at a specific step of a process and are derived from the evaluation concepts. In the EBM context, they are expressed by means of one action verb among a list of four: minimize, maximize, avoid, require. For example, «minimize part shadow onto the start plate» is an action rule to apply at the CAM process step and the part orientation action. To carry out an action, the engineer selects various action rules at his/her disposal, some of which could be considered a priority over others depending on the need and context.

Generally speaking, this model is constructed in a modular and dynamic way. In contrast with ontologies, which are more static and detailed, it is simple to update. On the one hand, it includes well-identified and stable groups related to the input: technology (process steps, environment data) and part classification. On the other hand, elements at



Fig. 7. Evaluation concepts in the case of EBM part orientation (Mbow et al., 2021)

a more detailed level, such as evaluation concepts and action rules, can evolve following the experts’ experience. The latter can be easily updated as soon as the experts feel the need. For instance, if the calculation of an evaluation concept needs to be modified, experts can gather to discuss the required changes based on more recent experiments. A simple workflow included in the RKM model would help to warn the main experts about the necessity to review such elements.

3.3 RKM Application to Part Orientation

A field study was carried out with ten EBM experts from industry and from academic research centers [22]. Following individual and collective knowledge elicitation sessions about the action of EBM part orientation, Fig. 8 represents the RKM that has been designed. At the process step of CAM operations, optimally orienting the part in the fabrication chamber proved to be the very first action of the experts. But when detailing what they do to orient the part, it turns out that this sub-process is highly context dependent, and that no general sequential way can be formalized. Action rules, as previously defined, are used to report which precise rules guide them in the context-dependent part orientation action. In our case, following collective and iterative LIT sessions, a set of eight action rules was identified [22] (see the blue frames on the right side of the figure). Whether these action rules are satisfied or not for part orientation depends on two handles that the CAM engineer can manipulate, namely the α and β angles. The model structures the various knowledge objects presented in the previous section and their interrelations: the sequential way of reasoning on the top (process steps and actions). Then comes, for each action, a more global way of reasoning: action rules and RKD concepts linking action rules with the input by way of evaluation concepts. The first identified category of input concerns the handles that can be adjusted by the users. Then comes the environment and finally the part itself with its numerical description (specifications and shape).



When detailing, for example, the action rule “Minimize total of non-machined overhanging area” (Fig. 8), the associated evaluation concept is «Total of non-machined overhanging areas». In order to evaluate it, the user has to calculate the sum of overhanging and non-machined surfaces, so as to minimize it, and in the end optimize the manufacturing process. Several concepts are then considered: the oriented shape, the resulting oriented surfaces, as well as the overhanging surfaces associated with non-machined surfaces. Indeed, these overhanging surfaces require support structures, which, once removed, leave marks onto the surfaces. This latter is no trouble in the event this surface is going to be machined for any reason. However, if the surface does not require machining, these marks should generally be avoided. Hence, combined with this action rule, the justification here is to avoid supporting and machining operations that are identified as unproductive, or even prove to be expensive.

Fig. 8. RKM for AR “Minimize total of non-machined overhanging areas” and part orientation action
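As an example of how such an evaluation concept could be computed, the following Python sketch estimates the total non-machined overhanging area of a triangulated part for a given pair of orientation angles. The 45° critical overhang angle, the rotation convention and the toy mesh are assumptions made for the illustration, not values from the paper.

```python
import numpy as np

def non_machined_overhang_area(triangles, machined, alpha, beta, critical_deg=45.0):
    """Evaluate the concept "total of non-machined overhanging areas" for a triangle mesh.

    triangles : (n, 3, 3) array of vertex coordinates; machined : (n,) booleans;
    alpha, beta : part orientation angles (rad) about the x and y axes (the two handles)."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    tris = triangles @ (Ry @ Rx).T                          # rotate the part
    n = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    area = 0.5 * np.linalg.norm(n, axis=1)                  # facet areas
    nz = n[:, 2] / np.where(area > 0, 2 * area, 1.0)        # z component of the unit normals
    overhanging = nz < -np.cos(np.radians(critical_deg))    # facing downward beyond the critical angle
    return float(np.sum(area[overhanging & ~np.asarray(machined)]))

# Toy example: one down-facing facet (not machined) and one up-facing facet, each of area 0.5.
tris = np.array([[[0, 0, 0], [0, 1, 0], [1, 0, 0]],
                 [[0, 0, 1], [1, 0, 1], [0, 1, 1]]], dtype=float)
print(non_machined_overhang_area(tris, machined=[False, False], alpha=0.0, beta=0.0))  # 0.5
```

An orientation tool built on this evaluation concept would then search over the α and β handles to minimize the returned value, which is the role of the action rule discussed above.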

Hence, in this model, the process stands out as the starting point that enables users to navigate and guides them through the main concepts. At each process step, any action to undertake requires the RKM concepts to be considered jointly, namely: as input, the environment and the part specifications; and as output, the evaluation concepts and action rules triggered by the handles, guided by the ultimate objective of cost, quality and deadline optimization.



4 Conclusion

4.1 Towards a Knowledge-Based Manufacturing Solution

Additive manufacturing is a relatively new process compared to traditional manufacturing, and its knowledge is still not mature, all the more so because many technologies have been invented in the last decade (whether liquid-, powder- or solid-based processes). Our position is to consider that the knowledge in people’s heads has different maturity levels among experts and practitioners. Hence, thanks to collective and iterative knowledge elicitation, action rules related to a specific process step can be formalized. However, as many AM parameters have to be taken into account by engineers, manufacturing rules cannot always be applied systematically. This is why the proposed model provides guidance through evaluation concepts and concurrent action rules. This methodology meets the decision-making needs of CAD/CAM engineers. Indeed, the knowledge-based AM model proposes to display the concepts mobilized in their reasoning and manufacturing problem solving. These experts then have the possibility to focus on the impact of their actions as soon as they modify the handle values. Besides this user-support function, this approach may contribute to the harmonization of practices, as well as to knowledge transfer and sharing within a company. The implementation of this model can be either at company level, provided the company’s objectives are to harmonize practices, or at community level (for instance within a researchers’ community or a machine users’ community).

4.2 Further Opportunities

In terms of perspectives, codification is of especially high importance since it helps to structure experts’ knowledge and mathematize it. A software tool based on fuzzy logic and desirability functions has subsequently been developed for visualizing the eight orientation rules and supporting EBM part orientation [27]. This software covers a small part of the AM process since it handles part orientation. In addition to it, it would for instance be worthwhile to deepen and lead further research on the support structure generation step, on applications to other AM technologies, or even on other activities such as design.
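As a hint of what such a desirability-based aggregation can look like, the sketch below combines per-rule desirabilities with a geometric mean to rank candidate orientations. It is only a schematic illustration: the rule names, ranges and values are invented here, and the actual software's fuzzy-logic formulation is not reproduced.

```python
import numpy as np

def desirability(value, worst, best):
    """Map an evaluation-concept value to [0, 1]: 1 at 'best', 0 at 'worst' (linear ramp)."""
    return float(np.clip((value - worst) / (best - worst), 0.0, 1.0))

def orientation_score(rule_values, worst_best):
    """Aggregate the per-rule desirabilities of one candidate orientation with a
    geometric mean, so that any fully undesirable rule zeroes the overall score."""
    d = [desirability(v, *wb) for v, wb in zip(rule_values, worst_best)]
    return float(np.prod(d) ** (1.0 / len(d)))

# Two candidate orientations evaluated on three hypothetical rules (e.g. overhanging area,
# part shadow on the start plate, build height): lower is better for all three.
worst_best = [(500.0, 0.0), (200.0, 0.0), (120.0, 0.0)]
candidates = {"flat": [350.0, 40.0, 20.0], "tilted": [60.0, 90.0, 80.0]}
print(max(candidates, key=lambda k: orientation_score(candidates[k], worst_best)))
```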

References 1. Polanyi, M.: The Tacit Dimension. Routledge & K. Paul, London (1967) 2. Boisot, M., Canals, A.: Data, information and knowledge: have we got it right? J. Evol. Econ. 14(1), 43–67 (2004) 3. Grant, K.: Tacit knowledge revisited–we can still learn from Polanyi. Electron. J. Knowl. Manag. 5(2), 173–180 (2007) 4. Grundstein, M.: From capitalizing on company knowledge to knowledge management. Knowl. Manag. Class. Contemp. Works 12, 261–287 (2000) 5. Cooke, N.J.: Varieties of knowledge elicitation techniques. Int. J. Hum. Comput. Stud. 41, 801–849 (1994)

6. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, Oxford (1995) 7. Mougin, J., Boujut, J.-F., Pourroy, F., Poussier, G.: Modelling knowledge transfer: a knowledge dynamics perspective. Concurr. Eng. 23(4), 308–319 (2015) 8. Tsoukas, H.: Do we really understand tacit knowledge. Manag. Knowl. Essential Read. 107, 1–18 (2005) 9. Grandvallet, C., Pourroy, F., Prudomme, G., Vignat, F.: Eliciting and structuring additive manufacturing knowledge, a case study on supporting structures for EBM parts. In: Proceedings of the 21st International Conference on Engineering Design, ICED17, 21–25 August 2017. The University of British Columbia, Vancouver (2017) 10. Dinar, M., Rosen, D.: A design for additive manufacturing ontology. J. Comput. Inf. Sci. Eng. 17 (2017). https://doi.org/10.1115/1.4035787 11. Sanfilippo, E.M., Belkadi, F., Bernard, A.: Ontology-based knowledge representation for additive manufacturing. Comput. Ind. 109, 182–194 (2019) 12. Lu, Y., Yang, Z., Eddy, D., Krishnamurty, S.: Self-improving additive manufacturing knowledge management. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 51739, p. V01BT02A016. American Society of Mechanical Engineers (2018) 13. Wang, C., Tan, X.P., Tor, S.B., Lim, C.S.: Machine learning in additive manufacturing: stateof-the-art and perspectives. Addit. Manuf. 36, 101538 (2020) 14. Wilson, T.: The nonsense of knowledge management. Inf. Res. 8(1), 1–25 (2002) 15. Alavi, M., Tiwana, A.: Knowledge integration in virtual teams: the potential role of KMS. J. Am. Soc. Inform. Sci. Technol. 53(12), 1029–1037 (2002) 16. Bernard, J.: A typology of knowledge management system use by teams. In: Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS 2006). IEEE (2006) 17. Milton, N.R.: Knowledge Acquisition in Practice: A Step-by-step Guide. Springer, London (2007). https://doi.org/10.1007/978-1-84628-861-6 18. Grandvallet, C., Pourroy, F., Prudhomme, G., Vignat, F.: Testing three techniques to elicit additive manufacturing knowledge. In: Eynard, B., Nigrelli, V., Oliveri, S., Peris-Fajarnes, G., Rizzuti, S. (eds.) Advances on Mechanics, Design Engineering and Manufacturing. LNME, pp. 281–288. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-45781-9_29 19. Boujut, J.-F., Blanco, E.: Intermediary objects as a means to foster co-operation in engineering design. Comput. Supported Coop. Work (CSCW) 12(2), 205–219 (2003) 20. Baker, M.J.: The function of argumentation dialogue in cooperative problem-solving. In: van Eemeren, F.H., Grootendorst, R., Blair, J.A., Willard, C.A. (eds.) Proceedings of the 4th International Conference on Argumentation (ISSA 1998), pp. 27–33. SIC SAT Publications, Amsterdam (1998) 21. Darse, F.: Analyse du processus d’argumentation dans une situation de reconception collective d’outillages. PUF Le Travail Humain 69, 317–347 (2006) 22. Stenzel, I., Pourroy, F.: Integration of experimental and computational analysis in the product development and proposals for the sharing of technical knowledge. Int. J. Interact. Des. Manuf. 2(1), 1–8 (2008) 23. Grandvallet, C.: Elicitation et structuration des connaissances dans le cadre de la fabrication additive. Thèse. Université Grenoble Alpes, France (2018) 24. Grandvallet, C., Vignat, F., Béraud, N., Pourroy, F., Prudomme, G.: An approach to model additive manufacturing process rules. Int. J. Mech. Eng. 
Robot. Res. 7, 9–15 (2018)

25. Grandvallet, C., Mbow, M.M., Mainwaring, T., Pourroy, F., Vignat, F., Marin, P.R.: Eight action rules for the orientation of additive manufacturing parts in powder bed fusion: an industry practice. Int. J. Interact. Des. Manuf. 14, 1159–1170. (2020) 26. Mbow, M.M., Grandvallet, C., Vignat, F., Marin, P.R., Perry, N., Pourroy, F.: Mathematization of experts knowledge: example of CAM for additive manufacturing. J. Intell. Manuf. (2020) 27. Mbow, M.M., Marin, P.R., Perry, N., Vignat, F., Grandvallet, C.: Knowledge-based evaluation of part orientation desirability in powder bed fusion additive manufacturing. In: Proceedings of the International Joint Conference on Mechanics, Design Engineering & Advanced Manufacturing. ICED21, Gothenburg, Sweden, 16–20 August 2021 (2021)

Systems and Requirements Engineering: An Approach and a Software Tool for the Interactive and Consistent Functional Requirement Specification

Agnese Pasquariello(B), Ferdinando Vitolo, and Stanislao Patalano

Fraunhofer J-Lab IDEAS, Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
[email protected]

Abstract. Managing the complexity of modern systems is a current challenge involving all the phases of a system development process. Complex systems also mean a great number of stakeholders involved and just as many needs to satisfy. In the Systems Engineering approach for complex systems design, Requirements Engineering provides methods to collect stakeholders’ needs and to translate them into system requirements. Furthermore, several tools currently allow creating requirements repositories and managing their properties and changes. Traceability is also implemented to verify the design against requirements. Although there are several standards and guidelines which provide criteria and rules for writing requirement statements, a process that drives the user in the correct construction of requirements is still missing. The present paper addresses the requirement specification process in order to provide tools that can support and drive designers in requirements writing. In particular, the paper (i) provides a framework for writing syntactically correct requirements that can be verified early against pre-defined criteria such as clarity, singularity, conformity, and descriptiveness; (ii) presents a software tool for creating well-defined requirement statements, offering the user the possibility to formulate “correct requirements by construction”. The driven definition of system requirements leads to a well-defined specification, thus reducing time and effort during the following steps of the system development process.

Keywords: Systems Engineering · Requirements specification · Rules implementation · Consistency analysis

1 Introduction

Handling the growing complexity of modern systems is a current challenge which has to be faced in all the product life-cycle steps. From a strictly technological point of view, complexity is the characteristic of highly integrated, multi-domain “Systems of Systems” (SoS) that exhibit emergent behaviour [1]. However, complexity also increases due to the high number of stakeholders involved in a product or service development.

As expectations become numerous and sometimes opposite or contradictory, the designers’ effort to synthesize them grows [2]. Firstly, customers are those who demand technologically advanced solutions with a large number of additional features. Besides the demanded functionalities, safety, reliability, maintainability and ease of use are just examples of other common stakeholder needs, often conditioned by regulations. Furthermore, a modern company cannot be indifferent to the global objectives of environmental sustainability, so it must comply with standards and constraints and, at the same time, pursue innovation. Finally, low cost and short time to market complete the profile of a competitive organization.

Systems Engineering is a widely adopted systematic approach for engineering complex systems. It provides an iterative and holistic process to develop a system whose subsystems, components and parts behave as expected and meet all the stakeholders’ needs. The V-Model is the emblematic representation of that process, showing the system design as a top-down approach on the left side, the system integration as a bottom-up approach on the right side, and verification and validation activities as means of assuring that the implemented system has the required characteristics [3]. It is therefore immediate to realize that requirements engineering has a vital role to play, since requirements represent both the beginning of the design process and its assessment tool.

A requirement is a statement that must be formulated both in the problem and in the solution domain of system design. In the problem domain, a stakeholder requirement identifies what users, customers, suppliers, developers, or the business need from a new system. In the solution domain, a system, subsystem, or component requirement identifies what the system must do in order to satisfy those needs. So, a requirement is a statement which expresses or translates a need and its associated constraints and conditions [4].

Requirements Engineering is the discipline dealing with requirements development and management. Requirements development includes the activities of elicitation, analysis, specification, and validation of requirements [5]. The goal of requirements elicitation is to identify all the relevant stakeholders involved in a project and to capture the requirements from all of them. Requirements analysis is used to improve the quality of the elicited requirements in terms of form and content. A structured hierarchical documentation is one of the requirements specification activities. Finally, validation confirms that requirements meet the needs of the customer and other identified stakeholders. Cross-sectional activities like traceability and the monitoring and controlling of changes contribute to requirements management. All of these activities, if not properly executed, can lead to project failure. There is no chance to manage complexity if the basis for planning the development of a system and accepting it on completion is missing or badly defined [6].

The requirement is primarily a statement, thus natural language still remains the most used means of expression. The main reason is that natural language gives the user the innate freedom to express a wide variety of concepts and is universally recognized. However, such freedom may lead to unclear or ambiguous statements if not properly limited by well-defined rules. The current article aims to provide a framework of syntactic rules for writing correct functional requirements, i.e. requirements that can be easily validated and verified as they are easily understandable and uniquely interpretable.

The article is organized as follows: Sect. 2 presents the state of the art on requirements engineering with reference to the analysis step. Section 3 synthesizes the proposed framework for writing correct requirements. Section 4 illustrates the developed Requirements Writing Tool (RWT), which makes the proposed framework usable, the testing activity and the related results. Finally, conclusions and future works are reported in Sect. 5.

2 State of the Art

Several research articles propose criteria for writing a good requirement, as well as criteria that apply to the entire set of requirements. A common framework is the SMART requirement, i.e. a requirement that is specific (clear, consistent, simple, with an appropriate level of detail), measurable, attainable, realizable and traceable [7]. Other criteria impose that requirements must also be verifiable, consistent, unambiguous, necessary and solution independent [8]. Sometimes, slight differences between the adopted terms can be found in research works. Some requirement qualities are enhanced when others are achieved, e.g. verifiability is enhanced when the requirement is measurable. Furthermore, some criteria can be fulfilled by acting on the requirement expression, i.e. on the language syntax and semantics, while others refer more to the requirement content. The INCOSE guide [9] is one of the most directly usable references since it provides 14 desirable characteristics of requirement statements, as well as 44 rules that may be applied to achieve those characteristics. Similarly, the international standard ISO/IEC/IEEE 29148:2018 [4] provides guidance on how to build correct requirement constructs. The standard provides requirement characteristics, distinguishing between individual requirements and sets of requirements. Moreover, it provides a set of criteria to correctly write requirements. Although there are several references to rely on for writing a correct requirement, it is difficult to apply them by oneself while writing requirements. The Requirements Quality Analyzer (RQA) [10] is an existing tool which receives requirements as input data and computes quality metrics and recommendations as output. However, a double effort must be made to first write requirements and then correct them.

A requirement can also be expressed in tabular form, in formal notation or in domain-specific notations. Formal languages are addressed by several research works, as one of their main benefits is the possibility to develop and debug requirements according to the precise syntax and semantics rules of a computer language. Furthermore, as in the case of the Formal Requirements Modelling Language (FORM-L) [11], formal notation helps to automate requirements verification through simulation tools. A subset of FORM-L constructs is implemented by [11] in an open-source Modelica library as graphical blocks that can be dragged and dropped, thus allowing the object-oriented modelling of a requirement. The modelled requirement is then linked to the dynamic model of the system under development for verification. However, the tool chain begins in any case with natural language requirements, taken for granted as a clearly understandable input. Furthermore, it is not clear whether every type of requirement can take advantage of formal modelling, as for example stakeholder or functional requirements in early system design.

Natural language can be standardized by using predefined grammatical structures for different types of textual requirements. These structures are also known as “boilerplates” and consist of fixed terms and placeholders that the user must fill with system-related data. Dick et al. [12] proposed an approach based on boilerplates to assure that syntax properties are present in the requirements as they are written, rather than evaluating them afterwards. The Requirements Authoring Tool (RAT) from The Reuse Company [13] allows the requirements author to start typing a requirement and simultaneously determines which boilerplate templates are applicable, guiding the user on permissible terminology. However, this prevents the user from exercising creativity.

While there are guidelines and tools for defining requirements correctly, they are not widely used. This is due to the lack of a unique and standardized framework to adopt, but also to the absence of a tool that simply guides the user in writing a correct requirement instead of merely verifying it afterwards. The present work aims to provide an approach for the correct definition of functional requirements, i.e. for writing requirement statements that can be verified early against pre-defined rules. The current approach is intended to support the user in generating understandable and useful statements from the results of requirements elicitation. The purpose is to overcome the actual difficulty of complying with predefined rules while writing requirements, thus providing an interactive way to obtain correct requirements by construction. The overall intent is not to replace natural language, which is the means that gives the widest possibility of translating stakeholders’ needs, but rather to offer the engineer a process that guides the correct writing of a requirement from the point of view of its syntax and semantics.

In order to achieve the expected results, a process of knowledge acquisition and organization made possible a synthesis of common natural language issues, desirable requirement properties and standardized rules to follow. A graphical user interface, named Requirements Writing Tool (RWT), was developed for authoring requirements that can be verified early according to the systems engineering approach. Finally, a set of badly defined requirements was used to test the RWT. Different types of natural language issues were identified, and the percentages of resolved and unresolved issues contributed to validating the proposed approach. The paper demonstrates that the guided definition of system requirements leads to a well-defined specification, thus reducing time and effort during the development process of complex systems.

3 Syntax Framework for Requirement Statements Construction

The syntax adopted for writing functional requirements follows a very simple scheme, based on [4]. A requirement statement consists of a main clause and a sub-clause, or condition. The main clause is the direct expression of what the system shall do to satisfy a need. It is composed of a function and, optionally, a constraint. The condition states in which cases the system shall do what is stated by the functional requirement, whether or not the functional requirement is limited by a constraint. The function of the main clause follows the SVOC (subject – verb – object – complement) form. The subject is the system, subsystem or component that performs or undergoes the action, while the object

is the element on which the action falls. The complement integrates the statement with further information. A specification is introduced for both the object and the complement in the form of a genitive case. Conditions are introduced by a conditional link and follow the same SVOC form as the main clause. Alternatively, the condition can also be stated in the form of a subject and a value, e.g. a physical quantity or the state of a component. The proposed approach does not provide a template for defining constraints, in order not to limit the user’s choices. Figure 1 shows the proposed framework for requirement syntax in the form of a boilerplate.

Fig. 1. Proposed framework for writing syntactically correct requirements

Fixed elements of the requirement statement are in bold (e.g. “the” or “shall”), while terms in the placeholders need to be chosen by the user according to the available data (e.g. actor name, actor action, value, system name, system action, object, complement, genitive case). Other terms can be selected by the user from a pre-defined set (e.g. conjunctions like “when”, “if” and “while” for the condition, or prepositions like “from” and “to” for the complement). Elements in round brackets (e.g. articles) are optional. Following this framework, a requirement is verified against the property of conformity, since it complies with an approved template [4]. Furthermore, it will be singular and necessary, because the framework avoids the use of multiple statements and the pre-defined fields allow stating only the essential capability [9]. The framework was built with reference to system-level design, with the intent of authoring appropriate requirements, i.e. requirements whose level of detail suits each design stage [7].

Figure 2 shows how to formulate a requirement with the proposed framework. The system of interest is a separator system which shall perform a specific function while an external actor, e.g. the conveyor, is interacting with it.

Fig. 2. Construction of a functional requirement statement with the proposed framework
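A minimal sketch of how such a boilerplate could be represented programmatically is given below, loosely following the separator/conveyor example of Fig. 2; the field names, the assembled wording and the resulting statement are illustrative assumptions and do not reproduce the internal data model of the RWT.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Clause:
    """SVOC clause: subject - verb - object - complement (free-text placeholders)."""
    subject: str
    verb: str
    obj: str = ""
    complement: str = ""

@dataclass
class FunctionalRequirement:
    main: Clause
    condition: Optional[Clause] = None
    conditional_link: str = "when"  # pre-defined links: "when", "if", "while"

    def text(self) -> str:
        parts = []
        if self.condition is not None:
            c = self.condition
            cond = f"{self.conditional_link} {c.subject} {c.verb} {c.obj} {c.complement}"
            cond = " ".join(cond.split())          # drop gaps left by empty placeholders
            parts.append(cond[0].upper() + cond[1:] + ",")
        m = self.main
        main_clause = f"the {m.subject} shall {m.verb} {m.obj} {m.complement}"
        parts.append(" ".join(main_clause.split()) + ".")
        return " ".join(parts)

# Illustrative use: a hypothetical separator/conveyor requirement
req = FunctionalRequirement(
    main=Clause(subject="separator system", verb="separate",
                obj="the incoming parts", complement="into two streams"),
    condition=Clause(subject="the conveyor", verb="is feeding",
                     obj="parts", complement="to the separator system"),
    conditional_link="while",
)
print(req.text())
```

Because the fixed terms (“the”, “shall”, the conditional link) are hard-coded and only the placeholders are free text, any statement assembled this way automatically conforms to the template, which is the essence of “correctness by construction”.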

4 Tool Support and Evaluation of the Theoretical Framework

In order to prove the effectiveness of the proposed framework, a software tool named Requirements Writing Tool (RWT) was developed in the Matlab® App Designer environment [14]. The tool implements the principles for writing correct requirements described in the previous section of the paper. In particular, the tool supports the user in writing system requirement statements that comply with the syntax rules. The following sections illustrate the functionalities of the developed software tool, its application for the evaluation of the proposed framework and, finally, a brief discussion of the obtained results.

4.1 Description of the Requirements Writing Tool (RWT)

The RWT window is organized into four main sections. The Requirements Tree panel shows the hierarchy of system requirements, which appear with their respective ID and summary attributes. The Requirements Tree Editing panel allows the user to create, delete and edit the summary of parent and child requirements and to reset the entire specification. After the creation of a requirement, a default boilerplate appears in the Requirement Description text field. The Requirements Editing panel is the operative panel which allows the construction of requirement statements, guiding the user in filling the placeholders. This panel is empty when the tool is opened, but if the user creates and selects a requirement in the Requirements Tree, interactive buttons appear. Figure 3 shows the RWT window after the creation and selection of a requirement.

The formulation of a requirement description begins with the Main Clause, followed by the Condition, whose template appears only if the Enable Condition box is selected. It is possible to add (and delete) more than one sub-clause. The Main and Conditional clauses follow the proposed framework. All the elements forming both clauses have their own boilerplate that appears in the Requirement Description text field, thus contributing to the construction of the entire requirement sentence. Figure 4 shows the RWT window with all main and sub-clause elements enabled.

Fig. 3. RWT window after the creation and selection of a requirement.

Fig. 4. RWT window with Main Clause and Condition elements enabled.

4.2 Tool Testing for Framework Evaluation

This section describes a preliminary test of the RWT performed on 71 requirements belonging to four sets of requirements collected from the literature and specified for: a Traffic Light System [15]; a Prognostic and Health Management system for airplanes [16]; a didactic robot for lunar missions; and an automated CMC materials cutting system for industrial applications [17]. The requirements have been analysed according to the requirement characteristics provided by ISO/IEC/IEEE 29148:2018 [4].

The most important characteristics (adopted in this paper) are:

1. Unambiguous: the requirement is stated in such a way that it can be interpreted in only one way;
2. Necessary: the requirement defines an essential capability, characteristic, constraint and/or quality factor which cannot be fulfilled by implementing other requirements;
3. Singular: the requirement states a single capability, characteristic, constraint or quality factor;
4. Conforming: the individual items conform to an approved standard template and style for writing requirements.

Natural language issues have been encountered in the way those requirements are written. The total number of requirement issues is 196. The issues are summarized as follows:

1. Ambiguity: the requirement is stated in such a way that it can be interpreted in multiple ways [4]; the requirement is not clearly understandable [7].
2. Non-requirement issues: the requirement contains descriptive text, statements of fact, futurity, declarations of purpose, preferences, goals, suggestions, or allowances [4].
3. Plurality: the requirement statement includes more than one requirement [4, 7, 9].
4. Wrong requirement structure: a sentence construction that does not allow distinguishing between functions and relative constraints, including a missing or altered SVOC (Subject – Verb – Object – Complement) structure [4].

Figure 5 shows the distribution of language issues in percentage. A requirement may have more than one issue of the same type.

Fig. 5. Natural language issues of requirement sample
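To give a concrete sense of how some of these issue classes can be screened automatically, the sketch below applies a few naive lexical checks to a requirement statement; the word lists and patterns are illustrative assumptions only, far simpler than the checks implemented in the RWT or in commercial tools such as RQA.

```python
import re

VAGUE_TERMS = {"appropriate", "adequate", "user-friendly", "fast", "easy", "some", "several"}
WEAK_VERBS = {"will", "may", "should", "can"}

def screen_requirement(text: str) -> dict:
    """Very rough lexical screening for a few natural-language issue classes."""
    lowered = text.lower()
    words = re.findall(r"[a-z'-]+", lowered)
    return {
        # Potential ambiguity: vague, unmeasurable words or dangling pronouns
        "ambiguity": sorted({w for w in words if w in VAGUE_TERMS} |
                            {w for w in words if w in {"it", "they", "this"}}),
        # Potential non-requirement: missing "shall" or weak auxiliary verbs
        "non_requirement": ("shall" not in words) or any(v in words for v in WEAK_VERBS),
        # Potential plurality: conjunctions joining several capabilities
        "plurality": [m.group(0) for m in re.finditer(r"\b(and|or)\b", lowered)],
    }

print(screen_requirement(
    "The PHM system shall analyse the data to determine impending system degradation or failure."))
```

For the PHM example the only flag raised is the “or”, which is consistent with the plurality issue discussed for Req2 in Table 1 below; a real checker would of course need syntactic analysis rather than word lists.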

The main goal of the testing activity is to prove that the implemented software tool and the proposed framework are suitable for authoring correct functional requirements, i.e. requirements free of the language issues arising from improper syntax. As the selected case studies rely on independent works, the RWT has also been indirectly tested to prove that it has the potential to verify and correct pre-defined sets of requirements. For brevity, the language issues of a sample of two requirements are presented in Table 1 (the superscripts refer to the issue numbering above). Figure 6 shows the Requirement Description text fields of the improved requirements rewritten using the RWT.

Req1 [15]: The emergency vehicle detector notifies² the intersection controller whenever an emergency vehicle is approaching¹ and³,⁴ when it¹ has crossed the intersection and³,⁴ is moving away.

Req2 [16]: The PHM system shall analyse the data to³,⁴ determine impending system degradation or¹,³ failure.

Fig. 6. Improved requirements: improved Req1 (a); improved Req2 (b)

Table 1. Example of natural language issues for system requirements.

Ambiguity
  Req1: “is approaching¹” does not state a measurable quantity for defining the position of the emergency vehicle; “it¹” has an unclear referent, and only the context can help in understanding it.
  Req2: “or¹” is a conjunction that connects two or more possibilities or alternatives; in this case it is ambiguous which of the two (degradation or failure) the determination refers to.

Non-requirement issue
  Req1: “notifies²” is descriptive text that makes it difficult to understand whether the system already has the capability to notify or whether this capability is needed.
  Req2: //

Plurality
  Req1: the use of “and³” in a requirement automatically gives rise to a plurality of the information stated.
  Req2: the use of “to³” after the function (the system shall analyse the data) gives rise to another system action: “analysing the data” and “determining system degradation/failure” are two different functionalities. The use of “or³” should be avoided because in this case it gives rise to a plurality: system degradation/system failure.

Wrong requirement structure
  Req1: the use of “and⁴” causes a misuse of the SVOC structure; note the absence of the parent statement (The emergency vehicle detector notifies the intersection controller) after the “and”.
  Req2: the action “to⁴ determine” is neither a constraint nor a condition; the SVOC is altered because of the inherent plurality.

Fig. 7. Results of RWT testing: a) percentages of solved and unresolved issues, b) type of natural language issues before and after the usage of RWT

4.3 Results

The authors rewrote the set of 71 requirements collected from the literature [15–17] using the RWT. The tool proved effective in improving the selected requirements, as it removed 94% of the encountered natural language issues while keeping the original meaning of the requirements, as shown in Fig. 7.a. The issues of descriptiveness, plurality and wrong requirement structure were completely removed. The remaining 6% of unresolved issues all relate to the ambiguity encountered in some requirements (Fig. 7.b).

5 Conclusions and Future Works

The present paper addresses the current difficulty of applying rules and guidelines for writing correct requirements during their authoring; it provides means to avoid losses of time due to bad specifications emerging later in the system design process. The paper supplies two main contributions: (i) it provides a framework for the correct functional requirement specification, i.e. for writing requirement statements that can be verified early against pre-defined criteria for quality requirements; (ii) it presents a software tool (RWT) for creating well-defined requirement statements, offering the user the possibility to formulate “correct requirements by construction”.

The proposed framework adopts boilerplates as a means to standardize the language used for requirements. The boilerplates refer to system requirements and make a clear distinction between functions and constraints through the adoption of the SVOC (Subject – Verb – Object – Complement) form. Since writing a requirement while simultaneously adopting a correct syntax can be time consuming, the developed tool aims to support the user in the “correction by construction”. The proposed framework does not provide a means to verify requirements against all the required criteria, since some of them relate more to semantic aspects or to the entire set of requirements. However, the results show that the syntax implemented in the RWT allows requirements to be verified early against clarity, singularity, conformity to the pre-defined template, and descriptiveness.

Further developments will concern the refinement of the proposed framework with a well-defined process for the early verification and validation of requirements. This means that requirements might be verified against a larger number of criteria, including those related to semantic aspects or dependent on the available context constraints, e.g. clarity, feasibility, attainability. Advanced techniques such as artificial intelligence may be used. Furthermore, the RWT could be improved by differentiating the requirement structure for different types of requirements and according to the system design levels. Finally, a system development process could be carried out with and without the proposed framework in order to quantify its efficacy, e.g. in terms of time spent and errors avoided.

Acknowledgment. The authors acknowledge Mr. Francesco D’Ausilio for his support in designing and implementing the RWT during his master’s degree thesis at the IDEAS Lab.

References 1. Mittal, S., Diallo, S., Tolk, A.: Emergent Behavior in Complex Systems Engineering: A Modeling and Simulation Approach. Wiley, Hoboken (2018) 2. Camargo, M., Palominos, P., Marche, B., Toledo, F., Boly, V., Alfaro, M.: A multi-stakeholder system-based methodology to evaluate the needs of innovation ecosystems. Res. Eng. Des. 32(4), 489–506 (2021). https://doi.org/10.1007/s00163-021-00368-7 3. Brusa, E., Calà, A., Ferretto, D.: The methodology of systems engineering. In: Brusa, E., Calà, A., Ferretto, D. (eds.) Systems Engineering and Its Application to Industrial Product Development. Studies in Systems, Decision and Control, vol. 134, pp. 25–68. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-71837-8_3

4. International Organization for Standardization: ISO/IEC/IEEE 29148 (2018) 5. Wiesner, S., Peruzzini, M., Hauge, J.B., Thoben, K.D.: Requirements engineering. In: Stjepandi´c, J., Wognum, N., Verhagen, W.J.C. (eds.) Concurrent Engineering in the 21st Century, pp. 103–132. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-13776-6_5 6. Dick, J., Hull, E., Jackson, K.: Introduction. In: Dick, J., Hull, E., Jackson, K. (eds.) Requirements Engineering, pp. 1–32. Springer, Cham (2017) 7. Mannion, M., Keepence, B.: SMART requirements. ACM SIGSOFT Softw. Eng. Notes 20, 42–47 (1995) 8. Génova, G., Fuentes, J.M., Llorens, J., Hurtado, O., Moreno, V.: A framework to measure and improve the quality of textual requirements. Requir. Eng. 18, 25–41 (2013) 9. Requirements Working Group: INCOSE Guide for Writing Requirements. In: International Council on Systems Engineering. (INCOSE), San Diego, California, USA (2017) 10. RQA – QUALITY Studio. https://www.reusecompany.com/rqa-quality-studio. Accessed 24 Jan 2022 11. Bouskela, D., et al.: Formal requirements modeling for cyber-physical systems engineering: an integrated solution based on FORM-L and Modelica. Requir. Eng. (2021) 12. Dick, J., Llorens, J.: Using statement-level templates to improve the quality of requirements. In: 24th International Conference on Software & Systems Engineering and their Applications, Paris, France (2012) 13. RAT – AUTHORING Tools. https://www.reusecompany.com/rat-authoring-tools. Accessed 19 Jan 2022 14. MATLAB App Designer. https://it.mathworks.com/products/matlab/app-designer.html. Accessed 20 Jan 2022 15. Chen, R., Chen, C.-H., Liu, Y., Ye, X.: Ontology-based requirement verification for complex systems. Adv. Eng. Inform. 46, 101148 (2020) 16. Li, R., Verhagen, W.J.C., Curran, R.: Toward a methodology of requirements definition for prognostics and health management system to support aircraft predictive maintenance. Aerosp. Sci. Technol. 102, 105877 (2020) 17. Zheng, C., et al.: A requirement-driven architecture definition approach for conceptual design of mechatronic systems. Integr. Comput. Aided Eng. 26, 361–382 (2019)

Analysis of Guided Waves Dispersive Behavior for Damage Detection in Flat and Curved Composite Panels

Donato Perfetto, Alessandro De Luca(B), Giuseppe Lamanna, and Francesco Caputo

Department of Engineering, University of Campania “Luigi Vanvitelli”, via Roma, 29, 81031 Aversa, Italy
[email protected]

Abstract. The efficiency of a guided wave (GW)-based Structural Health Monitoring (SHM) system should be verified under increasing geometry complexity and under real environmental and operational conditions, following the building block approach. In this scenario, Finite Element (FE) modelling represents an effective tool to study GW propagation for SHM applications, allowing a substantial reduction of experimental campaign time and costs. In this work, the effect of the curvature (characteristic, for example, of aircraft, tanks and pipes) on GW propagation mechanisms was investigated through FE analyses. In particular, dispersion and slowness mechanisms were studied in flat and curved composite panels in the frequency range 100–300 kHz. The difference between the S0 mode velocities for the flat and curved plates is evident only at a low frequency (100 kHz), whilst the propagation is unaltered at higher frequencies. Specifically, for most of the frequencies (>100 kHz), the S0 mode propagation speed in the flat plate is higher than in the curved plate, while for the A0 mode no difference arises.

Keywords: Guided wave · Composites · Curvature effect · Structural Health Monitoring · Finite Element Analysis

1 Introduction

The growing focus on the integrity of primary and secondary structures [1, 2] has determined the need to monitor structures continuously, quickly and economically, in order to detect any damage at an early stage and prevent catastrophic failures. In the last decade, numerous studies have been devoted to broadening the knowledge related to the different fields of application of Structural Health Monitoring (SHM) techniques for their implementation on real structures [3]. Several technologies and methods can be followed for health monitoring purposes: among these, Lamb/guided waves (GW) play a key role in this field, thanks to their advantages in terms of low required power and long propagation distance (with low attenuation) [4, 5], even in composite materials.

This SHM approach is non-invasive and allows permanent and highly automated inspection through the use of piezoelectric sensors installed (surface-mounted in most cases) on the structure to be monitored. Equipping a structure with an SHM system can bring several advantages: improved maintenance/repair operations, quasi-real-time control of deficiencies [6, 7], and thus greater safety through continuous monitoring. As a drawback, the multi-modal (different modes of different order are superimposed) and dispersive (each mode propagates with a different speed depending on the excitation frequency) nature of Lamb waves [8] makes the interpretation of the recorded signals a tricky aspect. The post-processing phase is thus essential and requires a particular focus to extract specific features that can be linked to the actual state of the structure (presence of damage). Moreover, in composite components, GW also exhibit a fiber-related characteristic: the dependence on the propagation angle (slowness phenomenon).

Finite Element Analysis (FEA) allows designers to investigate GW propagation in simple or complex structures in depth, allowing substantial savings in experimental costs and time [9, 10]. Furthermore, thanks to FE modelling, it is possible to analyze aspects that can threaten the damage sensitivity, such as the influence of the real environmental (temperature, humidity, …) and operational (load, initial stress–strain state, …) conditions (EOC) [11, 12]. GW propagation characteristics can also be effectively explored in curved components. The effect of the curvature (characteristic, for example, of aircraft, tanks and pipes) could change the wave propagation mechanisms with respect to flat panels. For instance, Fong [13] examined the effect of the curvature through the comparison of phase velocity and shifts of wave modes, in a flat plate and in a curved plate, at each frequency. It was noted that the percentage difference of these properties increased exponentially with the curvature radii and was also dependent on the frequency. In addition, Wilcox [14] investigated analytically the curvature effects on wave propagation in aluminum panels. The author demonstrated that, when the ratio between the radius of curvature and the thickness of the plate is greater than 10:1, the effect of the curvature on Lamb wave propagation is negligible. Santana et al. [15] presented the effect that a high curvature-to-thickness ratio has on the propagation of Lamb waves in composite structures: in the curved portion of the structure, the speed of the symmetric wave mode S0 increases asymptotically with the radius, while the A0 mode shows the opposite behavior. Munian et al. [16] studied GW propagation in a delaminated curved structure. According to the observations, the sensitivity of the SHM system to damage detection was found to be negatively affected by the curvature, due to the induced mode conversion and to wave reflection and transmission phenomena.

As a result, the technologies behind SHM systems must be significantly improved to increase their sensitivity to damage detection also in curved structures, which are very common in fields such as marine, aerospace, automotive, etc. Thus, the understanding of the dispersion mechanisms in such components is essential.
In this work, rather than studying the effect of the curvature radius variation on GW, the attention is oriented to the study of the GW dispersive behavior in flat and curved composite panels, both made of carbon fiber reinforced plastic (CFRP) material. In particular, dispersion and slowness phenomena have been investigated at first in the flat panel; then, the attention has been paid to a curved panel, characteristic of a portion of a tube. This allows verifying any difference in the wave packet propagation speed introduced by a specific curvature at varying excitation frequency. The second part of the work aims to test the sensitivity of the GW-based SHM system in detecting damage in both flat and curved panels.

2 Test Articles and FE Modelling

The main goal of this work is to analyze the curvature effects on the GW dispersion behavior in composite panels. Two numerical models were created to replicate the GW propagation mechanisms in flat and curved panels. All analyses were carried out by using the FE code Abaqus® CAE explicit, while an in-house Matlab® script was used to perform the pre-processing operations (addressed to correctly define the modelling of the phenomenon, providing information on mesh size, time increment, actuation signal, etc.) and the post-processing operations (addressed to extract from the predicted signals the time of flight (ToF) of both the S0 and A0 modes, to calculate the GW group velocities [17] according to the known actuator–receiver distances, and to plot the dispersion curves).

The test articles investigated in this work are described below. The flat panel has a square shape (310 × 310 mm2), while the curved one is characterized by a width of 310 mm and a curvature radius of 167.5 mm (Fig. 1). Both panels have a thickness of 1.5 mm and are made up of 8 CFRP plies with a stacking sequence of [0, 90, +45, −45]S. The lamina material properties are shown in Table 1. The curved panel has a curvature radius to thickness ratio equal to 111.67, much greater than the one analyzed by Wilcox in [14]. A network of 5 piezoelectric transducers, one used as exciter and four as receivers, was modelled on the upper surface of the plates to generate and sense the diagnostic GW signal. The PZT diameter, dPZT, and thickness, tPZT, are 10 mm and 0.25 mm, respectively, while their technical specifications are reported in Table 1 [2, 18]. As visible from Fig. 1, the mapped area is square-shaped, with the actuator in a central position and the receivers at a 100 mm distance along the diagonals. This configuration permits the investigation of the slowness phenomenon.

For the model discretization, finite elements belonging to the Abaqus® library were considered: shell finite elements (S4R, 2D) with a mesh size of 1.0 mm for the panels, and solid finite elements (C3D8R) with a mesh size of 0.4 mm for the PZTs. The mesh size has been defined to achieve a number of nodes per wavelength (NPW) of 20 at a carrier frequency of 200 kHz, as described in [9, 10]. The choice of 2D elements for the modelling of GW propagation has been widely validated by the authors for both isotropic and composite components [10]. The tie constraint approach available in Abaqus® was used to simulate the bonding between the PZTs and the plate. Concerning the GW excitation, radial displacements, calculated through the aforementioned pre-processing script, were applied on the actuator upper edge [10]. A chirp excitation signal was used to analyze the dispersive response of GW in the selected frequency range [100:50:300 kHz]. Finally, the plate corners were constrained.
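The NPW mesh-size criterion can be made concrete with a small sketch: the element size follows from the wavelength of the mode of interest divided by the desired number of nodes per wavelength, and a rough stable time increment for explicit integration follows from a CFL-type estimate. The 4,000 m/s velocity used below is only a placeholder, chosen so that the example reproduces the 1.0 mm element size quoted in the text; it is not a value reported by the authors, and the actual criterion implemented in their Matlab script may differ.

```python
def mesh_and_time_step(freq_hz, phase_velocity, npw=20, bulk_velocity=None, safety=0.8):
    """Element size from the NPW criterion and a rough stable time increment.

    freq_hz        : carrier frequency of the excitation [Hz]
    phase_velocity : assumed propagation velocity of the mode of interest [m/s]
    npw            : desired number of nodes per wavelength
    bulk_velocity  : fastest wave speed used for the stability estimate [m/s]
    """
    wavelength = phase_velocity / freq_hz
    elem_size = wavelength / npw
    c_max = bulk_velocity if bulk_velocity is not None else phase_velocity
    dt = safety * elem_size / c_max        # CFL-type estimate for explicit integration
    return elem_size, dt

# Placeholder values: 200 kHz carrier and an assumed 4,000 m/s propagation velocity
size, dt = mesh_and_time_step(200e3, 4000.0)
print(f"element size ≈ {size*1e3:.2f} mm, time increment ≈ {dt*1e9:.0f} ns")
```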

Fig. 1. Geometry of the (a) flat and (b) curved panels (measures in [mm]).

Table 1. Properties of CFRP composite lamina and PZT sensors.

Material properties            | Symbol | Units          | CFRP lamina | PZT
Mass density                   | ρ      | [kg m−3]       | 1,534       | 7,850
Young's modulus                | E      | [GPa]          | –           | 76
Longitudinal Young's modulus   | E11    | [GPa]          | 105.125     | –
Transversal Young's moduli     | E22    | [GPa]          | 7.7         | –
                               | E33    | [GPa]          | 7.7         | –
Shear modulus                  | G      | [GPa]          | –           | 29
Shear moduli                   | G12    | [GPa]          | 3.6         | –
                               | G13    | [GPa]          | 3.6         | –
                               | G23    | [GPa]          | 2.7         | –
Poisson's ratio                | ν      |                | –           | 0.32
Poisson's ratio                | ν12    |                | 0.36        | –
                               | ν13    |                | 0.36        | –
                               | ν23    |                | 0.4         | –
Dielectric constant            | K3     |                | –           | 1,280
Piezoelectric charge constant  | d31    | [10−9 mm V−1]  | –           | −180

3 Dispersion and Slowness in Flat and Curved Panels

Results for both flat and curved panels are detailed in this section. Results have been post-processed in such a way as to better understand the influence of the curvature on GW propagation mechanisms and dispersive behavior.

Predicted signals are extracted from the model as the average of the in-plane strains numerically recorded by all the nodes of a specific sensor [9] and then converted through the in-house post-processing code. The code allows reconstructing the tone burst responses, using the procedure described in [19], in order to automatically identify the S0 and A0 modes. As mentioned, a chirp activation signal was used to study the GW dispersive behavior in the 100–300 kHz frequency range. The reconstruction of the signals has been performed in this range with a step of 50 kHz. The code then extracts the dispersion curves (Fig. 2). To obtain such curves, a quadratic interpolation of the data relating to each frequency and to each sensor has been considered. In particular, circle-shaped dots represent the extracted numerical values, while the solid line is a quadratic interpolation used to fit each set of data. Quadratic interpolation has been adopted according to the authors' previous research.

Fig. 2. Comparison of dispersion curves for the (a) S0 and (b) A0 modes in flat and curved plate.
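A minimal sketch of the post-processing step that produces such curves is given below: the time of flight of a mode is taken here as the peak of the Hilbert envelope of the reconstructed tone-burst response, the group velocity follows from the known actuator–receiver distance, and the frequency trend is fitted with a second-order polynomial as in Fig. 2. The envelope-peak picking is only one possible ToF estimator and is not necessarily the one implemented in the authors' Matlab code; the numbers in the synthetic check are invented.

```python
import numpy as np
from scipy.signal import hilbert

def group_velocity(signal, dt, distance, t_excitation=0.0):
    """Group velocity from the envelope peak of a received wave packet.

    signal       : sampled response at the receiver (already mode-separated)
    dt           : sampling interval [s]
    distance     : actuator-receiver distance [m]
    t_excitation : time of the excitation envelope peak [s]
    """
    envelope = np.abs(hilbert(signal))
    tof = np.argmax(envelope) * dt - t_excitation
    return distance / tof

def dispersion_fit(freqs_hz, velocities):
    """Quadratic fit of group velocity versus frequency, as used for Fig. 2."""
    return np.poly1d(np.polyfit(freqs_hz, velocities, deg=2))

# Synthetic check: a 200 kHz tone burst delayed by 20 microseconds over 0.1 m
dt = 1e-8
t = np.arange(0.0, 1e-4, dt)
delay, f0 = 20e-6, 200e3
burst = np.sin(2 * np.pi * f0 * (t - delay)) * np.exp(-((t - delay) / 5e-6) ** 2)
print(group_velocity(burst, dt, 0.1))          # ≈ 0.1 m / 20 µs = 5000 m/s

# Illustrative (invented) velocities fitted over the 100-300 kHz range
freqs = np.array([100e3, 150e3, 200e3, 250e3, 300e3])
vels = np.array([4900.0, 5050.0, 5150.0, 5200.0, 5230.0])
print(dispersion_fit(freqs, vels)(225e3))      # interpolated group velocity at 225 kHz
```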

Figure 2 shows the actual variation of the GW propagation velocity in the curved plate compared to the flat one. In detail, the difference between the S0 mode velocities (Fig. 2a) is evident only at a low frequency (100 kHz), whilst the extracted values are almost coincident at higher frequencies (see the red and blue dots in Fig. 2a). Specifically, for most of the frequencies (>100 kHz), the S0 mode propagation speed in the flat plate is greater than in the curved plate, in accordance with the theory [15]. Analyzing the A0 mode (Fig. 2b), it is possible to note that the curves are overlapped.

The results obtained highlight the important influence of the curvature on GW propagation in composites, especially at low frequency values, showing that, even when the radius-to-thickness ratio is higher than around 10:1, its influence on wave propagation cannot always be neglected, in accordance with the studies presented in [14]: for some cases, such as the curved composite plate herein examined, where the curvature radius to thickness ratio is around 111:1, the effect of the curvature on the group velocities can become significant at low excitation frequency values (Fig. 2).

4 Damage Detection

The study of damage detection in composites is particularly tricky due to the variation of the signal propagation velocity with the direction [20]. The aim here is to analyze the capability of the SHM system in revealing damage in a curved panel. Starting from the reference (i.e. undamaged) configuration of the two investigated panels (see Sects. 2 and 3), the damage has been introduced. To model the damage, a through hole with a radius of 10 mm has been considered, obtained by deleting the elements of the mesh according to the element deletion technique widely adopted in the authors' previous works [10, 21]. The damage (d1) has been positioned along the diagonal that connects sensor 1 with sensor 5 (coordinates of the center of the hole: (190, 190)), Fig. 3. This way, the location of the damage is internal to the sensor network and therefore in a critical area.

Fig. 3. Investigated damage position d1.

In particular, for damage detection purposes, the considered dataset refers to the condition in which the actuation signal was excited at a central frequency of 200 kHz. The damage sensitivity associated with this frequency, according to [22, 23], can be calculated as half of the wavelength of the considered wave mode. Thus, at this frequency the SHM system can detect damages with a size greater than 12.5 mm.
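A quick back-calculation makes the quoted threshold explicit (the mode to which it refers is not stated, so the implied velocity is only indicative):

λ = 2 × 12.5 mm = 25 mm,    c = λ · f = 0.025 m × 200 × 10³ Hz = 5000 m/s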

The dataset was recorded according to a round-robin approach, using a single actuator at a time. When a single sensor works as actuator, the others work as passive receivers. This scheme is iteratively repeated until all PZTs have been used as actuators. Moreover, since the interaction of GWs with damage induces reflection and refraction mechanisms, the recorded signals are affected by some features (amplitude and ToF variations) that are absent from the dataset acquired in the pristine configuration. Assessing, for each actuator–receiver path, the differences between the signals recorded in the pristine/reference and current configurations of the plates through a Damage Index (DI) [10] permits the identification of the damaged areas. For damage identification purposes, the comparison of the signals was performed only with reference to the S0 mode, which was extracted by the post-processing code. It can be concluded that the damage will be closer to the actuator–receiver path characterized by the highest DI. Conversely, the damage will be far from those paths with the lowest DIs. The DIs are automatically calculated by the post-processing script. The damage indices extracted from the two panels for the investigated d1 damage configuration were normalized and are shown in Fig. 4.
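The paper does not reproduce the DI formulation of [10], but a commonly used correlation-based index can serve as a hedged illustration of the comparison step: for each actuator–receiver path the baseline and current S0 packets are compared, and a value close to 1 indicates a strongly perturbed path.

```python
import numpy as np

def damage_index(baseline, current):
    """Correlation-based damage index for one actuator-receiver path.

    Returns 0 for identical signals and values approaching 1 as the
    current signal decorrelates from the baseline (one common choice,
    not necessarily the metric adopted in [10]).
    """
    b = (baseline - baseline.mean()) / baseline.std()
    c = (current - current.mean()) / current.std()
    rho = np.mean(b * c)                 # Pearson correlation coefficient
    return 1.0 - abs(rho)

def normalized_di_map(baselines, currents):
    """DI for every path, normalized so that the most affected path equals 1."""
    di = np.array([damage_index(b, c) for b, c in zip(baselines, currents)])
    return di / di.max()                 # assumes at least one perturbed path
```

Paths whose normalized DI approaches 1 (the PZT1–PZT5 path in Fig. 4) are then interpreted as the closest to the damage, as discussed below.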

Fig. 4. DI evaluation (damage of 20 mm in position d1) in (a) flat and (b) curved panel – 200 kHz reconstruction carrier.

As visible from Fig. 4, the set of DIs allows identifying the most affected paths: the PZT1–PZT5 propagation path corresponds to DI = 1, and this result is consistent with the actual damage position. The same considerations can be drawn for both the flat and curved panels, considering that at a 200 kHz carrier frequency the GW propagation characteristics are almost identical (Fig. 2).

5 Concluding Remarks

The paper presents the development of a modelling technique, based on the FE method, to investigate the effects of the curvature, which characterizes for example aircraft, tanks, pipes, etc., on GW propagation mechanisms. In this work, rather than studying the effect of the curvature radius variation on GW, the attention is focused on the study of the GW dispersive behavior in both flat and curved CFRP panels. The proposed model builds on a methodology extensively validated by the authors in previous works. Specifically, the simulations highlighted a variation of the S0 mode propagation velocity in the curved plate compared to the flat one in the 100–150 kHz frequency band, whilst for the A0 mode no change has been observed when varying the excitation frequency in the curved panel with respect to the flat plate. All this suggests activating guided waves at a higher frequency, i.e. 200 kHz, in order to perform the structural inspection by means of a reliable and general-purpose SHM system characterized by a steady damage sensitivity with respect to the curvature. As widely discussed, in fact, the damage sensitivity, which can be represented in terms of damage indices, appears to be unaltered under a 200 kHz actuation signal, where the predicted damage indices have been found to be very similar for both flat and curved panels. Further numerical analyses will be performed to investigate the effect of the adhesive layer on GW propagation mechanisms.

Acknowledgements. This research was funded by the University of Campania “Luigi Vanvitelli” in the framework of the “SAFES—Smart pAtch For active Shm” funded research project, as part of the V:ALERE 2020 program.

References 1. Caputo, F., Lamanna, G., Perfetto, D., Chiariello, A., Di Caprio, F., Di Palma, L.: Experimental and numerical crashworthiness study of a full-scale composite fuselage section. AIAA J. 59, 700–718 (2021) 2. Boukabache, H., Escriba, C., Fourniols, J.-Y.: Toward smart aerospace structures: design of a piezoelectric sensor and its analog interface for flaw detection. Sensors 14, 20543–20561 (2014) 3. Ricci, F., Monaco, E., Boffa, N.D., Maio, L., Memmolo, V.: Guided waves for structural health monitoring in composites: a review and implementation strategies. Prog. Aerosp. Sci. 129, 100790 (2022) 4. Rocha, H., Semprimoschnig, C., Nunes, J.P.: Sensors for process and structural health monitoring of aerospace composites: a review. Eng. Struct. 237 (2021) 5. Gorgin, R., Luo, Y., Wu, Z.: Environmental and operational conditions effects on Lamb wave based structural health monitoring systems: a review. Ultrasonics 105, 106114 (2020)

6. Rose, J.L.: A baseline and vision of ultrasonic guided wave inspection potential. J. Press. Vessel Technol. Trans. ASME 124, 273–282 (2002) 7. Su, Z., Ye, L.: Identification of Damage Using Lamb Waves. Springer, London (2009). https:// doi.org/10.1007/978-1-84882-784-4 8. Wilcox, P.D.: A rapid signal processing technique to remove the effect of dispersion from guided wave signals. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 50, 419–427 (2003) 9. De Luca, A., Perfetto, D., Lamanna, G., Aversano, A., Caputo, F.: Numerical investigation on guided waves dispersion and scattering phenomena in stiffened panels. Materials 15, 74 (2021) 10. Perfetto, D., De Luca, A., Perfetto, M., Lamanna, G., Caputo, F.: Damage detection in flat panels by guided waves based artificial neural network trained through finite element method. Materials 14, 7602 (2021) 11. Salmanpour, M.S., Sharif Khodaei, Z., Aliabadi, M.H.: Airborne transducer integrity under operational environment for structural health monitoring. Sensors 16, 2110 (2016) 12. De Luca, A., Perfetto, D., Caputo, F., Petrone, G., De Fenza, A.: Numerical simulation of guided waves propagation in loaded composite structures. In: AIP Conference Proceedings, p. 020029 (2020) 13. Fong, K.L.J.: A Study of Curvature Effects on Guided Elastic Waves (2005). https://www. imperial.ac.uk/media/imperial-college/research-centres-and-groups/non-destructive-evalua tion/Fong-K-L-J.PDF 14. Wilcox, P.D.: Lamb Wave Inspection of Large Structures using permanently attached transducers (1998). https://www.imperial.ac.uk/media/imperial-college/research-centres-and-gro ups/non-destructive-evaluation/Wilcox-P.PDF 15. Santana, G.A., Malekan, M., Araújo, A.M., Donadon, L.V., Cimini Jr., C.A.: Finite element evaluation of the effects of curvature in Lamb waves for composites structural health monitoring. Latin Am. J. Solids Struct. 15, 1–21 (2018) 16. Munian, R.K., Roy Mahapatra, D., Gopalakrishnan, S.: Ultrasonic guided wave scattering due to delamination in curved composite structures. Compos. Struct. 239, 111987 (2020) 17. Perfetto, D., De Luca, A., Lamanna, G., Caputo, F.: On the prediction of guided wave dispersion curves in plates for health monitoring applications. Macromol. Symp. 396, 1–4 (2021) 18. Wang, T.: Finite element modelling and simulation of guided wave propagation in steel structural members (2014). http://handle.uws.edu.au:8081/1959.7/uws:32310 19. Michaels, J.E., Lee, S.J., Croxford, A.J., Wilcox, P.D.: Chirp excitation of ultrasonic guided waves. Ultrasonics 53, 265–270 (2013) 20. Gao, F., Hua, J.: Damage characterization using CNN and SAE of broadband Lamb waves. Ultrasonics 119, 106592 (2022) 21. De Fenza, A., Sorrentino, A., Perfetto, D., De Luca, A., Petrone, G., Caputo, F.: Elliptical triangulation method for damage identification on a complex aeronautical structure. In: Structural Health Monitoring 2019 - Enabling Intelligent Life-Cycle Health Management for Industry Internet of Things (IIOT), Proceedings of the Twelfth International Workshop on Structural Health Monitoring (2019) 22. Lee, B.C., Staszewski, W.J.: Modelling of Lamb waves for damage detection in metallic structures: Part I. Wave propagation. Smart Mater. Struct. 12, 804–814 (2003) 23. Lee, B.C., Staszewski, W.J.: Modelling of Lamb waves for damage detection in metallic structures: Part II. Wave interactions with damage. Smart Mater. Struct. 12, 815–824 (2003)

Analysis of the Tensioning Field Induced by Stress Fibers in Nanoindented Stem Cells Adhered to a Flat Substrate

Lorenzo Vaiani(B), Antonio Emmanuele Uva, and Antonio Boccaccio

Dipartimento di Meccanica, Matematica e Management, Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy
[email protected]

Abstract. The study of predictive models describing the biological processes that relate extra-cellular mechanical stimuli to the structural responses of living cells, or even to differentiation, as in the case of mesenchymal stem cells (MSCs), is a relevant aspect of mechanobiology. A preliminary phase for these studies is the assessment of the mechanical behavior of whole living cells or of their subcellular components, often performed by means of Atomic Force Microscopy (AFM). In this study we developed a numerical optimization framework aimed at matching the computed results obtained from a sequence of FEM simulations to an experimental AFM report curve associated with the MSC under investigation, in order to extract the elastic parameters of the subcellular components and to assess how the mechanical response changes if the stress fibers network present in the interior of the cell is activated or not. By means of the proposed study, we extracted a set of Young’s moduli for the main subcellular components, which were comparable to the values computed by means of the Hertzian contact theory and also in good agreement with the related literature. By neglecting the effect of the tensioning pre-stress field induced by the stress fibers network, an underestimation of the Young’s moduli of the subcellular components, up to 15% in magnitude, was obtained.

Keywords: Mechanobiology · Stem cells · Stress fibers · Mechanical simulation · Finite element method

1 Introduction The knowledge of the complex regulation mechanisms of biological processes in living cells, linking mechanical stimuli to internal signals capable of triggering a variety of structural responses, represents the core of current mechanobiological studies [1]. By adopting a multiscale approach to a physiological loading path, it can be observed that when an external force is applied on a muscle, it is transferred to the tissues, then to the sensing units of cells that induce a series of biophysical signals, thus enabling a sequence of biochemical reactions to generate a precise structural response or even a differentiation, as in the case of mesenchymal stem cells (MSCs). An important achievement in
mechanobiology is the prediction of such responses in quantitative terms, given a known combination of forces or displacements applied on an arrangement of cells composing a biological tissue [2]. The study and definition of predictive physical/mathematical models currently represent a relevant research field, ultimately leading to understanding and verifying mechanobiological relationships useful for preventing and treating congenital deformities, for studying tissue regeneration in fracture healing and for designing high-performance implants from a biomechanical point of view [3]. Many mechanobiological models for mesenchymal stem cells exist in the literature, presenting specific relationships relating biophysical stimuli to consequent differentiation paths into different phenotypes [4]. The starting point for all these mechanobiological studies is the assessment of the mechanical properties of living cells or even of their subcellular components in all the possible loading cases, and the comparison of the computed values to the reference values presented in the literature. A widely adopted experimental technique for probing living cells or biological tissues is Atomic Force Microscopy (AFM), in which a spherical, conical or pyramidal nanoindenter placed at the tip of a flexible cantilever is driven into the surface of the sample under investigation [5]. The motion of the nanoindenter and the reaction force exerted by the sample are measured and recorded for subsequent time steps, thus composing a force-indentation curve containing all the useful information on sample stiffness. The force-indentation report curve must be post-processed by means of an “extraction” procedure, from which one or more constitutive parameters (e.g. Young’s moduli), associated with a whole cell sample or even with its subcellular elements, should be computed in order to perform comparisons with tabular data present in the literature and hence deduce useful information from a biological point of view. Many extraction approaches exist, ranging from the easiest to implement, such as the analytical Hertzian contact theory, to the most accurate, such as parametric finite element analyses or discrete element simulations [6–8], which consider rigorous geometries and dimensions, offering in addition the chance of being integrated within numerical optimization algorithms [9–14]. The proposed study is based on an optimized and iterative finite element simulation, aiming at extracting the elastic parameters of subcellular components of a mesenchymal stem cell, starting from an experimental AFM report curve and considering many aspects neglected by the Hertzian theory. The first assumption in the presented study is the definition of a topological configuration for a stem cell adhered to a flat substrate, to be considered as the undeformed and self-equilibrated configuration resulting from a structural cell remodelling [8] (see Fig. 1). Subsequently, a pre-tensioning field, induced by a network of internal stress fibers and necessary for ensuring the adhesion of the cell to a substrate [15], was applied, followed by an additional stress field superimposed by the translation of a spherical nanoindenter into the cell volume.
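As an aside, the simplest of these extraction routes, the Hertzian fit for a spherical tip, can be sketched in a few lines. The example below is a generic illustration with synthetic data and assumed values (5 µm tip radius, incompressible sample), not the processing pipeline adopted in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 5e-6    # indenter radius [m] (5 um sphere, as in this study)
NU = 0.5    # assumed Poisson's ratio of the cell (incompressibility)

def hertz_force(delta, young_modulus):
    """Hertz contact force of a rigid sphere indenting an elastic half-space."""
    return (4.0 / 3.0) * young_modulus / (1.0 - NU**2) * np.sqrt(R) * delta**1.5

# Synthetic force-indentation data standing in for an AFM report curve
# (indentation up to 200 nm, forces in the nN range).
delta = np.linspace(0.0, 200e-9, 100)              # indentation depth [m]
force = hertz_force(delta, 4000.0)                 # "measured" force for E = 4 kPa
force += np.random.normal(0.0, 5e-11, force.size)  # measurement noise [N]

E_fit, _ = curve_fit(hertz_force, delta, force, p0=[1000.0])
print(f"Extracted Hertzian Young's modulus: {E_fit[0]:.0f} Pa")
```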
A numerical framework was developed, for comparing the results achieved from the experimental AFM test to the numerical results obtained by means of FEM simulations, to characterize the mechanical behavior, in terms of elastic constants, proper of the subcellular components constituting the investigated MSC, and to assess how their behavior changes if the stress fibers network present in the interior of the cell is activated or not [16].


2 Materials and Methods 2.1 Modelling and Simulation Isolated and cultured MSCs from human primary material were used for AFM indentation tests. After seeding of the mononuclear cell fraction, evolving colonies were separated and MSCs further expanded in plastic culture flasks, and then the osteogenic differentiation of the cells was induced. The experimental testing was performed by using an AFM setup (Nano Wizard, JPK Instruments, Berlin) and an inverted optical microscope (Zeiss Axiovert 200), as previously described in Vaiani et al. [13]. The maximum indentation value was set to be less than 200 nm and, for this displacement range, the reaction forces exerted by the samples were always less than 5 nN.

Fig. 1. Simulation model for the AFM nanoindentation test and a detail of the stress fibers arrangement.

The experimental setup for AFM nanoindentations was modelled in Simulia Abaqus environment. Two main parts were considered, as depicted in Fig. 1: a spherical nanoindenter and a deformable mesenchymal stem cell adhered to a flat substrate. The nanoindenter was modelled as a hemispherical analytical rigid surface presenting a radius of 5 µm, while a quarter of a MSC, presenting an overall diameter equal to 50 µm and a total height of 13 µm [17] was modelled and discretized in finite elements, in order to study the deformation fields and the relative stress fields. The considered topology and the related dimensions were assumed as representative of a whole class of cells, as a synthesis of all the geometrical data retrieved from the scientific literature regarding experimental studies in cell mechanobiology. An improved modelling stage would be based on Scanning Electron Microscopy imaging, for reproducing more rigorous profiles and geometries for cell clusters or even for single cells to test, with the further possibility to introduce the surface roughness as an additional feature, for a highly detailed description and modelling of real nanoindented cells [9]. A set of subcellular components was defined: the cell nucleus, the cytoskeleton, the cortex and the stress fibers arrangement. Nucleus and cytoskeleton were mathematically modelled through 4-node tetrahedral elements with hybrid formulation (C3D4H
elements), whereas the cell cortex was modelled in its thickness, set equal to 200 nm, through a finer mesh of 6-node triangular prisms with hybrid formulation (C3D6H elements). Three different groups of stress fibers were introduced, as shown in the detail view of Fig. 1: ventral stress fibers, dorsal stress fibers, and transverse arcs. They were modelled as pre-tensioned cables subjected to a pulling force of 10 nN [18, 19]. The contact behavior of the interaction between the cell and the nanoindenter was assumed frictionless; symmetry constraints were imposed on the flat surfaces of the cell quarter and an encastre constraint was applied to the lower flat plane of the cell, to represent the general adhesion condition and the action of the focal adhesions, which are specialized protein complexes present at the root of the several stress fibers, responsible for the mechanical coupling between the cell and the extracellular matrix [20, 21]. The mechanical behaviors of all the subcellular components were considered hyperelastic, implementing the Arruda–Boyce constitutive law because, according to the scientific literature, among all the hyperelastic laws (Ogden, Mooney-Rivlin and others) it is an adequate choice to describe the mechanical behavior of a stem cell subjected to AFM nanoindentations in the elastic regime [16]. This law considers a strain energy function in terms of two different constants: the shear modulus μ8chain and the distensibility λL, which were assigned to define two different materials for the cortex volume and for the cytoskeleton volume. The corresponding Young’s moduli for the single subcellular components E AB were defined as E AB = 2(1 + ν)·μ8chain, where ν is the Poisson’s ratio, set in this study equal to 0.4999 in order to impose the incompressibility condition proper of biological materials.
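As a minimal numerical illustration of the relation E AB = 2(1 + ν)·μ8chain (a sketch only; the shear moduli used below are merely of the order of those identified later in this study):

```python
def young_from_shear(mu_8chain, nu=0.4999):
    """Young's modulus from the Arruda-Boyce shear modulus: E = 2 * (1 + nu) * mu."""
    return 2.0 * (1.0 + nu) * mu_8chain

# Illustrative shear moduli [Pa], of the order of those identified for the
# cortex and the cytoskeleton in this study.
for label, mu in (("cortex", 3243.0), ("cytoskeleton", 1067.0)):
    print(f"{label}: mu_8chain = {mu:.0f} Pa -> E_AB = {young_from_shear(mu):.0f} Pa")
```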

Fig. 2. Simulation steps: initial AFM setup, first vertical displacement of the cell tip induced by stress fibers tensioning and overall displacement due to the indentation.

Subsequently, starting from the constrained and undeformed configuration of the mesenchymal stem cell shown in Fig. 1, two simulation steps were defined, as illustrated in Fig. 2: – a tensioning phase, reproducing the pre-stress field generated by the action of the stress fibers network during the adhesion process activated by the cell, starting from the self-equilibrated configuration obtained as a result of a previous structural cell remodelling, until reaching a fully adhered configuration. In this step, a pulling force with a magnitude of 10 nN was applied in downward direction to each stress fiber,
causing a vertical displacement of the cell tip δ SF , which resulted, for the considered range of elastic constants, in the order of hundreds of nanometers; – an indentation phase, describing the real AFM test, modelled by moving the rigid spherical nanoindenter into the surface of the mesenchymal stem cell, thus superimposing a second vertical displacement δ AFM , equal to 200 nm, to the previous one. In this step, the pulling force exerted by the stress fibers network was kept as constant, while the nanoindentation motion was imposed to the AFM bead. At the end of the simulation cycle, the overall vertical displacement experienced by the cell tip and the coupled spherical nanoindenter was: δ TOT = δ SF + δ AFM , as depicted in Fig. 2. 2.2 Optimization The whole simulation cycle was embedded into an iterative optimization algorithm, with the primary goal of fitting the considered experimental curve (see Fig. 3) below a pre-defined threshold for errors (stop condition). After this numerical matching, the extraction of the Arruda-Boyce constants, as the shear modulus μ8chain , the distensibility λL , and the Young’s moduli E AB proper of the considered subcellular components (i. e. the cell cortex and the cytoskeleton) was performed. In a subsequent phase, the influence of the pre-stress tensioning field on the elastic properties of subcellular components was evaluated, by deactivating and then gradually activating the pulling force to its maximum value in each stress fiber into the deformable cell model [16].
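The overall structure of such a fitting loop can be sketched as follows. Here the FEM cycle is replaced by a cheap analytical surrogate so that the example is self-contained and runnable; all parameter names, values and tolerances are hypothetical rather than those of the actual Abaqus-based framework.

```python
import numpy as np
from scipy.optimize import minimize

depths = np.linspace(1e-9, 200e-9, 50)   # sampled indentation depths [m]

def simulated_force(params, d):
    """Stand-in for one FEM cycle (tensioning + indentation steps).

    In the real framework this step launches a finite element job with the
    current material constants; a smooth analytical surrogate is used here
    only to keep the sketch executable.
    """
    mu_cor, mu_cyt = params
    return 1.5e-6 * mu_cor * d + 1.0e-2 * mu_cyt * d**1.5

# "Experimental" AFM curve: surrogate evaluated at reference constants + noise.
rng = np.random.default_rng(0)
exp_force = simulated_force((3200.0, 1100.0), depths)
exp_force += rng.normal(0.0, 2e-11, exp_force.size)

def rmse(params):
    """Error between the numerical and the experimental force curves."""
    return np.sqrt(np.mean((simulated_force(params, depths) - exp_force) ** 2))

result = minimize(rmse, x0=np.array([1000.0, 500.0]), method="Nelder-Mead",
                  options={"xatol": 0.5, "fatol": 1e-14, "maxiter": 2000})
print("Identified (mu_cortex, mu_cytoskeleton) [Pa]:", np.round(result.x, 1))
```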

Fig. 3. AFM report curve for the investigated sample and numerical fitting.

The simulations were executed on a HP XW6600-Intel® Xeon® Dual-Processor E5–5450, 3 GHz, 32 GB RAM workstation. In a single optimization run, approximately 70 simulation cycles were executed in order to enter the threshold range for errors and hence to retrieve the optimal elastic constants for the modelled subcellular components. On average, each simulation cycle had a duration of about 3.5 h.


3 Results and Discussion The proposed computational framework, consisting of a series of iterated FEM simulation cycles embedded into an optimization algorithm, predicted a set of elastic properties (i.e. shear moduli, distensibilities and Young’s moduli) of subcellular components that were in good agreement with those reported in the related literature [22, 23]. By applying the AFM experimental technique, Kiss et al. [24] found that the elastic moduli proper of human stem cells are in the range 0.05–10 kPa.

Table 1. Optimized elastic parameters computed for deactivated and activated stress fibers, compared to those computed through Hertz contact theory.

            FEM, Arruda-Boyce                                                                              Hertz theory (avg.)
            Cortex                                              Cytoskeleton
            μ8chain,cor [Pa]   λL,cor    E AB,cor [Pa]          μ8chain,cyt [Pa]   λL,cyt    E AB,cyt [Pa]  E H [Pa]
No SF       3243               3.088     9728                   1067               3.060     3203           4110
SF active   3251               3.070     9754                   1173               3.063     3519           –

In detail, the value of the Young’s modulus E AB,cyt proper of the cytoskeleton, computed by means of the proposed numerical framework, was about half of the value of the Young’s modulus E AB,cor proper of the cell cortex, and it was comparable to the averaged Young’s modulus E H, computed by means of the Hertzian contact theory for the considered setup, the cytoskeleton being the prevalent material constituting the cell, as presented in Table 1. Regarding the effect of the tensioning pre-stress field induced by the stress fibers pulling, it was possible to notice that neglecting this contribution leads to an underestimation of the Young’s moduli of subcellular components, up to 15% in magnitude. When the tensioning field induced by the stress fibers network is considered in the analysis, lower values for the reaction force on the nanoindenter are measured for every displacement, with respect to the case when the stress fibers action is deactivated. A possible explanation of this phenomenon can be drawn by considering that the stress fibers network generates an elastic tensile field in the contact region, and when the nanoindenter enters the cell surface, it pushes the upper region of the cell downwards, helping it reach the equilibrium condition it had before the tensioning phase. In detail, during its vertical motion, the nanoindenter probably induces a sort of “restoring” compressive field with an opposite sign, and this might be the field required for decreasing the magnitude of the tensile stress state in the upper region of the cell resulting from the previous action of the stress fibers network. In conclusion, the proposed simulation and optimization framework had the capability of extracting a set of elastic constants proper of different subcellular components of a mesenchymal stem cell, even in the case of a superimposed pre-stress field induced by a
stress fibers network. In particular, if the action of the focal adhesions and the stress fibers network is not considered, it is possible to observe an increase of the force required to drive the spherical nanoindenter into the deformable cell, and a reduction in the magnitude of the elastic constants proper of all the several subcellular components (i.e. cortex and cytoskeleton) that were modelled in the proposed study. Acknowledgments. We gratefully acknowledge the support offered by the Italian Ministry of Education, University and Research under the Programme PON R&I 2014–2020 and FSC (Project ‘CONTACT’, ARS01_01205) for the funding received.

References 1. Bao, G., Suresh, S.: Cell and molecular mechanics of biological materials. Nat. Mater. 2(11), 715–25 (2003) 2. Lim, C.T., Zhou, E.H., Quek, S.T.: Mechanical models for living cells - a review. J. Biomech. 39(2), 195–216 (2006) 3. Van der Meulen, M.C., Huiskes, R.: Why mechanobiology? A survey article. J. Biomech. 35(4), 401–414 (2002) 4. Boccaccio, A., Ballini, A., Pappalettere, C., Tullo, D., Cantore, S., Desiate, A.: Finite element method (FEM), mechanobiology and biomimetic scaffolds in bone tissue engineering. Int. J. Biol Sci. 7(1), 112–32 (2011) 5. Kuznetsova, T.G., Starodubtseva, M.N., Yegorenkov, N.I., Chizhik, S.A., Zhdanov, R.I.: Atomic force microscopy probing of cell elasticity. Micron 38, 824–833 (2007) 6. Vichare, S., Inamdar, M.M., Sen, S.: Influence of cell spreading and contractility on stiffness measurements using AFM. Soft Matter 8, 10464–10471 (2012) 7. Vargas-Pinto, R., Gong, H., Vahabikashi, A., Johnson, M.: The effect of the endothelial cell cortex on atomic force microscopy measurements. Biophys. J. 105, 300–309 (2013) 8. Vassaux, M., Milan, J.L.: Stem cell mechanical behaviour modelling: substrate’s curvature influence during adhesion. Biomech. Model. Mechanobiol. 16(4), 1295–1308 (2017). https:// doi.org/10.1007/s10237-017-0888-4 9. Boccaccio, A., Uva, A.E., Papi, M., Fiorentino, M., De Spirito, M., Monno, G.: Nanoindentation characterisation of human colorectal cancer cells considering cell geometry, surface roughness and hyperelastic constitutive behaviour. Nanotechnology 28, 1–18 (2017) 10. Boccaccio, A., et al.: Nanoscale characterization of the biomechanical hardening of bovine zona pellucida. J. R. Soc. Interface 9(76), 2871–2882 (2012) 11. Boccaccio, A., et al.: A hybrid characterization framework to determine the visco-hyperelastic properties of a porcine zona pellucida. Interface Focus 4(2), 20130066 (2014) 12. Boccaccio, A., Papi, M., De Spirito, M., Lamberti, L., Pappalettere, C.: Effect of the residual stress on soft sample nanoindentation. Appl. Phys. Lett. 102, 1–5 (2013) 13. Vaiani, L., et al.: Coarse-grained elastic network modelling: a fast and stable numerical tool to characterize mesenchymal stem cells subjected to AFM nanoindentation measurements. Mater. Sci. Eng. C 121, 111860 (2021) 14. Vaiani, L., Fiorentino, M., Gattullo, M., Manghisi, V.M., Uva, A.E., Boccaccio, A.: A coarsegrained lattice spring model to characterize nanoindented stem cells. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 623–629. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_62 15. Burridge, K., Guilluy, C.: Focal adhesions, stress fibers and mechanical tension. Exp. Cell Res. 343, 14–20 (2016)


16. Migliorini, E., et al.: Nanoindentation of mesenchymal stem cells using atomic force microscopy: effect of adhesive cell-substrate structures. Nanotechnol. 32, 215706 (2021) 17. Su, X., et al.: Nanomorphological and mechanical reconstruction of mesenchymal stem cells during early apoptosis detected by atomic force microscopy. Biol. Open 9(3), bio048108 (2020). https://doi.org/10.1242/bio.048108 18. Deguchi, S., Ohashi, T., Sato, M.: Tensile properties of single stress fibers isolated from cultured vascular smooth muscle cells. J. Biomech. 39, 2603–2610 (2006) 19. Deguchi, S., Sato, M.: Biomechanical properties of actin stress fibers of non-motile cells. Biorheology 46, 93–105 (2009) 20. Liu, Y., et al.: Nanoparticle tension probes patterned at the nanoscale: impact of integrin clustering on force transmission. Nano Lett. 14, 5539–5546 (2014) 21. De Beer, A.G.F., Cavalcanti-Adam, E.A., Majer, G., Lopez-García, M., Kessler, H., Spatz, J.P.: Force induced destabilization of focal adhesions at defined integrin spacings on nanostructured surfaces. Phys. Rev. E 81, 1–7 (2010) 22. Liang, X., Shi, X., Ostrovidov, S., Wu, H., Nakajima, K.: Probing stem cell differentiation using atomic force microscopy. Appl. Surf. Sci. 366, 254–259 (2016) 23. Chen, Q., et al.: AFM studies of cellular mechanics during osteogenic differentiation of human amniotic fluid-derived stem cells. Anal. Sci. 26, 1033–1037 (2010) 24. Kiss, R., et al.: Elasticity of human embryonic stem cells as determined by atomic force microscopy. J. Biomech. Eng. 133, 1–10 (2011)

Framework for the Dynamic Modeling of a Helicopter Planetary Gear Train with Gear or Bearing Defect Raphaël Mignot-Pasty1,2(B) , Lionel Roucoules2 , François Malburet2 , and Olivier Honnorat1 1 Arts et Metiers Institute of Technology, HESAM Université, LISPEN,

13617 Aix-en-Provence, France [email protected] 2 Airbus Helicopters, Aéroport Marseille Provence, 13725 Marignane, France

Abstract. Gear and bearing dynamics have been independently and widely studied over the past decades in the scope of different applications such as the automotive, energy or aeronautical industries. Research on the dynamic behavior of gear systems under many failure cases has also been conducted: spalling, eccentricity or teeth breakage are often depicted, and their influence on the dynamic behavior and vibrations is of great interest for both researchers and industry, especially with a view to predictive maintenance. This research aims at enhancing the vibration detectability of a particular defect which has not been much considered in literature models, namely a gear rim crack. This particular defect can be initiated on integrated bearing races which are encountered on complex transmission systems: planetary gear trains of helicopter gearboxes. The added value of this paper lies in proposing a framework gathering models from the literature, analytical approaches or FE models which deal with the different sub-systems composing the gear train (planet gear, spherical bearing and housing components…) with or without defect. The framework offers the possibility to assemble different models in order to assess the global behavior. The assembly is selected based on tradeoffs between the computation efficiency and the behavior fidelity. Coupling and interoperation methods between sub-models are studied to guarantee the modularity and upgradability of the model, allowing the method to be transposed to future architectures. Keywords: Dynamic modeling · Modeling framework · Coupling · Planetary gear train · Defect

1 Introduction Helicopter Main Gearboxes (MGB) are complex systems gathering several mechanical transmission components such as gears (spur, helical and bevel gears), bearings, splines, shafts and housings. MGB are highly critical systems as they represent the sole link between the Main Rotor (MR), which provides the aircraft thrust and lift, and the
aircraft structure. Different types of failure can occur in such systems, many of them often encountered in the literature, such as tooth root cracks. Some defects remain specific to the particular components of helicopter gearboxes, such as the one illustrated in Fig. 1: it consists of a crack on a planet gear which integrates a bearing race. With this specific configuration, a crack can be initiated at the integrated bearing rolling race and propagate through the gear rim. Such a crack could endanger the gearbox integrity. Preventing such a failure is a major industrial issue that needs to be addressed: the identification of the defect signature through vibration monitoring is one of the main research roadmaps to tackle this problem, especially since the existing monitoring strategies failed to detect the defect. Modeling the dynamic behavior of the MGB with defects would enhance the understanding of the defects’ impact on the vibration, monitored by accelerometers positioned in fix-reference outside the ring gear.

Fig. 1. Helicopter epicyclic module with defect study case (PGC = Planet Gear Carrier; PG = Planet Gear; SG = Sun Gear; RG = Ring Gear)

Much research has already considered the independent dynamic behaviors of gears and bearings with different methods such as Lumped Parameter Models (LPM) [1], Finite Element Models (FEM) [2] or hybrid approaches [3]. Several defects are also widely studied: cracks and spalls on gear teeth (but not in the gear rim) [4]; geometrical errors such as eccentricity [1]; bearing spalls [5]. Nevertheless, research effort is still needed to combine the sub-systems’ local influences into the gearbox global behavior. This need is particularly critical considering the location of the involved crack: at the direct interface of the planet gear and the bearing.


In this paper, a modeling framework method is proposed to define the required couplings and interactions between local models of gears, bearings and defects of distinct natures and from different origins to predict the global system dynamic behavior.

2 Main Concepts Definition and State of the Art 2.1 Integration Methods The considered model-assembly approach introduces the question of how models will exchange information or results: we define the communication strategy between two or more models as the method of Integration, or Interoperability [6]. Three integration methods will be considered in this paper to connect a structure 1 and a structure 2, see Fig. 2. Each structure comprises components of different natures (e.g. a component A or B for the structure 1) to be “connected” with those of the counterpart structure. These connections imply “translation” operations modifying the exchanged data to make it usable by the other structure, at the expense of a certain loss of information. Translations are performed using either loose or tight coupling, see part 2.2.


Fig. 2. Integration methods: data-exchange between sub-models

Federation is the first integration method: it involves direct exchange between models. Communication between N models will require N (N − 1)/2 translations in a tightly coupled approach or N (N − 1) in a loosely coupled one. The next method is the Mediation: communication between models is indirect and involves a mediation structure. Each model communicates with the mediator whose role is to translate all the collected information with a common syntax. The amount of translations is proportional to N . The last method consists in the Full integration of all the models into one: all the structures are modified to become an intrinsic part of the integrated structure, fully compliant with the rest of it. No translation is required while operating.


2.2 Coupling Definition Coupling notion is used to define the way equations of interoperating models will be solved, with a specific look at the solving sequence. It is introduced by Schoinochoritis et al. [7] for thermomechanical studies. In a coupled approach, thermal and mechanical sub-models are executed in parallel and exchange information at each time step. Conversely, an un-coupled approach would consist in running models independently without interaction: the computation time is optimized but the fidelity of the representation remains limited. Chen et al. qualify the coupling by distinguishing tight and loose couplings [8], see Fig. 3.

Fig. 3. Comparing loose and tight coupling approaches, adapted from Chen et al. [8]

In their multi-physics study, loose and tight coupling are used to solve the separated equations sequentially. With Loose coupling, each equation is independent and solved successively by passing one or more results/parameters (highlighted in Fig. 3) to the other equation. Loosely coupled models exist independently of the others. In the Tight coupling, an iteration loop is introduced between the two equations in order to increase their temporal dependency and to make their results converge more accurately. Equations of each model are then solved together at each time step as one. Results provided by the latter method offer more fidelity to the representation at the expense of a higher computation time. The appropriate coupling solution is used as the translation operation in the selected integration scheme, depending on the desired balance between fidelity and efficiency. Coupling and Integration methods are cross-evaluated in Fig. 4 following three criteria retained for our study: the Modularity of the method, i.e. the capability to replace the involved models with alternative versions and to allow the assembly to evolve; the Fidelity of the representation offered by the modeling approach; and the Computation time, reflecting the efficiency of the method.


Fig. 4. Evaluation of the integration and coupling methods

2.3 Modeling Framework Modeling frameworks have been studied by authors from several research domains: Auzelle considers the “Zachman Framework” to model an enterprise information system [9]; Claeys et al. build a framework to facilitate “cognitive automatization” in industrial assembly lines [10]. Klikovits et al. summarize a definition of a modeling frame: «[…] We define a modeling frame to consist of all process information required to perform the activity, the context the activity is performed in, and a (potentially empty) set of frames defining sub-activities» [11]. The purpose of the framework is to facilitate and organize the assembly of the models along with defining the couplings and the running sequence. In doing so, it should enhance the fidelity of the observer. Four requirements are identified for the modeling framework to be constructed in the scope of the MGB dynamic modeling:
• To organize the model assembly, which is the main purpose of the frame;
• To deal with heterogeneous sub-models, as the framework shall gather models of different behaviors (static, kinematic or dynamic), systems (gears, bearings, housing) and defects (crack, spall…);
• To grant the modularity and upgradability of the model, with the capability to replace part or all of the sub-models depending on the needs of the user;
• To be a generic method: the framework should be easily transposed to other MGB with various architectures.

3 MGB Modeling Framework To model the MGB dynamic behavior, a bottom-up approach is undertaken by first identifying the relevant sub-system models and then aggregating them by defining a suitable framework. 3.1 Heterogeneous Sub-models Several local models dealing with the MGB sub-systems have been built before this study:


• LPM dynamic models of the multi-stage planetary gear train, with different levels of fidelity and inspired by different literature studies [1], see Fig. 5. These models are referred to as 1A, 1B and 1C in Fig. 8;
• Spherical Roller Bearing (SRB) LPM and contact models taken and adapted from the literature [5, 12]. They are referred to as 2A and 2B;
• The elastic behavior of the ring taken as a structural part [13]. These models are referred to as 3A and 3B;
• The defects’ influence on the signal, such as a gear rim crack (coming from analytical and FEM models) or a bearing spall [5]. They are referred to as 1Xa, 1Xb and 2Xa;
• The transfer function modulating the signal from the planet-ring contact to the fix-reference sensor, see Fig. 5. Quasi-static and dynamic versions coming from local studies (FEM or testing results) are considered, referred to as 4A to 4C;

Fig. 5. Planetary gear train LPM model

Heterogeneity of the models comes from the systems they represent, but also from their nature (LPM, FEM, databases) and their origin (literature, local analysis or bench testing). Moreover, all these modeling “bricks” describe different kinds of behaviors: the Gear model defines the system as mass-spring sets, introduces the Degrees of Freedom (DoF) of the MGB model and defines its kinematics; the Bearing model considers the static influence of the loaded rolling elements with stiffness terms; the Defect models modify the gear and bearing stiffnesses depending on the defect properties; the Ring models rely on elastic beam theory and intervene by introducing additional stiffness components into the gear model; and finally the Transfer functions modify the dynamic model results to predict the accelerometer signal.


3.2 Interactions and Couplings Between Sub-models Exchanged Data The interactions of the sub-models involved in the assembling process are illustrated in Fig. 6. All the “bricks” engaged in the assembly are based on mechanical theories, meaning that they rely on common formalisms (properties, material and geometry). Nevertheless, as the simulated behaviors vary from one model to another, exchanged data must be homogenized to be commonly exploitable by the dynamic behavior matrix equation. Therefore, the main results of each model are expressed as stiffness or inertial terms which can be used in the stiffness or mass matrix, see Fig. 6: the Gear and Bearing models produce mesh or bearing translational stiffness terms (k mesh and k bear, see Fig. 6 and Fig. 5); the Ring model results are translated into stiffness terms (k ring) added in series with the gear mesh stiffness all over the ring circumference; the Crack model produces stiffness modification coefficients to be convoluted with k mesh; and the Bearing spall model acts indirectly by modifying k bear using a roller compression gap coefficient δ+.

Fig. 6. Interaction and information transfer between MGB sub-models (CB = Calculation Basis, PG = Planetary Gear train, SRB = Spherical Roller Bearing)


Running Sequence The running sequence involved in the MGB modeling process is given using the numerical sequence tags in Fig. 6: In step 1, interactions occur in Federation mode between different models by pairs (small N, see Fig. 4): the gear model interacts with the SRB model to provide the external forces required for the rolling element load calculation (as the bearing outer-ring is the same part as the gear body). Gear and bearing models then exchange with their respective defect side-models, if any defect is considered in the system. In step 2, sub-models are independently executed to calculate their respective stiffness (or stiffness modification coefficient) results on periodic and discrete angular spaces, producing the “Calculation Bases”, CB in Fig. 6. These bases can be generated in parallel or even prior to the overall modeling process. They can also result from independent literature studies or from parallel works, and can be easily replaced by updated versions. They therefore offer the expected modularity while limiting the running time by calling each independent model on a limited number of time-steps. In step 3, the Mediation approach is used to assemble one general “Master” equation, representing the coupled dynamic behavior of the whole model. This equation is firstly shaped by requesting the equation topology from the Planetary Gear (PG) model, which brings the model DoF. General parameters (geometrical and material) of the selected sub-models are next imported. Then, parameters contained in the involved Calculation Bases are called by the Mediator structure to fill the general equation fields. The differential equation system is finally solved through a Matlab built-in ode function and a resolution loop is engaged: stiffness results are updated by calling the Bases with the updated characteristic angles at each step. In step 4, the model is run over a complete cycle of MR rotations. Dynamic results consisting of angles and angular speeds are extracted, resampled and finally treated through the transfer function to predict the vibration signal measured by the fix-position accelerometer. Coupling Analysis Couplings involved in the MGB modeling process can finally be discussed based on the presented interactions. To support our analysis, the MGB running sequence is placed in the Integration/Coupling evaluation table introduced in part 2, see Fig. 4 (Fig. 7). The initialization step involves Federation mode as sub-models interact by pairs using loose coupling: models generate information separately and then exchange their results one after another. In step 2, Calculation Bases are generated independently and in parallel. The Mediation phase is then engaged by transferring information from the bases to the “Master” equation, in a loosely coupled way. Tight coupling is finally encountered once the variable-topology dynamic equation starts its resolution loop: one unique and common equation is assembled for the entire system to take into account all the possible coupled behaviors and system interactions.


Fig. 7. MGB model in terms of coupling and integration method
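A minimal illustration of the step-3 idea — a pre-computed Calculation Basis interpolated at each solver step to feed a time-varying stiffness into the dynamic equation — is sketched below for a single-DoF mesh, written in generic Python rather than the Matlab implementation of the framework; all numerical values are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# "Calculation Basis": gear mesh stiffness tabulated over one mesh period
# (square-wave-like variation between single and double tooth contact;
# all values are hypothetical).
angle_base = np.linspace(0.0, 2.0 * np.pi, 720)
k_base = 4.0e8 + 0.4e8 * np.sign(np.sin(angle_base))        # mesh stiffness [N/m]

m, c, F0 = 2.0, 400.0, 1.0e3        # equivalent mass, damping, static mesh load
omega_mesh = 2.0 * np.pi * 900.0    # mesh pulsation [rad/s] (hypothetical)

def mesh_stiffness(angle):
    """Interpolate the pre-computed Calculation Basis at the current mesh angle."""
    return np.interp(angle % (2.0 * np.pi), angle_base, k_base)

def dynamics(t, y):
    x, v = y
    k = mesh_stiffness(omega_mesh * t)   # stiffness refreshed at every solver step
    return [v, (F0 - c * v - k * x) / m]

sol = solve_ivp(dynamics, (0.0, 0.2), [0.0, 0.0], max_step=2e-5, rtol=1e-8)
x = sol.y[0]
print(f"Mean mesh deflection: {x.mean():.3e} m, peak-to-peak: {x.max() - x.min():.3e} m")
```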

3.3 MGB Model: A 3-Axes Framework Heterogeneous local models have been introduced and the integration/coupling strategy analyzed with the objective of optimizing the modularity while conserving an acceptable fidelity and a reasonable computation time. To fulfill the requirements listed in part 2.3, the framework dedicated to managing the assembly of the models shall finally be defined. Three major axes are used to characterize the framework, see Fig. 8. The first axis defines the “System” and is organized in columns, each one representing one of the system components (the ring, the gear or bearings). A final column is dedicated to the transfer function used as an intermediary between the modeled system and the sensor. By including components from each column in the global model, the completeness of the system is improved.


Fig. 8. MGB 3-axes modeling framework


A second axis is defined as the “Fidelity level”, along which models from each subsystem are placed depending on their level of representativeness. For instance, three levels of fidelity are considered for the planetary gear set model: the lower level only considers rotational DoF while the higher level includes translational DoF for all rotating components. By increasing the fidelity, computation time grows longer. Using these first two axes, various models of more or less complex architectures can be assembled, with variable tradeoffs between the computing effectiveness and the fidelity level. In Fig. 8, three non-exhaustive assembling paths are proposed as an example: a very simple one composed of purely rotational DoF without flexible ring or bearing models; an intermediate level of fidelity/complexity introducing translational DoF, constant results from the SRB model and a quasi-static transfer function; and a high-fidelity/complexity assembly introducing the flexible ring behavior, an SRB model adapted from the literature to fit the particular integrated-race configuration of the planet gear, and the dynamic transfer function. These first two axes fulfil the requirements by proposing modular assembling paths, with the possibility to switch each component model and to consider part or all of the system. The framework is also transposable to any architecture made of planetary gear trains, housing and bearings. A final axis must nevertheless be considered, consisting of the “Level of defect”, used to take into account the corresponding models in the assembly. The zero-level of defect deals with healthy systems. By evolving along this third axis, more complex defect models used to modify the dynamic behavior are represented (from a simple parametric modification to FEM results, for instance). The required heterogeneity is therefore fulfilled as the user can choose either to consider healthy components in the model or different levels of defects on one or several components.

4 First Results of the Method Results are observed at the different stages of the modeling process, from the local submodel to the varying-complexity assembly. Each sub-model included in the framework is firstly validated. As an example, the validation and the evolution of the SRB bearing model is shown in Fig. 9. Bearing results are expressed as load sharing between each rolling element for a given load-case. The re-encoded model is primarily calibrated through comparing the load-sharing results with the original literature ones. Once validated (acceptable discrepancies), the model can be adapted to the current study which considers integrated outer races, meaning that the bearing outer-race will deform while the gear is loaded. Load sharing is therefore adapted accordingly (see the right-hand side of Fig. 9). Next, results from the assembling methodology shall be assessed. The results from two assembly paths are presented in Fig. 10 in terms of FFT (Fast-Fourier Transform) from the point of view of a fix-point accelerometer. The first model is a simple assembly without defect (3O-1A-4B, see Fig. 8), the second includes translational DoF and bearings with a planet gear crack (3O-1Ca-2B-4B).


Fig. 9. Local model results evolution: SRB bearing model coming from literature [12]

Fig. 10. Comparing the predicted accelerometer signal by a simple model assembly (left) and by a complex assembly with defect (right)

Both representations show the three theoretical “healthy” harmonics of the train, organized by pairs as observed in the real system, but the more complex assembly brings more energy in the second harmonics. Also, sidebands emerge from the complex model including defects, see Fig. 10 at the first harmonic level. Those low-energy sidebands are coming from the combination of the defect influence with the translational DoF of the
complex model. Information has therefore been produced by the assembling strategy, at the expense of a greater computation time (around 8 to 10 times greater than for the simple model, for the same number of PGC rotations and the same CPU configuration).
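The kind of harmonic and sideband inspection behind Fig. 10 can be reproduced with generic spectral post-processing, as sketched below; the synthetic signal and all frequencies are hypothetical and serve only to illustrate the mechanics of the analysis, not the actual MGB spectra.

```python
import numpy as np

fs = 20000.0                      # sampling rate [Hz] of the resampled signal
t = np.arange(0.0, 2.0, 1.0 / fs)
f_mesh, f_defect = 700.0, 12.0    # hypothetical mesh and defect frequencies

# Synthetic accelerometer-like signal: three mesh harmonics plus two low-energy
# sidebands around the first harmonic, mimicking a defect modulation.
signal = (1.0 * np.sin(2 * np.pi * f_mesh * t)
          + 0.5 * np.sin(2 * np.pi * 2 * f_mesh * t)
          + 0.3 * np.sin(2 * np.pi * 3 * f_mesh * t)
          + 0.05 * np.sin(2 * np.pi * (f_mesh - f_defect) * t)
          + 0.05 * np.sin(2 * np.pi * (f_mesh + f_defect) * t))

window = np.hanning(signal.size)
spectrum = 2.0 * np.abs(np.fft.rfft(signal * window)) / window.sum()
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

for k in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - k * f_mesh))
    print(f"Harmonic {k} ({k * f_mesh:.0f} Hz): amplitude {spectrum[idx]:.3f}")
idx = np.argmin(np.abs(freqs - (f_mesh + f_defect)))
print(f"Upper sideband ({f_mesh + f_defect:.0f} Hz): amplitude {spectrum[idx]:.3f}")
```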

5 Conclusion A modeling framework has been proposed to define the interactions between sub-models describing helicopter Main Gearbox transmission components. Coupling and interoperation methods have been established following four requirements: managing the organization of the model assembly; dealing with heterogeneous sub-models; granting modularity and upgradability for the assembly; and being a generic method easily transposed to new architectures. The modeling strategy based on pre-calculated Calculation Bases has been presented and the sub-models organized in the framework along three axes: the system completeness, the fidelity of the representation and the level of defect modeling. In particular, this method allows coupling the epicyclic gears and integrated bearings of the studied system, with spalls and rim-crack defects at the gear/bearing interface, and assessing the defects’ influence on the vibration spectrum. First results of the method have finally been presented, from the local model point of view to the global one. New information has been produced in the frequency spectrum by coupling the gear and bearing models, and the influence of the crack has finally been observed in the results. The methodology applied in the scope of this research can be extended to any transmission system involving planetary gear trains. Research is still ongoing to connect reduction stages of other natures (cylindrical or bevel gears…) and to include other defects (geometrical, mounting errors…), so that the proposed solution will then be able to study other gearboxes.

References 1. Chaari, F., Fakhfakh, T., Hbaieb, R., Louati, J., Haddar, M.: Influence of manufacturing errors on the dynamic behavior of planetary gears. Int. J. Adv. Manuf. Technol. 27, 738–746 (2006) 2. Rigaud, E., Barday, D.: Modélisation et analyse de l’erreur statique de transmission d’un engrenage. Influence des déformations des roues et interactions entre les couples de dents en prise (2007) 3. Guilbert, B., Velex, P., Dureisseix, D., Cutuli, P.: Modular hybrid models to simulate the static and dynamic behaviour of high-speed thin-rimmed gears. J. Sound Vib. 438, 353–380 (2019) 4. Mohammed, O.D., Kumar, U., Rantatalo, M.: Analytical crack propagation scenario for gear teeth and time-varying gear mesh stiffness. Presented at the International Conference on Applied Mechanics, Paris, France, 22 August (2012) 5. Ghalamchi, B., Sopanen, J., Mikkola, A.: Modeling and dynamic analysis of spherical roller bearing with localized defects: analytical formulation to calculate defect depth and stiffness. Shock Vib. 2016, 1–11 (2016) 6. Bigagli, L., Nativi, S., Mazzetti, P.: Mediation to deal with information heterogeneity? Application to earth system science. Adv. Geosci. 8, 3–9 (2006) 7. Schoinochoritis, B., Chantzis, D., Salonitis, K.: Simulation of metallic powder bed additive manufacturing processes with the finite element method: a critical review. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 231, 96–117 (2015)


8. Chen, F., Zhang, S., Liu, H.: Modeling and analysis of fluid-thermal-structure coupling problems for hypersonic vehicles (2018) 9. Auzelle, J.-P.: Proposition d’un cadre de modélisation multi-échelles d’un système d’information en entreprise centré sur le produit (2009) 10. Claeys, A., Hoedt, S., Soete, N., Van Landeghem, H., Cottyn, J.: Framework for evaluating cognitive support in mixed model assembly systems. In: 15th IFAC Symposium on Information Control Problems in Manufacturing, vol. 48, pp. 924–929 (2015) 11. Klikovits, S., Denil, J., Muzy, A., Salay, R.: Modeling frames (2017) 12. Ghalamchi, B., Sopanen, J., Mikkola, A.: Simple and versatile dynamic model of spherical roller bearing. Int. J. Rotating Mach. 2013, 567542 (2013) 13. Abousleiman, V., Velex, P.: A hybrid 3D finite element/lumped parameter model for quasistatic and dynamic analyses of planetary/epicyclic gear sets. Mech. Mach. Theory 41, 725–748 (2006)

Design of a Snap-Fit Joint Through a Multibody Model Giulia Pascoletti1,2(B) , Paolo Conti2 , Francesco Bianconi2 and Elisabetta M. Zanetti2


1 Department of Mechanical and Aerospace Engineering (DIMEAS), Politecnico di Torino,

10129 Turin, Italy [email protected] 2 Department of Engineering, University of Perugia, 06125 Perugia, Italy

Abstract. Snap-fit joints represent a simple, economical and straightforward way of joining two different components. The design of the snap-fit joint is usually performed evaluating peak stresses that must be tolerated by the material without incurring failure or plastic deformation; in addition, the force needed to join and disassemble parts is estimated in relation to ergonomic issues. Finally, the retention force, that is the force required to start disjoining parts, needs to be estimated. The evaluation of peak stresses or insertion/retention/removal forces is commonly performed through the finite element method, having identified the respective deformed configuration. A different approach has been followed here, considering that it is not trivial to identify the most critical condition in a full joining/disjoining cycle when complex geometries are being considered. In detail, the snap joint has been modelled as a multibody model including a flexible body, which replicates the part that undergoes major deflections during the process. The model has been validated against experimental force–time curves, recorded for an existing joint, and it has been used to optimize a parametrised snap-fit design. As a result, the joining force has been reduced by up to 84%, the disassembly force has been reduced by up to 86% and the retention force has been increased by up to 7%. On the whole, a numerical framework to study these joints has been established, keeping the computational time reasonably low (about 40 min for the entire insertion and removal simulation). Keywords: Snap-fit joint · Plastic components design · Tolerance analysis · Geometrical modelling · Multibody model

1 Introduction The design of snap-fit joints is required in many industrial applications: it can allow the production of smaller size components in order to facilitate production, transportation and handling; this is the case, for example, of 3D printed objects where the production technology itself is affected by strict limits on the maximum size [1, 2]. Secondly, snap-fit joints might be used to join components made of different materials due to technological
input or different demands from operative conditions. Finally, these joints can realize a connection between a high-cost machine and a consumable component. Recently, a major emphasis is being given not only to the possibility of joining components, but also to the possibility of reversing this connection; in fact, recycling and proper disposal inputs from full-life cycle assessment often lead to the necessity of being able to isolate single components as well as replacing consumable parts. Indeed, the EDiM concept (Ease of Disassembly Metric [3]) established that the reduction of the disassembly time and of the respective costs represents a major incentive towards lifetime extension, as required by circular economy concepts and stressed by the European Commission [4]. As is well known, improving recycling efficiency, together with increasing material efficiency and extending product life, is one of the three product design strategies in line with the vision of a circular economy [5]. Snap-fit joints offer an interesting opportunity in relation to these perspectives, especially with reference to plastic components built by injection molding. In snap-fit joints the establishment of a connection between two parts relies on the elastic behaviour of one or both components, bringing considerable economic advantages by avoiding the prescription of strict tolerances [6]. The analysis of this joint is usually performed to foresee the actual working conditions, in order to verify joining/disjoining forces and component strength; the finite element (FE) method is the most common tool to provide these evaluations [7, 8], allowing the assessment of the stiffness behaviour and stress distribution in plastic components. In this work a snap-fit joint designed for connecting a plastic consumable component (moving flexible component) to a high-cost machine (fixed rigid component) was examined. More specifically, the joint is a reversible system and for this reason three main parameters have to be considered to analyse its behaviour: the peak insertion and removal forces, which respectively represent the maximum value of the force required for the joining and disjoining operations, and the retention force, that is the force exerted between the joint’s components to maintain the system’s operative configuration. These parameters are strictly related to the system design, that is, in this case, to the main dimensions of the flexible component. The snap-fit joint should be designed so that insertion and removal forces are as low as possible, since the joining and disjoining operations are performed manually, and at the same time the retention force is required to be sufficiently high to preserve the system’s position throughout the working period. A new multimodal approach is introduced here where deformable parts are integrated into multibody software in order to be able to simulate and follow the full joining/disjoining process; flexible parts are discretized through finite elements and modal reduction techniques are applied to improve computational efficiency [9]. The advantage of this approach is that peak stresses and strains can be correctly identified even when they are reached at an intermediate step between the start and the end of the joining process. In addition, the final settlement can be foreseen in those cases where it is not known a priori, and the elastic retention force [10] of joined parts can be estimated.
The model has been validated against experimental results and it was used to optimize a parametrised geometry of the snap-fit joint, investigating main dimensions of the flexible component and identifying, among these, the most relevant for the system’s performances.


2 Materials and Methods The snap-joint system here investigated is composed of two main components that have to be joined together (Fig. 1 a): the first one, the moving component, has two protruding elements (called ‘forks’ in the following) that are deflected during the joining operation, being constrained by two circular pins mounted on the second component (fixed element). The joint is separable and so both assembling and disassembling operations must be guaranteed by the system design. The moving component is free to translate in the horizontal direction (y axis in Fig. 1 a) on a fixed rail and forks start to elastically deflect as soon as they come into contact with cylindrical pins. The system’s operating position is reached when the frontal face of the moving body is in contact with the corresponding face on the fixed body (contacting faces and mechanical stop configuration in Fig. 1 b). The joint is held in place by the ‘spring-back’ behavior of the forks against the retention pins. The moving component is made of polypropylene, while the fixed component is made of steel.

Fig. 1. Snap-fit geometrical model: a) main components; b) contacting faces and operating position

Three main parameters have been considered for the evaluation of the performance of the snap-fit joint system: the retention force, that is the force exerted by the joint when the system is in its operating position to keep it in place; and the peaks of the insertion/removal force, that is the force required to manually assemble/disassemble the system. A multibody model able to replicate the actual system behavior was created; first of all, it was validated against experimental results and then it was used for the evaluation of the investigated parameters under different design configurations. The case study presented here refers to an industrial application and for this reason the actual geometry of the system was modified, while preserving the main design characteristics.


2.1 Multibody Model The multibody model is aimed at replicating the full process of joining and disjoining. Based on its geometry, the only elements that are expected to undergo deformations are the forks; for this reason, all components have been considered as rigid bodies with the exception of the forks themselves, which have been modelled as flexible bodies. The flexible fork model is a finite element model (ANSYS Mechanical software® by ANSYS) composed of 6750 20-node solid elements and 32982 nodes (Fig. 2 a); it was imported into the multibody code (MSC Adams by MSC Software), where deformations are computed based on a modal superposition and modal reduction approach (Craig-Bampton method [11]), selecting a subset of modes (twenty in this study). The mesh size has been refined up to convergence, considering the modal reduced formulation as the final output. As can be seen in Fig. 2, it was not the whole fork’s body which was modelled as flexible: on the contrary, a small body portion, located at the internal side of the re-entrant fork’s side (orange body in Fig. 2 a), was implemented as a rigid body. This modelling approach allowed implementing self-contact of the fork during the insertion and removal phases, as described in the following.

Fig. 2. Numerical model: a) fork element; b) flexible and rigid fork joints; c) involved contacts. Green arrows indicate contacts between the fork flexible body and pins, yellow arrows represent fork self-contacts
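For background on the Craig-Bampton reduction mentioned above, the following minimal sketch shows how a generic (M, K) pair is condensed to its boundary DoFs plus a handful of fixed-interface normal modes; it is a didactic example on a toy spring-mass chain, not the ANSYS/Adams modal export workflow actually used for the fork.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, boundary_dofs, n_modes):
    """Reduce (M, K) to boundary DoFs plus a few fixed-interface normal modes."""
    n = M.shape[0]
    b = np.asarray(boundary_dofs)
    i = np.setdiff1d(np.arange(n), b)

    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Static constraint modes: interior response to unit boundary displacements.
    phi_c = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes of the interior problem (keep the first few).
    _, phi_n = eigh(Kii, M[np.ix_(i, i)])
    phi_n = phi_n[:, :n_modes]

    # Transformation u = T q, with q = [boundary displacements, modal coordinates].
    T = np.zeros((n, b.size + n_modes))
    T[b, :b.size] = np.eye(b.size)
    T[np.ix_(i, np.arange(b.size))] = phi_c
    T[np.ix_(i, np.arange(b.size, b.size + n_modes))] = phi_n
    return T.T @ M @ T, T.T @ K @ T

# Tiny demonstration: a 10-DoF spring-mass chain, keeping its two end DoFs
# as boundary nodes and four fixed-interface modes.
n = 10
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
M_red, K_red = craig_bampton(M, K, boundary_dofs=[0, n - 1], n_modes=4)
print("Reduced matrices size:", K_red.shape)
```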

The continuity between the fork’s parts was restored with a fixed joint between the rigid and the flexible bodies at the surface nodes on face B (Fig. 2 a and b). In the same way, the forks were constrained to the moving component body at face A (Fig. 2 a and b). The fixed component body, pins and rail have been fully constrained to the ground, while the moving component’s body was left free to translate along the y axis (Fig. 2 b). Contact forces have been implemented between the fork’s flexible body and the pins (green arrows in Fig. 2 c) in order to simulate fork deflection. From some preliminary simulations, it was observed that, during the joining and disjoining procedure, the fork comes into contact with itself; for this reason, a contact between the fork’s rigid body (orange element in Fig. 2 a) and the flexible body was implemented (yellow arrows


Finally, the mechanical stop condition was modelled by a contact between the moving and fixed components (contacting faces in Fig. 1 b). All contact forces have been implemented through the 'IMPACT' function (Eq. 1), a non-linear formulation based on the modified Hertz contact theory and defined as the sum of an elastic non-linear force and a damping force, with the damping coefficient applied following a cubic polynomial law ('STEP') [12]:

F_n = K · g^e + STEP(g, 0, 0, d_max, C_max) · dg/dt     (1)

In Eq. (1), F_n is the contact force, K is the contact stiffness, g the penetration between the geometries, dg/dt the penetration velocity at the contact point, e the exponent of the elastic force and d_max the penetration depth at which the damping coefficient reaches its peak value C_max. Contact parameters have been defined based on experimental data, as described in Sect. 2.2. A non-linear solution was needed due to the large deformations, to the contact points which move along the flexible body, and to the above contact force formulation. The MSC Adams C++ solver and the GSTIFF integrator (Gear stiff integration algorithm) have been used for this aim.

2.2 Experimental Tests

Joining and disjoining operations were tested experimentally in order to obtain typical force curves describing the interaction of the forks with the fixed pins. Tests were performed on the snap-fit joint using a single-column multi-test device (Mecmesin MultiTest 2.5-i frame), a ball screw-driven motorized test frame able to apply tensile and compressive forces up to 2.5 kN. The fixed component of the snap-joint was screwed to the machine base plate, while the moving component was free to slide along the vertical dovetailed crosshead, with an imposed velocity of 60 mm/min. The load cell (ILC load cell, accuracy ±0.1% of full scale, sampling rate 2000 Hz) provided the force profile during the insertion and removal phases. Experimental results have been compared to the numerical results obtained by the multibody model, where the same boundary conditions (imposed velocity) were replicated. From this analysis a fine tuning was carried out to set the contact parameters and validate the multibody model.

2.3 Tolerance Analysis Parameters

The fork design is characterized by key dimensions that are controlled by tolerance ranges. These parameters have been identified as the ones with the highest impact on the retention and insertion/removal forces. The investigated dimensions are shown in Fig. 3 with the respective tolerance ranges; these parameters are:

• w: the fork width, measured as the horizontal distance between the right and left most prominent points
• s1, s2 and s3: the fork thicknesses at the most relevant locations (oblique side, vertical side and re-entrant side, respectively)


• δ: draft angle, required to extract the injection molded component from the mold.

Fig. 3. Fork’s geometry main dimensions

First of all, the effect of each dimension on the forces has been analyzed separately, by varying the dimensions one by one from the highest to the lowest value within the respective tolerance range; then a second tolerance analysis was performed considering their combined effects, in order to identify the configurations providing the best operative performance. Each numerical test required generating the specific 3D geometry from the parametrized drawing; in the following step, the respective FE models have been created and implemented in the multibody model.
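As a minimal illustration of the one-at-a-time sweep described above, the following Python sketch enumerates the ten analysis configurations; the nominal values and tolerance half-widths are placeholders, since the actual dimensions of the industrial component are not disclosed.

```python
# One-at-a-time sweep over the fork dimensions of Fig. 3 (placeholder values).
nominal = {"w": 20.0, "s1": 2.0, "s2": 2.0, "s3": 2.0, "delta": 1.0}   # mm, deg
tol     = {"w": 0.4,  "s1": 0.1, "s2": 0.1, "s3": 0.4, "delta": 0.5}   # half-widths

configurations = []
for name in nominal:
    for sign in (+1, -1):                    # upper / lower tolerance limit
        cfg = dict(nominal)
        cfg[name] = nominal[name] + sign * tol[name]
        configurations.append(cfg)           # 5 dimensions x 2 limits = 10 models
```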

3 Results

3.1 Experimental and Numerical Curves

The multibody model was validated by comparing the numerical results with the experimental insertion/removal curves. In particular, the experimental data have been used to properly set the contact parameters, and the final results are reported in Fig. 4. The experimental curve (red) is compared to the numerical insertion (black) and removal (blue) curves, i.e. the sum of the y components of the contact forces between the flexible (right and left) forks and the (right and left) pins. The contact parameters have been identified focusing on the insertion and removal peak values, and the best set of these parameters is reported in Table 1. The friction force contribution was also tuned according to the experimental results, and from these analyses it was clear that friction needed to be modelled for the contact between the flexible fork and the pins. Therefore, Table 1 also reports the friction parameters for this contact: μs and μd are the static and dynamic friction coefficients respectively, while vs and vd are the stiction and friction transition velocities. The tuning process for both contact and friction parameters has considered the constitutive materials of the contacting bodies (polypropylene and steel) and a lubricated condition: the investigation started from plausible values based on Hertz contact theory [13], approximating the actual interaction with plane-to-cylinder contacts, and on experimental frictional properties of plastic materials [14–16].


The final optimized parameters were verified to fall within these theoretical/experimental ranges. The global behaviour was well captured by the numerical model, and the major differences were located at the end of the insertion/removal phases; moreover, the insertion and removal peaks are all within the ranges provided by the supplier of the joint system. Two peaks (Peak A and B in Fig. 4) are absent or greatly reduced in the experimental curves: these peaks occur when the force abruptly changes its sign due to an elastic rebound, and their absence is due to a bias in the experimental tests performed by the producers, where displacements were not controlled continuously.

Fig. 4. Insertion and removal curves: comparison between the experimental (red) and numerical (black/blue) curves for the optimized combination of contact parameters

Table 1. Best combination of contact parameters

| Contact parameter | Flexible fork – pin | Flexible fork – rigid fork |
|---|---|---|
| K [N/m^e] | 2 × 10^8 | 9 × 10^7 |
| e | 2.2 | 2.1 |
| C_max [Ns/m] | 1 × 10^3 | 1 × 10^3 |
| d_max [m] | 1 × 10^−6 | 1 × 10^−6 |
| μ_s | 0.1 | – |
| μ_d | 0.02 | – |
| v_s [m/s] | 1 × 10^−5 | – |
| v_d [m/s] | 1 × 10^−4 | – |
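For illustration, the contact law of Eq. (1) can be transcribed almost literally; the following Python sketch uses the tuned fork–pin parameters of Table 1, while the STEP implementation follows the standard cubic blend and the sample penetration values are arbitrary.

```python
def step(x, x0, h0, x1, h1):
    """Adams-like STEP: cubic polynomial blend from h0 at x0 to h1 at x1."""
    if x <= x0:
        return h0
    if x >= x1:
        return h1
    r = (x - x0) / (x1 - x0)
    return h0 + (h1 - h0) * r * r * (3.0 - 2.0 * r)

def impact_force(g, g_dot, K, e, c_max, d_max):
    """Eq. (1): elastic term K*g^e plus a damping term whose coefficient
    ramps from 0 to c_max over the first d_max of penetration."""
    if g <= 0.0:                         # no penetration, no contact force
        return 0.0
    return K * g**e + step(g, 0.0, 0.0, d_max, c_max) * g_dot

# Example call with the tuned fork-pin parameters of Table 1 (SI units);
# the penetration g and penetration velocity g_dot are arbitrary.
F = impact_force(g=2e-6, g_dot=1e-3, K=2e8, e=2.2, c_max=1e3, d_max=1e-6)
```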

Some reference phases can be identified on the insertion/removal curve. As Fig. 5 shows, the insertion force starts to grow as the oblique side of the fork comes into contact with the pin (phase 1).


Phase 2 is characterized by a steeper increase of this force, due to the fork self-contact, up to the insertion peak (phase 3). The insertion force starts to decrease once the fork's curvature has passed over the pin and the self-contact ceases (phase 4). When the forks reach their final location, they exert an elastic force on the pins able to guarantee the retention condition (phase 5). The same main phases characterize the removal process, with different peak force values due to the friction action (phases 6 and 7).

Fig. 5. Main phases of the insertion/removal process

3.2 Tolerance Analysis

A first tolerance analysis was performed on geometries obtained by varying the fork dimensions one by one (Fig. 3), and a total of 10 3D geometries and multibody models were implemented. For every model, the insertion/removal peak forces and the retention force were retrieved from the numerical results and compared to the corresponding values associated with the nominal geometry (Fig. 5). The effects of the dimensional parameters on the snap-joint performance were thus quantified, and the results are summarized in Fig. 6, where the force parameters have been normalized with respect to the nominal retention force value (Fret,nom in Fig. 6). The retention force appeared to be mainly affected by variations of the δ, s1 and s2 dimensions: positive (enlargement) and negative (reduction) variations of the draft angle led to −12.5% and +13% changes of the retention force respectively; negative variations of s1 and s2 generate a reduction of the retention force of −9.8% and −7.3%, while when these parameters are set equal to their upper tolerance limits the retention force increases by +9.2% and +8.2%. As far as the joining and disjoining operations are concerned, all dimensions have an effect on the insertion and removal forces, with the exception of the s1 parameter.


In addition, as shown in Fig. 6, w and s3 have a significant impact on the insertion/removal peak forces; in particular, s3 can reduce the maximum insertion force by 52.8%, while the largest reduction of the removal force is associated with the negative variation of w (−31.2%).

Fig. 6. Tolerance analysis results compared to nominal geometry (dashed lines); ‘pos’ and ‘neg’ refer to positive and negative variations of the tolerance respectively

Based on these results, an optimized design was investigated. The optimization of the fork geometry aims to minimize the insertion and removal peak forces, in order to reduce the effort required during the joining and disjoining operations, while at the same time still guaranteeing an appropriate retention force, which must be greater than or equal to the nominal retention force for the proper functioning of the system.

Table 2. Investigated optimized designs

| Dimension | Optimized design 1 | Optimized design 2 |
|---|---|---|
| w | w_nom − 0.4 mm | w_nom − 0.4 mm |
| s3 | s3,nom − 0.4 mm | s3,nom − 0.4 mm |
| δ | δ_nom − 0.5° | δ_nom |
| s1 | s1,nom | s1,nom + 0.1 mm |

The tolerance analysis pointed out that the minimum insertion force can be obtained by modifying the s3 dimension, while the removal force is minimized when the w value decreases. As for the retention force, it was mostly affected by variations of the draft angle, with a negative δ variation increasing the retention force but also the insertion/removal peak forces; the s1 dimension also positively affects the retention force, with limited effects on the joining/disjoining operations.


For these reasons, two different optimized geometries have been investigated, combining the dimensional parameters as reported in Table 2; the corresponding geometries are shown in Fig. 7 and compared to the reference geometry.

Fig. 7. Comparison of the fork’s profile between the nominal and the optimized geometries

These two new designs have been implemented in the multibody model and the insertion and removal procedure was simulated. The forces registered throughout the full cycle are shown in Fig. 8 and compared to the nominal geometry force.

Fig. 8. Insertion, removal and retention forces for the optimized geometries compared to the nominal force curve

Both optimized configurations allow lowering the force peaks without negative effects on the retention force. In more detail, the first optimized design reduces the insertion and removal peaks by 84.6% and 46.5%, respectively; the second optimized geometry provides similar results, with a reduction of the insertion/removal forces of 86.5% and 41.4%. Moreover, both optimized designs increase the retention force, by +7.3% in the first case and +4.9% in the second one.


Considering that the retention force is a key parameter for the snap-fit joint performance and that both investigated designs provide similar improvements in terms of joining and disjoining force peaks, the first optimized configuration was chosen as the optimum design.

4 Discussion

The numerical model has made it possible to evaluate the impact of the main dimensions on the retention force and on the peak insertion/removal forces for a reversible snap-fit joint. This analysis has pointed out which dimensions have the highest influence. In more detail, the s3 dimension controls the possible contact of the fork with itself, which should be avoided in order to reduce the peak insertion/removal forces. The analytical calculation of self-contact would indeed be very complex, as witnessed by specific theoretical work [17]; it has been solved numerically here by a simple expedient, in which the same part is virtually divided into two bodies, setting up a numerically efficient procedure. Another important finding is that the draft angle δ and the s1 dimension can significantly improve the retention force. In fact, when the draft angle is reduced, the contact area between the fork and the two pins increases, which ultimately produces an increase of the retention force; however, there are technological limits on the draft angle related to material shrinkage [18]. The s1 dimension makes the fork stiffer; therefore, the same deformation results in a higher retention force. All these considerations have led to the design of an optimized profile with reduced insertion/removal forces and, at the same time, an improved retention force. The major improvement was obtained for the insertion force, while the removal force is still dominated by a significant friction contribution, so further enhancement would require proper lubrication or different materials [6].

Future activities will entail the experimental validation of the new optimized geometry identified here and the update of the tolerance ranges, where necessary. Based on the tolerance analysis results reported in Fig. 6, narrower tolerance ranges would be expected for δ, s1 and s2, due to the impact that these dimensions have on the retention force; on the other hand, s3 has shown beneficial effects on the insertion/removal forces with little impact on the retention force, so its tolerance range could be relaxed. Finally, the w tolerances are indeed critical, considering that a further reduction of this dimension is expected to negatively affect the retention force.

Apart from the specific findings outlined above, a methodology has been set up which can be useful whenever snap-fit joints have complex geometries coming into contact, so that the identification of the components' relative positions resulting in the highest insertion/removal forces is not trivial. The alternative to this methodology would be extremely onerous from a numerical point of view, since the finite element models traditionally used for the analysis of snap-fit joints [7] would have to be built for several different component positions, or an explicit solver would have to be used due to the large deformations, with a high computational demand [19]. In the present case, the entire insertion and removal simulation required at most about 40 min on a personal computer (PC with i7-8700 CPU and 32 GB RAM). The simulation results make it possible to predict component performance before building injection-molded prototypes, which would be extremely expensive.


5 Conclusions

A methodology has been set up which allows following the whole insertion/removal maneuver in reversible snap-fit joints. The numerical model was able to calculate the peak insertion/removal forces and the retention force with a limited numerical effort, thanks to the use of FE discretization for the deformable parts only and of modal reduction techniques. The methodology was tested on a specific design; the model was validated and the geometric design was optimized, focusing on the most relevant dimensions. In the specific case study examined here, the peak insertion force could be reduced by up to about 86% and the peak removal force by up to about 46%, without any adverse effect on the retention force, which, on the contrary, increased by up to +7%. Making snap-fit joints easier to assemble and disassemble is a key strategy for improving recycling efficiency, as required by circular economy concepts, and makes viable the option of using exchangeable consumable parts to extend product life.

References

1. Gaa, N., et al.: Perfus. flow rate subst. contrib. to perform. HepaRG-AMC-bioartificial liver 109, 3182–3188 (2012)
2. Klahn, C., Singer, D., Meboldt, M.: Design guidelines for additive manufactured snap-fit joints. Procedia CIRP 50, 264–269 (2016). https://doi.org/10.1016/j.procir.2016.04.130
3. Jovane, F., et al.: A key issue in product life cycle: disassembly. CIRP Ann. 42, 651–658 (1993). https://doi.org/10.1016/S0007-8506(07)62530-X
4. Vanegas, P., et al.: Ease of disassembly of products to support circular economy strategies. Resour. Conserv. Recycl. 135, 323–334 (2018). https://doi.org/10.1016/J.RESCONREC.2017.06.022
5. Allwood, J., Cullen, J., Carruth, M., Cooper, D.: Sustainable materials: with both eyes open (2012)
6. Malloy, R.A.: Assembly of injection molded plastic parts. Plast. Part Des. Inject. Molding, 341–481 (2010). https://doi.org/10.3139/9783446433748.006
7. Soh, A.K., Lim, E.N.: A parametric study for improving snapfit design. J. Mater. Process. Technol. 25, 91–103 (1991). https://doi.org/10.1016/0924-0136(91)90104-M
8. Billal, M.K., Moorthy, B.V., Aquilina, D., Schenten, S.: CAE applications and techniques used in calculating the snaps insertions and retentions efforts in automotive trims. SAE Int. J. Passeng. Cars Mech. Syst. 7, 829–837 (2014). https://doi.org/10.4271/2014-01-1032
9. Sonneville, V., Scapolan, M., Shan, M., Bauchau, O.A.: Modal reduction procedures for flexible multibody dynamics. Multibody Syst. Dyn. 51, 377–418 (2021). https://doi.org/10.1007/S11044-020-09770-W/FIGURES/17
10. Bonenberger, P.R.: The First Snap-Fit Handbook. Carl Hanser Verlag, Munich, Germany (2016)
11. MSC SimCompanion: Adams Flex Help. Adams Guide (2017)
12. MSC Software: Adams View Function Builder User's Guide (2021)
13. Flores, P., Lankarani, H.M.: Contact force models for multibody dynamics (2016)
14. Whalen, E., Kodwani, R.: Snap-fit optimization for achieving desired insertion and retention forces. Altair Eng. Inc., 1–11 (2018)
15. Shooter, K.V.: Frictional properties of plastics. Proc. R. Soc. Lond. A. Math. Phys. Sci. 212, 488–491 (1952)


16. Coefficient of friction, Rolling resistance, Air resistance, Aerodynamics. https://www.tribology-abc.com/abc/cof.htm. Accessed 24 Mar 2022
17. Majid, A., Siddiqui, S.: Self-contact of a flexible loop under uniform hydrostatic pressure. Eur. J. Mech. A Solids 84, 104082 (2020). https://doi.org/10.1016/J.EUROMECHSOL.2020.104082
18. Throne, J.: 16 - Thermoforming. In: Kutz, M. (ed.) Applied Plastics Engineering Handbook, 2nd edn., pp. 345–375. William Andrew Publishing (2017). https://doi.org/10.1016/B978-0-323-39040-8.00016-X
19. Tian, H., Li, S., Cui, X.: Development of element model subroutines for implicit and explicit analysis considering large deformations. Adv. Eng. Softw. 148, 102805 (2020). https://doi.org/10.1016/J.ADVENGSOFT.2020.102805

Towards Real-Time Physics-Based Variation Simulation of Assembly Systems with Compliant Sheet-Metal Parts Based on Reduced-Order Models

Mario Brandon Russo1(B), Alessandro Greco1, Salvatore Gerbino1, and Pasquale Franciosa2

1 Department of Engineering, University of Campania Luigi Vanvitelli, 81031 Aversa, Italy
{mariobrandon.russo,alessandro.greco,salvatore.gerbino}@unicampania.it
2 WMG, University of Warwick, Coventry CV4 7AL, UK
[email protected]

Abstract. Variation Simulation (VS) allows early validation and certification of the assembly process before parts are built. State-of-the-art VS models of assembly systems with compliant sheet-metal parts are based on the Finite Element Method (FEM) integrated with statistical approaches (i.e., Monte Carlo simulation). A critical technical barrier is the intense computational cost. This paper proposes a novel real-time physics-based VS model of assembly systems with compliant sheet-metal parts based on a Reduced-Order Model (ROM). Compared to the literature on the topic, this study reports the first application of a ROM developed for VS using both intrusive and non-intrusive techniques. The capability of the proposed method is illustrated in a case study concerning the assembly process of the vertical stabiliser for commercial aircraft. Results have shown that the accuracy of the ROM (based on proper orthogonal decomposition) depends on the sampling strategy as well as on the number of reduced modes. Whilst a large CPU time reduction, by several orders of magnitude, is achievable by non-intrusive techniques (based on radial basis functions for interpolation), intrusive models provide results that are closer to the full-order models.

Keywords: Real-time physics-based simulation · Variation simulation analysis · Sheet metals · Compliant assembly · Reduced-Order Models · Proper orthogonal decomposition · Radial basis functions

Nomenclature

VS: Variation Simulation
FOM: Full Order Model
ROM: Reduced Order Model
μ: Vector of input parameters
u: Output performance indicators
Tμ: Constraints on input parameters
Tu: Constraints on output indicators
u_s^i: Displacement of the i-th slave node
u_s−m^i: Displacement of the projection of the i-th slave node on the master surface
P_s^i: i-th slave node
P_s−m^i: Projection of the i-th slave node on the master surface
N_c: Normal vector of the master element
g_init: Initial gap between P_s^i and P_s−m^i
u_FOM: Displacement vector of the FOM
K_FOM: Stiffness matrix of the FOM
F_FOM: Load vector of the FOM
k_p: Penalty stiffness
ε_gap: Gap tolerance
N_P: Number of parameters
N_S: Number of sampled points
N_DOF: Number of Degrees of Freedom
S_snap: Snapshot matrix
Ψ: Reduced basis
R: Number of retained modes
u_ROM: Displacement vector of the ROM
K_ROM: Stiffness matrix of the ROM
F_ROM: Load vector of the ROM
POD: Proper Orthogonal Decomposition
RBF: Radial Basis Function
MPE: Mean Percentage Error

1 Introduction

Combining high strength with the ability to be formed and cut with good dimensional accuracy at relatively low cost, sheet-metal parts are widely employed in automotive and aerospace applications as exterior panels and interior structural components. However, the intrinsic flexibility of these parts adds variability to the process, since real (non-ideal) compliant parts need to be clamped and forced to the targeted assembly position before fastening operations. Consequently, assembled panels tend to spring back once fastening tools are released [1]. Variation Simulation (VS) techniques allow simulating the generation and propagation of variations throughout the assembly process, thus enabling early validation and certification even before assemblies are built. State-of-the-art VS models of assembly systems with compliant sheet-metal parts are based on the Finite Element Method (FEM) integrated with statistical approaches (i.e., Monte Carlo simulation or polynomial chaos [2]). VS methods have been used to accelerate strategies for right-first-time and digitalisation of the manufacturing process [3]. The mechanistic models for VS can be grouped into two main categories: (1) VS-based analysis and (2) VS-based synthesis. VS-based analysis addresses the problem of finding the effect of the input parameters on the output performance indicators, under specific design constraints, whereas the reverse problem is faced by VS-based synthesis.


Most of the publications have addressed problems related to VS-based analysis, but fewer attempts have been made to develop efficient models for VS-based synthesis. Urged by the need to reduce defects and waste during manufacturing, VS-based synthesis has become a critical priority, since it allows product and process optimisation at the early design stages. VS-based synthesis covers topics related to process optimisation and to parametric and sensitivity analyses [1–7]. The common denominator is the desire to generate accurate results in a reasonable time (ideally in real-time), which is sometimes not achievable, due to the complexity of the problem, even with powerful computational systems (High Performance Computing - HPC, cloud computing, etc.). In fact, typical VS applications involve a large number of input parameters (up to 1,000 for a typical body-in-white assembly process in automotive), related to both the product and the process. The leading challenge is driven by the dimensionality of the design space. For instance, finding global optima in high-dimensional problems is extremely challenging, since the number of evaluations required to explore the design space increases exponentially with its dimensionality (the so-called "curse of dimensionality"). Several authors have faced this problem. For example, the optimisation algorithm developed by Xing [4], who optimised the location of locators for an inner hood assembly, took 1,687 h to ensure the global best solution. Aderiani et al. [5] applied a new method to optimise the fixture layout to two simple single-station cases, taking 110 h and 160 h, respectively. Sinha et al. [6] proposed a Deep Learning-based methodology to aid multiple root cause analysis for an assembly process; to train their Deep Neural Network (DNN), they conducted 9 runs of 10,000 FEM simulations by varying the positions of only 5 clamps. Other approaches implement the Response Surface Methodology (RSM), which, trained on a pre-existing dataset, aims at obtaining a model that can be deployed to solve the optimisation problem [7]. Unfortunately, RSM approaches are purely data-driven and are regarded as "black-box" models, in the sense that they can find complex non-linear patterns within the tested/trained cases, but they are unable to explain the cause-effect mechanisms between the variables or to generalise outside the training dataset.

Recently, Reduced-Order Models (ROMs) have been applied to solve computationally intense problems, such as fluid-structure interaction [8], computational fluid dynamics [9] and structural dynamics [10]. ROM methods are attractive since they allow reducing the dimensionality of the model. ROM techniques compute off-line the solutions of several complete Full-Order Models (FOMs) and extract the modes that best describe the solution of the full problem. Therefore, differently from RSM, ROM techniques exploit the known physical behaviour represented by the modes. The number of such modes determines the ratio between accuracy and computational time. Finding the right balance between accuracy and computational efficiency is an open topic and will be discussed in this paper. This paper proposes a novel methodology to accelerate the transition towards real-time physics-based variation simulation of assembly systems with compliant sheet-metal parts. This is the first time that a ROM approach is developed for VS using both intrusive and non-intrusive techniques.


The novelty of the paper is twofold: (1) the implementation of a ROM approach to enable real-time VS of compliant sheet-metal parts; (2) the integration of the ROM with the Active Set Method for contact modelling, thus avoiding part-to-part penetration. The paper extends the work of Lindau et al. [11], who proposed to combine the method of influence coefficients [12] with a simplified contact search algorithm; however, that method was limited to triangular mesh elements and node-to-node contact modelling. The approach proposed in our paper goes beyond the state-of-the-art since it is independent of the mesh density and implements a node-to-surface contact model. The remainder of the paper is organised as follows: Sect. 2 describes the problem formulation; Sect. 3 presents the proposed methodology; Sect. 4 presents the case study along with results and discussion; Sect. 5 concludes the paper with future opportunities.

2 Problem Formulation

2.1 Representation of the VS Model

A typical VS model conceptually involves finding the relationship between the input parameters, μ, and the output performance indicators, u, under specific design constraints, T. Input parameters define the design space and may be related to both the product (e.g., shape errors) and the process (e.g., positioning errors, shape and position of clamps, etc.). The mechanistic model for variation propagation can be conceptually represented as in Eq. (1), where f embeds the physics-based model to simulate the compliancy of the sheet-metal parts and to avoid part-to-part penetration.

u = f(μ),   s.t.: μ ⊆ Tμ and u ⊆ Tu     (1)

2.2 Physics-Based Model and Computational Challenges

The physics-based model is based on a FEM kernel, and the following modelling assumptions are made: (i) points which are candidates to come into contact are computed once on the un-deformed structure, according to the node-to-surface search method [13], shown in Fig. 1(a); and (ii) frictionless contact between mating surfaces. At each i-th iteration, the displacements of all potential points belonging to the slave part are constrained to those of the master part as in Eq. (2) (see also Fig. 1(b)), where u_s^i and u_s−m^i are the displacements of the slave vertex P_s^i and of its projection on the master part, P_s−m^i, respectively; N_c is the normal vector of the master element, and g_init is the initial gap of the contact pair. The constitutive non-linear equations are defined in Eq. (3), where u_FOM is the primary variable (both master and slave), F_FOM is the load vector, and K_FOM is the stiffness matrix.

g_init = (P_s^i − P_s−m^i) · N_c ,   g_init + N_c · (u_s^i − u_s−m^i) ≥ 0     (2)

K_FOM(u_FOM, μ) · u_FOM(μ) = F_FOM(u_FOM, μ)     (3)


Fig. 1. Representation of the part-to-part contact model.

Equations (2–3) constitute the Full-Order Model (FOM). In this paper, the Penalty method has been implemented to enforce the conditions in Eq. (2). Other methods (such as Lagrange multipliers) are possible but, for the sake of demonstrating the methodology, the formulation has been limited to the Penalty method. The solution u_FOM of Eq. (3) is obtained by the Active Set Method: the model is iteratively solved by activating the contact pairs only where part-to-part penetration (i.e., negative gaps) occurred in the previous iteration. In practice, the active pairs are pushed apart by enabling the penalty stiffness, k_p (see Fig. 1(c)). The solution converges only if the gaps of all the active contact pairs are lower than a pre-set gap tolerance, ε_gap, and all pairs are in compression (negative load). Since the Penalty method approximates the conditions in Eq. (2), the accuracy of the solution is highly influenced by the choice of the penalty stiffness itself; the case study will show the sensitivity to this choice. Since VS-based synthesis involves a large set of input parameters and also requires the stiffness matrix itself to be recomputed (due, for example, to variations in material properties), the solution of Eqs. (2–3) would require a prohibitive amount of time. This paper aims at providing accurate solutions of the VS model in Eqs. (2–3) with a significant reduction in computational time.
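The following Python sketch gives a drastically simplified, node-to-node version of the penalty-based Active Set iteration described above (the paper uses a node-to-surface model); the data layout, with one scalar gap per contact pair, is purely illustrative.

```python
import numpy as np

def solve_contact(K, F, pairs, k_p, eps_gap, max_iter=50):
    """Penalty-regularised Active Set loop (sketch of Eqs. 2-3).
    K, F  : stiffness matrix and load vector of the (linearised) FOM
    pairs : list of (slave_dof, master_dof, g_init); each pair is reduced to
            a single scalar gap g = g_init + u_slave - u_master
    """
    active = set()
    u = np.zeros(len(F))
    for _ in range(max_iter):
        K_it, F_it = K.copy(), F.copy()
        for idx in active:                   # penalise active (penetrating) pairs
            s, m, g0 = pairs[idx]
            K_it[s, s] += k_p; K_it[m, m] += k_p
            K_it[s, m] -= k_p; K_it[m, s] -= k_p
            F_it[s] -= k_p * g0
            F_it[m] += k_p * g0
        u = np.linalg.solve(K_it, F_it)

        gaps = np.array([g0 + u[s] - u[m] for s, m, g0 in pairs])
        new_active = set(np.where(gaps < 0.0)[0].tolist())
        # Simplified convergence check: active set unchanged and gap tolerance met
        if new_active == active and np.all(gaps[sorted(active)] > -eps_gap):
            break
        active = new_active
    return u
```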

3 Proposed Methodology

The methodology hinges on a ROM approach and combines both intrusive [14, 15] and non-intrusive [8, 10] methods. Intrusive methods act directly on the constitutive equations defined in Eq. (3) and aim at reducing the time spent to perform each iteration of the non-linear set of equations. Conversely, non-intrusive techniques build a surrogate model in a hybrid space, and the solution for any un-tested configuration of the input parameters is obtained by interpolating the surrogate model. As such, non-intrusive techniques do not require modifications of the constitutive equations, as opposed to intrusive techniques. Since the two approaches have pros and cons, this paper has implemented both to explore their full potential. The details of the methodology are shown in Fig. 2.


The implementation steps and hurdles of the two methods are summarised in Table 1. It is worth noting that, while the non-intrusive ROM generates a surrogate model of u, the intrusive ROM needs to re-run the physics-based kernel, and the constitutive equations must therefore be projected at each iteration.

Table 1. Main features of the implemented ROM methods.

Fig. 2. Proposed methodology for real-time physics-based variation simulation.


4 Case Study

4.1 Implementation

The implementation of the proposed methodology uses the Proper Orthogonal Decomposition (POD) [9, 14–16], since it has proven to be advantageous in terms of ease of implementation, reduction of computational cost, and accuracy of results. POD methods rely on the SVD (Singular Value Decomposition) of the snapshot matrix, S_snap = U Σ V^T, and extract the reduced basis by collecting the first R left singular vectors, Ψ = U(:, 1:R), namely the POD modes of the system. Its intrusive version [9, 15] is generally called POD-Galerkin, taking its name from the projection. The non-intrusive version, called POD-RBF [9, 16], uses Radial Basis Functions (RBF) to interpolate the approximate solution in the hybrid space. The methodology has been coded in MATLAB® R2020b, and the adopted physics-based simulations have been run in the Variation Response Method (VRM) toolkit [17, 18]. All calculations have been performed on a laptop with 12 GB of RAM and a quad-core CPU operating at a Max Turbo Frequency of 3.60 GHz.

4.2 Description of the Case Study: Aircraft Vertical Stabilizer

The methodology has been tested on the vertical stabiliser of a commercial aircraft, shown in Fig. 3. The assembly comprises 2 skins (left- and right-handed), 1 rib, 14 clips, and 9 rib posts. All components are made from aluminium, with a Young's modulus of 70 GPa and a Poisson's ratio of 0.3. The model has been discretized with 20,497 shell elements (Fig. 3(b)), resulting in N_DOF = 128,644.
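A compact Python sketch of the POD construction and of the non-intrusive POD-RBF evaluation described in Sect. 4.1 is given below; the matrix sizes and the random stand-in snapshot data are illustrative only, and the intrusive POD-Galerkin projection is indicated in the closing comment.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative sizes only (the case study has N_DOF = 128,644 and N_S = 640).
n_dof, n_snap, n_par, R = 2000, 64, 15, 10
mu_samples = np.random.rand(n_snap, n_par)      # sampled parameter vectors
S_snap = np.random.rand(n_dof, n_snap)          # stand-in for off-line FOM solutions

# POD: reduced basis = first R left singular vectors of the snapshot matrix
U, s, Vt = np.linalg.svd(S_snap, full_matrices=False)
Psi = U[:, :R]                                  # reduced basis (n_dof x R)

# Non-intrusive ROM (POD-RBF): interpolate the reduced coordinates over the
# parameter space, then map back to the full field through Psi.
q_train = (Psi.T @ S_snap).T                    # (n_snap x R) reduced coordinates
rbf = RBFInterpolator(mu_samples, q_train)

def u_rom(mu):
    """Approximate displacement field for an un-tested parameter vector mu."""
    return Psi @ rbf(np.atleast_2d(mu))[0]

# The intrusive ROM (POD-Galerkin) would instead project the operators at each
# iteration:  K_rom = Psi.T @ K_fom @ Psi;  F_rom = Psi.T @ F_fom.
```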

Fig. 3. Vertical stabilizer. (a) CAD geometry and (b) mesh of the selected assembly.

The assembly process is modelled with three consecutive stages (shown in Fig. 4 for a given instance of the input process parameters). Stage (1): the incoming skins are deformable and subject to form errors; the Morphing Mesh Procedure (MMP), implemented in [17], has been used to emulate the form errors. It is worth noting that the methodology works regardless, and scanning data can also be fed to the model.


Fig. 4. Deformation field (in mm) during the 3 consecutive assembly stages. Deformation has been magnified 5 times. The colour code represents the y displacements in mm.

Stage (2): positioning of the rib, clips, and rib posts, which are assumed rigid. Stage (3): positioning of the skin panels and contact simulation. The output performance indicators are the deformation field (along the x, y, and z axes) for each node of the mesh model.

Table 2. Definition of the input parameters.

The parameter space comprises 15 parameters (Table 2). Two sampling strategies have been used, uniform and gaussian, both with N_S = 640. The snapshots were generated by setting ε_gap = 0.6 mm. The results of the model reduction are then tested against 60 new instances of the process parameters, generated with uniform random sampling. The accuracy is quantified by 3 indicators: (1) the Mean Percentage Error (MPE) and (2) the Pearson's correlation coefficient, computed between the FOM and ROM solutions over all the 60 new instances, and (3) the CPU time ratio (i.e., time spent by the ROM compared to the FOM), which measures the computational efficiency.
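The sampling setup can be sketched as follows; the parameter bounds are hypothetical (the real ranges are defined in Table 2, whose content is not reproduced here), and the MPE expression shown is only one possible definition, since the paper does not give its exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n_par, n_snap, n_test = 15, 640, 60
lo, hi = -1.0, 1.0                    # hypothetical, normalised parameter bounds

mu_uniform  = rng.uniform(lo, hi, size=(n_snap, n_par))
mu_gaussian = np.clip(rng.normal(0.0, (hi - lo) / 6.0, size=(n_snap, n_par)), lo, hi)
mu_test     = rng.uniform(lo, hi, size=(n_test, n_par))   # 60 validation instances

def mpe(u_rom, u_fom):
    """Mean percentage error between ROM and FOM fields (one possible definition)."""
    return 100.0 * np.mean(np.abs(u_rom - u_fom)) / np.mean(np.abs(u_fom))
```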


4.3 Results

Preliminary tests showed that the accuracy of the intrusive method is strongly influenced by the penalty stiffness (Fig. 5). It was found that, when the penalty is relatively low (approx. 10^2 N/mm), the solution converges without respecting the gap tolerance, while it fails to converge for penalty stiffnesses above 10^7 N/mm. Only for k_p = [10^4; 10^6] N/mm is there good accuracy (MPE below 0.3%) with a significant reduction in computational time – up to 50% saving. The results presented hereinafter have been generated by setting the penalty stiffness k_p = 10^6 N/mm. Figure 6 shows the results of the sensitivity study on the number of modes against the sampling strategy. While the general trend is that the MPE tends to decrease exponentially with the increasing number of modes, better accuracy is reached with the uniform sampling. This is explained by the fact that, compared to the gaussian sampling, the uniform sampling allows scanning the entire parameter space. The next set of results has been generated using uniform sampling.

Fig. 5. Sensitivity to penalty stiffness for the POD-Galerkin method.

Fig. 6. Sensitivity to no. of modes with both uniform and gaussian sampling for POD-Galerkin.


Figure 7 depicts the contour plots of the displacements and errors along the y-axis of the right skin for 2 different parameter instances, while Fig. 8(a) shows the results in terms of errors and pattern reproducibility via the Pearson's correlation coefficient. The results of the POD-Galerkin with fewer than 50 modes are not shown, since the reduced model exhibited convergence problems. Conversely, the POD-RBF error reaches a plateau just after 10 modes, indicating that the reduced basis does not have enough information to represent the entire variability in the field space. A further reduction in the error could be achieved by increasing the number of sampled points in the parameter space. It is therefore clear that the intrusive method, with the same number of modes and sampled points, is more accurate than the non-intrusive counterpart.

Fig. 7. Comparison between the FOM, POD-Galerkin and POD-RBF. The colour code represents the displacements (mm) and errors (mm) along y-axis of the right skin for two instances of parameters.

Fig. 8. Comparison between the POD-Galerkin and POD-RBF in terms of accuracy and computational efficiency.


With regard to the computational efficiency, Fig. 8(b) shows the CPU time ratio for the major steps of the POD-Galerkin. It is clear that the forward projection is the bottleneck of the procedure, and the CPU time ratio increases with the increasing number of modes. A summary of the findings is given in Table 3.

Table 3. Summary of the main findings in terms of accuracy and computational efficiency.

| Model | Computational time | CPU time ratio | Mean Percentage Error |
|---|---|---|---|
| FOM | 3,000 s | – | – |
| POD-Galerkin, 50 modes | 830 s | 27.7% | 8.7% |
| POD-Galerkin, 450 modes | 1,500 s | 50% | 1.7% |
| POD-RBF, 5 modes | 2.5 s | 0.09% | 16.7% |
| POD-RBF, 50 modes | 2.5 s | 0.09% | 10.6% |

5 Conclusions

This paper proposed a novel methodology to accelerate the transition towards real-time physics-based variation simulation of assembly systems with compliant sheet-metal parts. Results have shown that the accuracy of the proposed Reduced-Order Model for variation simulation strictly depends on the number of input parameters and sampled points, as well as on the sampling strategy and the number of reduced modes. Further, although intrusive methods are much more accurate than their non-intrusive counterparts, since they exploit the physical knowledge in the reduced space, they are limited to a reduction of computational time of only up to about 4x compared to Full-Order Models (with an MPE just below 10%). Conversely, non-intrusive methods can reach reductions as high as 1000x; however, their accuracy is dictated by the number of sampled points. This paper represents a first attempt to bridge the gap between advanced CAE simulations and VS models with the final aim of generating simulation data in real-time. This research has opened interesting new avenues in the field of variation simulation and dimensional/quality management. Opportunities for hybrid approaches based on ROM and physics-driven Machine Learning will be explored in future work.

References

1. Sadeghi Tabar, R., Lindkvist, L., Wärmefjord, K., Söderberg, R.: Efficient joining sequence variation analysis of stochastic batch assemblies. J. Comput. Inf. Sci. Eng., 1–10 (2022). https://doi.org/10.1115/1.4054000
2. Franciosa, P., Gerbino, S., Ceglarek, D.: Fixture capability optimisation for early-stage design of assembly system with compliant parts using nested polynomial chaos expansion. Procedia CIRP 41 (2016). https://doi.org/10.1016/j.procir.2015.12.101


3. Zheng, H., Litwa, F., Bohn, M., Paetzold, K.: Tolerance optimization for sheet metal parts based on joining simulation. Procedia CIRP 100 (2021). https://doi.org/10.1016/j.procir.2021.05.127
4. Xing, Y.F.: Fixture layout design of sheet metal parts based on global optimization algorithms. J. Manuf. Sci. Eng. Trans. ASME 139(10) (2017). https://doi.org/10.1115/1.4037106
5. Rezaei Aderiani, A., Wärmefjord, K., Söderberg, R., Lindkvist, L., Lindau, B.: Optimal design of fixture layouts for compliant sheet metal assemblies. Int. J. Adv. Manuf. Technol. 110(7–8), 2181–2201 (2020). https://doi.org/10.1007/s00170-020-05954-y
6. Sinha, S., Glorieux, E., Franciosa, P., Ceglarek, D.: 3D convolutional neural networks to estimate assembly process parameters using 3D point-clouds. Multimodal Sens. Technol. Appl. 11059, 89–101 (2019). https://doi.org/10.1117/12.2526062
7. Gerbino, S., Franciosa, P., Patalano, S.: Parametric variational analysis of compliant sheet metal assemblies with shell elements. Procedia CIRP 33 (2015). https://doi.org/10.1016/j.procir.2015.06.077
8. Xiao, D., Yang, P., Fang, F., Xiang, J., Pain, C.C., Navon, I.M.: Non-intrusive reduced order modelling of fluid-structure interactions. Comput. Methods Appl. Mech. Eng. 303 (2016). https://doi.org/10.1016/j.cma.2015.12.029
9. Georgaka, S., Stabile, G., Star, K., Rozza, G., Bluck, M.J.: A hybrid reduced order method for modelling turbulent heat transfer problems. Comput. Fluids 208 (2020). https://doi.org/10.1016/j.compfluid.2020.104615
10. Mahdiabadi, M.K., Tiso, P., Brandt, A., Rixen, D.J.: A non-intrusive model-order reduction of geometrically nonlinear structural dynamics using modal derivatives. Mech. Syst. Signal Process. 147 (2021). https://doi.org/10.1016/j.ymssp.2020.107126
11. Lindau, B., Lorin, S., Lindkvist, L., Soderberg, R.: Efficient contact modeling in nonrigid variation simulation. J. Comput. Inf. Sci. Eng. 16(1) (2016). https://doi.org/10.1115/1.4032077
12. Liu, S.C., Hu, S.J.: Variation simulation for deformable sheet metal assemblies using finite element methods. J. Manuf. Sci. Eng. Trans. ASME 119(3) (1997). https://doi.org/10.1115/1.2831115
13. Wriggers, P.: Computational contact mechanics. Comput. Mech. 32(1–2) (2003). https://doi.org/10.1007/s00466-003-0472-x
14. Chinesta, F., Huerta, A., Rozza, G., Willcox, K.: Model reduction methods. In: Encyclopedia of Computational Mechanics, 2nd edn. (2017). https://doi.org/10.1002/9781119176817.ecm2110
15. Pfaller, M.R., Varona, M.C., Lang, J., Bertoglio, C., Wall, W.A.: Using parametric model order reduction for inverse analysis of large nonlinear cardiac simulations. Int. J. Numer. Methods Biomed. Eng. 36(4) (2020). https://doi.org/10.1002/cnm.3320
16. Buljak, V.: Inverse Analyses with Model Reduction: Proper Orthogonal Decomposition in Structural Mechanics. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-22703-5
17. Franciosa, P., Ceglarek, D.: VRM simulation toolkit (2016). http://www2.warwick.ac.uk/fac/sci/wmg/research/manufacturing/downloads/
18. Franciosa, P., Palit, A., Gerbino, S., Ceglarek, D.: A novel hybrid shell element formulation (QUAD+ and TRIA+): a benchmarking and comparative study. Finite Elements Anal. Des. 166 (2019). https://doi.org/10.1016/j.finel.2019.103319

Design and Comparison of High Performance Sailing Hulls

Antonio Mancuso1(B), Antonio Saporito1, and Davide Tumino2

1 Dipartimento di Ingegneria, Università degli Studi di Palermo, 90128 Palermo, Italy
[email protected]
2 Facoltà di Ingegneria e Architettura, Università degli Studi di Enna Kore, 94100 Enna, Italy

Abstract. The design of a sailing yacht is mostly based on traditional trial-and-error design techniques, which take time and require iterative corrections to reach the final result. Moreover, several design teams with different and complementary expertise are required. If the standpoint is that of the design of a pleasure yacht, most of the activities can be solved through the related standards, while, concerning racing yachts, deeper investigations become mandatory, and the intensive application of digital design instruments is now common practice in modern yacht design. In this paper, a parametric procedure has been set up aimed at designing and numerically comparing the performance of sailing hulls. The hull shape design follows the classical approach based on the definition of the main curves frame and the generation of a parametric surface. Preliminary information about the hull resistance can be quickly obtained with a dedicated module of the software. Once a satisfactory shape has been modeled, detailed investigations can be automatically executed with commercial fluid dynamics software. The procedure has been successfully applied to the design of a new boat belonging to a specific class, called 1001VELAcup R3, a two-person racing dinghy designed according to a box rule with limitations on materials, dimensions and total sail area. The availability of experimental data suggested a numerical investigation of the influence of the hull geometry on the sailing performance.

Keywords: CAD modeling · Yacht design · Numerical investigation

1 Introduction

The design of a sailing yacht, according to [1], follows the well-known spiral design approach, which takes time and requires iterative corrections to reach the final result. Moreover, several design teams with different and complementary expertise are required: structural, aerodynamic, fluid dynamic and system engineering are the main areas involved in the complete design process. If the standpoint is that of the design of a pleasure yacht, most of the previous activities can be solved through the related standards. For instance, the structural design of a sailing yacht is standardized by international rules [2], returning the number and the dimensions of the internal reinforcements of the hull by solving parametric equations dependent on geometric characteristics (e.g. length of waterline, maximum beam, draft).

1 Introduction The design of a sailing yacht, according to [1], follows the well-known spiral design approach which takes time and requires iterative corrections to reach the final result. Moreover, several design teams with different and complementary expertise are required. So, structural, aerodynamic, fluid dynamic and system engineering are the main areas involved in the complete design process. If the standpoint is that of the designs of a pleasure yacht, most of the previous activities can be solved through the related standards. For instance, the structural design of a sailing yacht is standardized by international rules [2], returning the number and the dimension of the internal reinforcements of the hull by solving parametric equations dependent on geometric characteristics (e.g. length of © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 559–568, 2023. https://doi.org/10.1007/978-3-031-15928-2_49

560

A. Mancuso et al.

The same is true for the sailplane and appendages, since pleasure yachts are expected to sail in any weather and at different sailing angles, giving rise to a so-called "all-round" yacht [3]. Concerning racing yachts, instead, deeper investigations become mandatory. Undoubtedly, the America's Cup gave a strong push in this direction, and many authors have addressed their research to the intensive application of digital design instruments from different points of view. For instance, in [4] the authors focused on the hull shape, in [5] the appendages have been numerically optimized, while in [6] the sailplane has been investigated. However, significant advances in sailing hull design have also been made in other sailing classes, focusing research activities on the smallest sailing dinghy, the Optimist [7], or on the Olympic dinghy Laser [8]. In [9], the authors numerically investigate foil sections with an emphasis on the sailing dinghies 420, 470 and Optimist. These numerical investigations can only be carried out with dedicated software. In any case, the core of the procedure is represented by the CAD model, which can be shared or converted for structural or fluid dynamic purposes. Although the hull shape can be modeled with just one parametric surface (generally sweeping curves along trajectories), a smooth fairing requires an iterative adjustment of the curves frame until satisfactory results are achieved. Moreover, by tradition, the hull design is based upon the definition of a set of parameters, say the length of the waterline, the beam at the waterline, the canoe body draft and some dimensionless coefficients, driving the designer during the sketch of the hull curves. Once a candidate has been obtained, numerical analyses can be executed, aimed at validating the related performance [10] from several standpoints (e.g. hydrostatic, hydrodynamic).

In this paper, a parametric procedure has been set up aimed at designing and numerically comparing the performance of sailing hulls. The related in-house software (written in Visual Basic for Excel) is structured into two main modules. The first one is the CAD modeler, making use of rational Bezier curves for the definition of the hull shape. It interacts with the commercial software CREO (from PTC) for surface refinement and geometric analysis (e.g. curvature, volumes, dimensionless coefficients). The second module first manages the previous data to quickly obtain the hydrodynamic performance with regression formulae [11] and then prepares the input files for the commercial software Ansys, if a deeper investigation via Computational Fluid Dynamics (CFD) is advisable. The paper deals with a specific class, called 1001VELAcup R3, a two-person racing dinghy designed according to a box rule with limitations on materials, dimensions and total sail area. In Sect. 2, a brief description of these boats and of the available experimental data is outlined. In Sect. 3 the proposed design approach is described, while Sect. 4 deals with the obtained results.

2 High-Performance Sailing Dinghy

1001VELAcup R3 skiffs (Sail Keep It Flat and Fast) are light-weight, high-performance small sailing boats designed and built by students of European universities, which race yearly in Italy. The event is open to all universities, both Italian and international, which agree to compete with each other on a didactic, engineering, technical and sporting level. The starting point is the class rule, studied to offer wide room for the design work (while respecting some limitations) and to permit all the universities to compete with equally equipped boats.


Materials and dimensions represent the most important key points of the rules. The whole system of hull, deck and wings must be made with a natural-material content of not less than 70% by weight. The main dimensional limits are the length overall (4.6 m), the maximum beam (2.1 m) and the total sail area (33 square meters, including main, jib and gennaker). Being a box-ruled competition, the hull shape, internal frame and deck layout are designed by each team according to their knowledge and expertise. Figure 1 shows a picture of the skiff LED (University of Palermo) during the regatta.

Fig. 1. The skiff LED during the 2021 regatta in Palermo.

Since 2007, the first edition of the competition, more than 30 boats from 12 universities have been launched. According to the class rules, the regatta starts if the wind is in the range 2–10 m/s, so these high-performance skiffs must be able to sail not only in the critical Froude range Fr = 0.4 to Fr = 0.6, but also above and below it. This is the reason why very different hull shapes, mostly in terms of hard chine versus round bilge, have been designed by the teams. Looking at the fleet, a general preference toward hard chine hulls can be observed with respect to round bilge shapes. Generally speaking, round bilge hulls sail better at lower Froude numbers, while hard chine hulls sail better at higher ones. In 2017, 8 boats of the class took part in the Midwinter Indoor Race at the towing tank of the University of Naples Federico II. Several tests in calm water and regular waves, with and without appendages, were performed at carriage speeds from 1 to 3 m/s. The measurements of resistance, dynamic sinkage, and trim (in calm water) were also recorded. In Fig. 2, the recorded resistance of the bare hull in calm water is plotted for each boat. The legend shows the boat names according to the final score of the race, from the 1st (UniPalermo) to the 8th (PoliTorino2). Table 1 shows the same data as Fig. 2 for clarity; the lowest resistances at the three tested velocities are highlighted in bold. As can be seen, the two best-ranked boats were "LED" from the University of Palermo and "TryAgain", managed by the University of Naples but designed by the University of Roma Tre team. These boats gave evidence of a lower motion resistance with respect to the other competitors, although they are very different in the main section shape: in fact, LED is a round bilge hull while TryAgain is a hard-chine one.


3

Fig. 2. Resistance curves recorded during the 1st Midwinter Indoor Race in 2017.

Table 1. Bare hull resistance [N].

| Boat speed [m/s] | Palermo | Napoli | Padova1 | Milano | Torino1 | Padova2 | Trieste | Torino2 |
|---|---|---|---|---|---|---|---|---|
| 1 | **7.0** | 7.4 | 7.3 | 7.6 | 7.5 | 7.7 | 9.2 | 8.0 |
| 2 | **33.5** | 37.6 | 34.4 | 34.6 | 35.3 | 37.4 | 43.4 | 39.0 |
| 3 | 122.7 | **120.8** | 125.4 | 129.1 | 132.3 | 124.3 | 138.4 | 149.6 |

3 Design Approach

The applied approach builds on previous works. In [12] the CAD modeler has been described in depth and successfully tested by replicating two skiffs (LED and TryAgain). In [13], experimental and CFD results on the two previous boats have been assessed. In the present work, the integration of these two activities led to an instrument able to help the designer not only during the early stages of the design process but also later on, for a deeper investigation of one or more candidates. Figure 3 shows the design flow chart in a graphical way; the vertical dotted line represents the interface between the two modelers.


Fig. 3. Flow chart of the design approach.

The curves frame defining the hull shape is shown in Fig. 4 (left). The 9 key points are used to define the longitudinal curves (sheer, chine and keel) and the sections (fore, mid and aft). All these curves, based on the rational Bezier formulation, exhibit G0, G1 or G2 continuity depending on the geometrical constraints imposed by the user during the design process. For instance, in Fig. 4 (right) a screenshot of the Graphical User Interface of the CAD modeler shows the sections of a hard chine hull, for which G0 continuity is required from the aft to, at least, the mid section, while the fore section is G2 continuous.
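As an illustration of the rational Bezier formulation used for the hull curves, the following Python sketch evaluates a generic rational Bezier curve; the control points and weights are hypothetical and only meant to show how a heavier weight pulls a section towards a chine-like corner.

```python
import numpy as np
from math import comb

def rational_bezier(P, w, t):
    """Evaluate a rational Bezier curve at parameters t in [0, 1].
    P: (n+1, dim) control points, w: (n+1,) weights."""
    P = np.asarray(P, dtype=float)
    w = np.asarray(w, dtype=float)
    t = np.atleast_1d(np.asarray(t, dtype=float))
    n = len(P) - 1
    # Bernstein basis of degree n, shape (len(t), n+1)
    B = np.stack([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)], axis=1)
    num = (B * w) @ P                        # weighted blend of control points
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

# Hypothetical 2D section: the heavier middle weight pulls the curve towards the
# corner control point, approaching a hard-chine (G0-like) knuckle.
P = [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]]
w = [1.0, 4.0, 1.0]
section = rational_bezier(P, w, np.linspace(0.0, 1.0, 50))
```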

Fig. 4. Hull curves frame (left) and screenshot of the GUI of the modeler software (right).

Once a satisfactory curves frame has been designed, the software automatically generates an IGES file for each of the curves. A macro recorded in CREO Parametric (from PTC) loads the curves, creates the surface and writes a file with geometric information such as dimensions, areas, volumes, etc. In Fig. 5 (left) the curvature of the hull sections is shown, while on the right side the water plane and main section surfaces are hatched. The resistance can be estimated with the embedded Velocity Prediction Program (VPP) [11], giving quick feedback to the designer. If a candidate is worth a deeper investigation, numerical analyses can be set up with the CFD modeler.


Fig. 5. Hull sections curvature (left) and hatched water plane and main section surfaces (right).

The module is interfaced with Ansys ICEM-CFD via batch files and automatically generates a structured hexahedral mesh. The physical domain (Fig. 6, left) has an extension depending on a characteristic dimension (the length of the waterline). Due to symmetry, only half of the hull needs to be modeled. The most important parameters, such as the first cell height, the mesh grading or the number of elements, can also be set. The resulting mesh, shown on the right side of Fig. 6, is the input for the RANS (Reynolds Averaged Navier-Stokes) solver Fluent, which adopts the Volume Of Fluid (VOF) method to capture the free surface effect [13].

Fig. 6. Computational domain (left) and close up of the hexahedral mesh (right).

In order for the boat to be in equilibrium at a given speed, the actual sinkage and trim corrections can be estimated [14] with the following Eq. 1:

Δz = Fz / (ρ g Aw) ;    Δα = My / (ρ g Iy)     (1)

where Δz and Δα are the sinkage and trim corrections, Fz and My are the force and trim moment, ρ is the water density, Aw is the waterplane area and Iy is the corresponding moment of inertia about the y axis. This procedure requires several attempts before reaching equilibrium; two or three iterations are generally enough if an initial estimate of sinkage and trim is available.
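A minimal sketch of this equilibrium iteration is given below; run_cfd() is a hypothetical stand-in for the Fluent simulation, and the water density value is assumed.

```python
# Iterative attitude correction of Eq. (1): the residual heave force Fz and
# trim moment My from a CFD run are converted into sinkage/trim corrections.
RHO, G = 1000.0, 9.81        # assumed water density [kg/m^3] and gravity [m/s^2]

def update_attitude(z, alpha, Fz, My, Aw, Iy, relax=1.0):
    """Return the corrected sinkage z [m] and trim angle alpha [rad]."""
    dz = Fz / (RHO * G * Aw)         # Eq. (1): heave correction
    dalpha = My / (RHO * G * Iy)     # Eq. (1): trim correction
    return z + relax * dz, alpha + relax * dalpha

# Hypothetical outer loop; run_cfd() stands in for the Fluent VOF simulation and
# returns the unbalanced loads and hydrostatic properties at the current attitude.
# z, alpha = 0.0, 0.0
# for _ in range(3):               # two or three iterations are usually enough
#     Fz, My, Aw, Iy = run_cfd(z, alpha)
#     z, alpha = update_attitude(z, alpha, Fz, My, Aw, Iy)
```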

Design and Comparison of High Performance Sailing Hulls

565

boat speed of 3 m/s and trim angle of 2° (left) and the skin friction coefficient distribution over the hull surface (right). According to the designer’s experience and knowledge, the analysis of these (and other) results can be used to refine the hull geometry.

Fig. 7. Free surface mapping (left) and skin friction coefficient over the hull (right).
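As an illustration of the equilibrium procedure based on Eq. 1, the following Python sketch applies the sinkage and trim corrections iteratively between successive CFD evaluations. The run_cfd function is only a placeholder for the Fluent computation, and every numerical value is illustrative, not taken from the study.

```python
# Hedged sketch of the iterative sinkage/trim correction of Eq. 1.
# run_cfd() is a placeholder for the RANS computation; all numbers are illustrative.

RHO = 1000.0  # water density [kg/m^3] (fresh water assumed)
G = 9.81      # gravitational acceleration [m/s^2]

def run_cfd(sinkage, trim):
    """Placeholder for the Fluent run: returns the residual vertical force Fz [N]
    and trim moment My [N m] for the current attitude (illustrative linear model)."""
    return -5000.0 * sinkage - 800.0 * trim, -300.0 * sinkage - 2000.0 * trim

def trim_to_equilibrium(aw, iy, sinkage=0.0, trim=0.0, tol=1.0, max_iter=5):
    """Apply Eq. 1 repeatedly until the residual force and moment are small.
    aw: waterplane area [m^2]; iy: waterplane moment of inertia about y [m^4]."""
    for _ in range(max_iter):
        fz, my = run_cfd(sinkage, trim)
        if abs(fz) < tol and abs(my) < tol:
            break
        sinkage += fz / (RHO * G * aw)  # Delta z     = Fz / (rho g Aw)
        trim += my / (RHO * G * iy)     # Delta alpha = My / (rho g Iy)
    return sinkage, trim

print(trim_to_equilibrium(aw=2.5, iy=1.2, sinkage=0.01, trim=0.02))
```

With a reasonable initial estimate, the loop mirrors the two or three iterations mentioned in the text before the residuals become negligible.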

4 Case Study

The design approach described in the previous section has been applied to numerically compare three boats. The first two are LED and TryAgain, while the third, called LED_UP, is a newly designed boat having a shape and main coefficients similar to those of the round bilge LED but with a hard chine like TryAgain [12]. Figure 8 shows the resistance curves estimated with the VPP module of the software.

Fig. 8. Resistance curve of the three bare hulls (resistance [N] versus boat speed [m/s]).


The plot ranges from 1 to 3 m/s, according to the experimental data (see Fig. 2). An overlap between the resistance curves of LED and LED_UP can be observed, while the curve of TryAgain shows a different behavior. The breakeven point is located at approximately 2.6 m/s: at lower speeds she performs worse, at higher ones she performs better. These results are in accordance with the experimental data. For a deeper investigation, the effect of the trim has been numerically studied via CFD simulations. In fact, the longitudinal position of the crew may significantly change the location of the center of buoyancy and, consequently, the attitude of the hull. According to [15], a mesh convergence analysis has been carried out by applying the Least Square Root method over four grids, doubling the number of cells each time from the coarsest mesh (0.45 M) to the finest one (3.6 M). Considering the large number of simulations to be performed and the hardware availability, all the simulations have been carried out with the medium mesh of 0.9 M cells, giving a very low numerical uncertainty (~2%). Figures 9 and 10 show the estimated resistance at 2 and 3 m/s respectively, for different trim angles; in particular, five trim angles have been simulated. As a matter of fact, the experimental trim recorded during the regatta was 0° at 2 m/s and 1° at 3 m/s for both LED and TryAgain. At these trims, numerical estimations and experimental data are in agreement. From Fig. 9 one can note that the optimal trim angle is about 0°: at lower or higher trim angles, all the boats perform worse. On the contrary, at 3 m/s the optimal trim does not correspond to the experimental one, as shown in Fig. 10. The minimum resistance is obtained with a trim angle of 2.5°, 2° and 1.5° for LED, LED_UP and TryAgain respectively. Moreover, at these trim angles both LED and LED_UP perform better than TryAgain, unlike the prediction of the VPP in Fig. 8. Finally, it seems that the new boat behaves quite similarly to LED, at least in the examined range of velocity. Once the procedure has been successfully tested, the behavior at higher boat velocities will be examined in a future work.

Fig. 9. Resistance curve of the three bare hulls at 2 m/s (resistance [N] versus trim [°]).


Fig. 10. Resistance curve of the three bare hulls at 3 m/s (resistance [N] versus trim [°]).

5 Conclusions

In this paper, a procedure to design and numerically compare high-performance sailing hulls has been outlined. A design approach based on the traditional design techniques adopted by naval designers has been set up and successfully tested. The software, written in Visual Basic for Excel, helps the designer not only during the early stage of the design phase but also later, for a deeper investigation of one or more candidates. The CAD modeler represents the core of the procedure, being fully integrated with commercial CAD software and computational fluid dynamics software. Two existing boats and a new one have been modeled and numerically compared in terms of performance at different speeds. Although the new design shows good performance, it seems that the round bilge hull performs better over a wide range of boat speeds. In future works, the selected design will be investigated in a wider range of real sailing conditions in terms of velocity and trim angles, allowing the designer to better understand why the hulls differ in terms of performance.

Acknowledgements. The authors are grateful to Prof. Carlo Bertorello from the University of Naples Federico II for supplying experimental data and to the Italian branch office of the software house Ansys, which provided the academic license used to run the simulations presented in this paper.

References
1. Larsson, L., Eliasson, R.E., Orych, M.: Principles of Yacht Design. Intl Marine Pub (2013)
2. ISO 12215-5: Small craft — Hull construction and scantlings — Part 5
3. Garrett, R.: The Symmetry of Sailing. Adlard Coles Ltd., London (1987)


4. Harries, S., Abt, C., Hochkirch, K.: Hydrodynamic modeling of sailing yachts. In: The 15th Chesapeake Sailing Yacht Symposium, Annapolis (2001)
5. Nicolopoulos, D., Berton, E., Gouvernet, G., Jaque, A.: A hybrid numerical method to develop America's Cup yacht appendages. Sports Eng. 11, 177–185 (2009)
6. Viola, I.M.: Downwind sail aerodynamics: a CFD investigation with high grid resolution. Ocean Eng. 36, 974–984 (2009)
7. Andersson, A., et al.: The foiling optimist. In: The Proceedings of the 4th International Conference on Innovation in High Performance Sailing Yachts, Lorient, France, 28–30 June 2017 (2017)
8. Day, A.H.: Performance prediction for sailing dinghies. Ocean Eng. 136, 67–79 (2017). https://doi.org/10.1016/j.oceaneng.2017.02.025
9. Saporito, A., Persson, A., Larsson, L., Mancuso, A.: A new systematic series of foil sections with parallel sides. J. Mar. Sci. Eng. 8, 677 (2020). https://doi.org/10.3390/jmse8090677
10. Fossati, F.: Aero-Hydrodynamics and the Performance of Sailing Yachts. Adlard Coles Nautical, London (2009). ISBN 978-1-4081-1338-7
11. Keuning, J.A., Katgert, M.: A bare hull resistance prediction method derived from the results of the Delft systematic yacht hull series extended to higher speeds. In: International Conference on Innovation in High Performance Sailing Yachts, Lorient (2008)
12. Ingrassia, T., Mancuso, A., Nigrelli, V., Saporito, A., Tumino, D.: Parametric hull design with rational Bézier curves and estimation of performances. J. Mar. Sci. Eng. 9, 360 (2021). https://doi.org/10.3390/jmse9040360
13. Begovic, E., Bertorello, C., Mancuso, A., Saporito, A.: Sailing dinghy hydrodynamic resistance by experimental and numerical assessments. Ocean Eng. 214 (2020). https://doi.org/10.1016/j.oceaneng.2020.107458
14. Yang, C., Löhner, R., Noblesse, F., Huang, T.T.: Calculation of ship sinkage and trim using unstructured grids. In: European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS 2000, Barcelona, Spain (2000)
15. ITTC: Uncertainty analysis in CFD verification and validation methodology and procedures. In: International Towing Tank Conference (2017)

Real-Time Approximative Thermal Simulation for Wire Arc Additive Manufacturing

Thibaut Tezenas Du Montcel1, Nicolas Beraud1(B), Frédéric Vignat1, Jérôme Pailhès2, Philippe Marin1, and Franck Pourroy1

1 Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, 38000 Grenoble, France
{thibaut.tezenasdumontcel,nicolas.beraud,frederic.vignat,philippe.marin,franck.pourroy}@grenoble-inp.fr
2 Arts et Métiers Institute of Technology, Université de Bordeaux, CNRS, INRA, Bordeaux INP, HESAM Université, I2M UMR 5295, 33405 Talence, France

Abstract. Wire Arc Additive Manufacturing (WAAM) is a rising metal additive manufacturing technology. Having access to a real-time simulation of the WAAM process is an increasing challenge since it would make it possible to do live monitoring or to more efficiently adjust process parameters during the CAM operations. One major difficulty to overcome to create a real-time simulation is the importance of thermal conditions. Numerous works showed that thermal conditions of the manufactured part during the process have a direct impact on the quality and the geometry of the part. At the same time, it is a well-known fact that precise thermal simulations in complex and evolving conditions are much slower than real time. The proposed approach is based on the idea that defects happen only when the thermal conditions are far from their usual value. Under this assumption, a real-time approximative thermal simulation is sufficient to simulate the thermal side of the WAAM process. Such a simulation, which is a first step in simulating in real time the whole WAAM process, is proposed in this paper.

Keywords: Additive manufacturing · WAAM · Thermal simulation · Fast simulation

1 Introduction

Wire Arc Additive Manufacturing (WAAM) is a direct energy deposition process in which a metallic part is produced by depositing beads of weld metal fused, from a metallic wire, by an electrical arc (Fig. 1). WAAM allows high deposition rates [2], makes it possible to manufacture large parts in a theoretically unbounded workspace [3] and reduces production time [4], but it is well accepted that the quality of WAAM parts highly depends on their thermal history during the manufacturing process [4–6].
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 569–580, 2023. https://doi.org/10.1007/978-3-031-15928-2_50


Fig. 1. Illustration of WAAM (a) initial layer (b) multiple layers [1]

Thermal simulations of WAAM have been proposed, including fast thermal simulations [7,8]. However, in [7], the authors rely on a steady state that occurs only when welding long straight weld beads to reach real-time performance, while the simulation reported in [8] is still four times slower than real time on 16 cores during welding and reduces its number of time increments during cooling to approximately catch up with real time. This type of simulation cannot be used to simulate a full part trajectory, let alone for trajectory optimisation. The goal of this paper is to propose a simulator that can be used for this purpose. Chergui M. A. [6] and Beraud N. et al. [9] show that the temperature field near the melt pool is one of the key indicators of the future part quality. Moreover, in order to be used for trajectory evaluation, the calculation must be close to real time. To reach this goal, the phenomena inside the melt pool are not studied and the simulation is less accurate above the solidus temperature. This paper is organized in two main sections. First, the thermal simulation is detailed; in particular, the different approximations used to reach a real-time computational time are explained. The second section focuses on the results obtained with the proposed thermal simulator. Those results are compared with results obtained using a Finite Element Method based thermal simulator [6] and with measurements from two thermocouple sensors positioned close to a metal deposition during a real experiment.

2 Thermal Simulation

To build a thermal simulation, it is necessary to assess all the heat transfer mechanisms (conduction, convection, radiation), simulate the phase changes and assess any other inputs and outputs of energy. Each of those steps, and the approximations used to speed up their computation, is explained in the current section. The current paper focuses on an Al4043 aluminium alloy, which is a challenging alloy for thermal simulation due to its high conductivity, but the presented approach can be adapted to other metals. All the characteristics of this aluminium have been taken from Chergui M. A. [6].


Problem Definition. The objective is to answer temperature queries for all metallic elements at any time during the WAAM process. Those elements are all the surface elements of the substrate and of the part that is being created. The subset made of the substrate and the part will be noted S. It is considered to be surrounded by air and constituted by a single material, and it evolves over time with the successive welds that constitute the WAAM process. The 3D workspace Ws is voxellized. For each voxel, two pieces of information are kept: the energy it contains, in E, and whether it is a member of S, in Metal. Time is also discretized; the time step, which is adaptative, is noted Δt. Let x denote a voxel of Ws; the following relations hold:

$$x \in S \leftrightarrow Metal_x = 1, \qquad x \notin S \leftrightarrow Metal_x = 0$$

$$E_x(t + \Delta t) = E_x(t) + \Delta E_{x,conv}(t, t + \Delta t) + \Delta E_{x,cond}(t, t + \Delta t) + \Delta E_{x,rad}(t, t + \Delta t) + \Delta E_{x,input}(t, t + \Delta t) \tag{1}$$

Conversions Between Energy and Temperature. To link energy and temperature, the latent heat of fusion must be simulated. The idea is to use a constant Cp between solidus and liquidus that takes the latent heat into account, as in Eq. 2:

$$\int_{T_{solidus}}^{T_{liquidus}} C_p \, dT = L_f \tag{2}$$

The specific heat capacity Cp is known for every temperature in the metal constituting S. It is therefore possible to compute the energy contained in an element x using Eq. 3, with ρ(T) the density of the metal constituting S at temperature T:

$$E_x(T) = \int_0^T C_p(T) \, \rho(T) \, Volume_x \, dT \tag{3}$$

The exact transformation between temperature and energy is shown in blue on Fig. 2. In dotted orange, the transformation is linearized by segments, which can speed up the computations depending on the IT constraints and implementation choices. In Sect. 3, the results are presented with this linearization.

Energy Input. Einput(t, t + Δt) = U I η Δt, where U and I are respectively the welding voltage and current and η is the welding energetic efficiency. This energy input Einput is then distributed using a modified Goldak source [6,10]. Half of the input energy is considered to be transmitted directly in the electrical arc, while the other half is included in the newly deposited elements. Figure 3 displays a modified Goldak source. The number of newly deposited elements is computed depending on the WAAM process parameters and can change during the simulation


since the shape of the deposited bead depends on a large set of process parameters, among which the wire feed speed, the torch speed, the diameter of the wire and the energy input (which implicitly modifies the wire feed speed). In this simulation, the shape of the bead is considered constant and is determined using experimental data. Taking the shape modifications into consideration is out of the scope of this paper.

Fig. 2. T(E). Exact transformation in blue, linearized by segment in dotted orange.
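A minimal sketch of the segmented T(E) conversion of Fig. 2, built from Eqs. 2 and 3 under simplifying assumptions (constant density, a single equivalent Cp on the melting interval); the material data are illustrative and are not the Al4043 properties used in the paper.

```python
import numpy as np

# Illustrative material data (NOT the values of the paper).
RHO = 2700.0        # density [kg/m^3], assumed constant here for simplicity
CP_SOLID = 900.0    # specific heat below solidus [J/(kg K)]
CP_LIQUID = 1100.0  # specific heat above liquidus [J/(kg K)]
LF = 4.0e5          # latent heat of fusion [J/kg]
T_SOL, T_LIQ = 847.0, 905.0   # solidus / liquidus [K]
VOXEL_VOLUME = (1e-3) ** 3    # 1 mm voxel [m^3]

# Equivalent Cp on the melting interval: latent heat added on top of the
# solid-state Cp (one possible reading of Eq. 2).
CP_MELT = CP_SOLID + LF / (T_LIQ - T_SOL)

# Breakpoints of the piecewise-linear E(T) curve (Eq. 3 with constant density).
mass = RHO * VOXEL_VOLUME
temps = np.array([0.0, T_SOL, T_LIQ, 3000.0])
energies = np.array([
    0.0,
    mass * CP_SOLID * T_SOL,
    mass * (CP_SOLID * T_SOL + CP_MELT * (T_LIQ - T_SOL)),
    mass * (CP_SOLID * T_SOL + CP_MELT * (T_LIQ - T_SOL) + CP_LIQUID * (3000.0 - T_LIQ)),
])

def temperature_from_energy(e):
    """Piecewise-linear T(E) lookup (dotted orange curve of Fig. 2)."""
    return np.interp(e, energies, temps)

def energy_from_temperature(t):
    """Inverse piecewise-linear lookup E(T)."""
    return np.interp(t, temps, energies)

print(temperature_from_energy(energy_from_temperature(900.0)))  # ~900.0
```

The segment breakpoints would in practice be taken from the measured Cp(T) and ρ(T) curves of the alloy rather than from the constants assumed above.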

Fig. 3. Modified Goldak source and elements added during a deposition [6]
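The energy-input bookkeeping described above can be sketched in a few lines. The even split between the arc and the newly deposited elements follows the text; the spatial distribution of the modified Goldak source is deliberately omitted, and all numerical values are illustrative assumptions, not the process parameters of the paper.

```python
def input_energy(voltage, current, efficiency, dt):
    """E_input = U * I * eta * dt: energy delivered during one time step."""
    return voltage * current * efficiency * dt

def split_input_energy(e_input, n_new_elements):
    """Half of the input energy is associated with the electrical arc, the other
    half is shared among the newly deposited elements (Goldak shape omitted)."""
    e_arc = 0.5 * e_input
    e_per_new_element = 0.5 * e_input / max(n_new_elements, 1)
    return e_arc, e_per_new_element

# Illustrative values only.
e = input_energy(voltage=20.0, current=120.0, efficiency=0.8, dt=0.5)
print(split_input_energy(e, n_new_elements=8))
```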


Convection and Radiation. By definition, convection between a fluid f and a solid s follows Eq. 4, with h the heat transfer coefficient, Asf the heat exchange area, Ts the solid surface temperature and Tf the fluid temperature:

$$\frac{dE_s}{dt} = h \, A_{sf} \, (T_s - T_f) \tag{4}$$

In the studied WAAM process, argon is blown towards the welding area at high speed in order to protect the hot weld from oxidation. In the rest of the part, air is in contact with the part and an airstream created by the blown argon is noticeable. To speed up the computations, the fluid in contact with the part is considered to be air over the full part and, since there is an air stream, it is considered that Tair = 298.25 K. All those elements lead to Eq. 5:

$$\Delta E_{x,conv}(t, t + \Delta t) = h_{air} \, A_{xf} \, (T_x(t) - T_{air}) \, \Delta t \tag{5}$$

For radiation, the definition leads to Eq. 6, with σ the Stefan-Boltzmann constant, ε the emissivity of the surface Aa_rad, and F the view factor between surface a at temperature Ta and surface b at Tb. The sum of F over all objects the surface a radiates towards is 1.

$$\frac{dE_a}{dt} = \sigma \, \varepsilon \, F \, A_{a_{rad}} \, (T_a^4 - T_b^4) \tag{6}$$

To simplify the radiation computation, it is considered that the energy is fully radiated towards the outside of the workspace and that the surface it is radiating against, Srad, is always at Trad = 298.25 K. From our observations, the first of those approximations is almost true when dealing with very simple shapes with few sharp edges; the error becomes more noticeable with complex geometries. Those approximations lead to Eq. 7. Artificially modifying the temperature Tb to balance this strong approximation could improve the simulation of the radiation.

$$\Delta E_{x,rad}(t, t + \Delta t) = \sigma \, \varepsilon \, A_{x_{rad}} \, (T_x(t)^4 - T_{rad}^4) \, \Delta t \tag{7}$$

Since all surfaces of an element x that are in contact with a fluid are also all the surfaces of x in contact with transparent mediums, Axf = Ax_rad = Ax. Those surfaces are computed using Eq. 8, with A the surface of one face of a voxel and x_ijk the voxel whose relative voxel distance to x is [i, j, k]; for example, x_100 is the direct neighbour of voxel x in the increasing direction of the first dimension. There are six terms because this is the connectivity obtained when considering the faces of a voxel, energy transfers being computed through faces.

$$A_x(t) = A \, \big(6 - Metal_{x_{100}}(t) - Metal_{x_{-100}}(t) - Metal_{x_{010}}(t) - Metal_{x_{0-10}}(t) - Metal_{x_{001}}(t) - Metal_{x_{00-1}}(t)\big) \tag{8}$$
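The following NumPy sketch illustrates Eqs. 5, 7 and 8: the exposed area of each voxel is deduced from its six face neighbours and then used for the convective and radiative energy terms. The material constants and the voxel size are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

SIGMA = 5.670e-8        # Stefan-Boltzmann constant [W/(m^2 K^4)]
EPS = 0.3               # emissivity, illustrative assumption
H_AIR = 15.0            # convection coefficient [W/(m^2 K)], illustrative assumption
T_AIR = T_RAD = 298.25  # ambient and radiating-background temperature [K]
VOXEL = 1e-3            # voxel edge length [m], illustrative
A_FACE = VOXEL ** 2     # area of one voxel face, "A" in Eq. 8

def exposed_area(metal):
    """Eq. 8: exposed area = A * (6 - number of metal face neighbours), metal voxels only.
    np.roll wraps around at the array border, so a padding of non-metal voxels is assumed."""
    m = metal.astype(np.int8)
    neighbours = np.zeros_like(m)
    for axis in (0, 1, 2):
        for shift in (1, -1):
            neighbours += np.roll(m, shift, axis=axis)
    return A_FACE * (6 - neighbours) * m

def conv_rad_terms(temperature, metal, dt):
    """Eqs. 5 and 7: convective and radiative energy terms during dt
    (positive when the voxel is hotter than the ambient, as written in the equations)."""
    ax = exposed_area(metal)
    d_e_conv = H_AIR * ax * (temperature - T_AIR) * dt
    d_e_rad = SIGMA * EPS * ax * (temperature ** 4 - T_RAD ** 4) * dt
    return d_e_conv, d_e_rad

# Tiny demo: a 3x3x3 metal block at 800 K surrounded by air.
metal = np.zeros((5, 5, 5), dtype=bool)
metal[1:4, 1:4, 1:4] = True
temperature = np.where(metal, 800.0, T_AIR)
d_conv, d_rad = conv_rad_terms(temperature, metal, dt=1.0)
print(d_conv.sum(), d_rad.sum())
```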


Conduction as Gradient with Neighbouring Voxels. Conduction follows Fourier's law of thermal conduction, given in its integral form in Eq. 9, with Q the thermal heat inside a volume bounded by the surface s and λ the thermal conductivity:

$$\frac{\partial Q}{\partial t} = -\lambda \oint_s \nabla T \cdot \partial s \tag{9}$$

In the current case, on a discretized grid, it can be approximated using an explicit formulation as in Eq. 10:

$$\Delta E_{x,cond}(t, t + \Delta t) = -\lambda \, (6A - A_x(t)) \, \Delta t \, Metal_x(t) \cdot \Big( Metal_{x_{100}}(t)\,(T_x(t) - T_{x_{100}}(t)) + Metal_{x_{-100}}(t)\,(T_x(t) - T_{x_{-100}}(t)) + Metal_{x_{010}}(t)\,(T_x(t) - T_{x_{010}}(t)) + Metal_{x_{0-10}}(t)\,(T_x(t) - T_{x_{0-10}}(t)) + Metal_{x_{001}}(t)\,(T_x(t) - T_{x_{001}}(t)) + Metal_{x_{00-1}}(t)\,(T_x(t) - T_{x_{00-1}}(t)) \Big) \tag{10}$$
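A direct transcription of Eq. 10 could look as follows; the conductivity and face area are illustrative, and the metal and temperature arrays can be the same fields as in the previous sketch. This is the formulation whose stability constraint on Δt is discussed next.

```python
import numpy as np

LAMBDA = 160.0  # thermal conductivity [W/(m K)], illustrative value for an Al alloy
A_FACE = 1e-6   # face area of a 1 mm voxel [m^2], illustrative

def conduction_explicit(temperature, metal, dt):
    """Sketch of Eq. 10: exchange driven by temperature differences with the six
    metal face neighbours, counted only for metal voxels.
    np.roll wraps around, so a padding of non-metal voxels around the domain is assumed."""
    m = metal.astype(float)
    diff_sum = np.zeros_like(temperature)
    n_metal = np.zeros_like(temperature)
    for axis in (0, 1, 2):
        for shift in (1, -1):
            m_n = np.roll(m, shift, axis=axis)          # neighbour's Metal flag
            t_n = np.roll(temperature, shift, axis=axis)
            diff_sum += m_n * (temperature - t_n)
            n_metal += m_n
    contact_area = A_FACE * n_metal                      # corresponds to 6A - Ax(t)
    return -LAMBDA * contact_area * dt * m * diff_sum    # energy variation per voxel
```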

At this point, every step of a possible thermal simulator has been presented, and when all the energy terms are added as described in Eq. 1, this leads to a working explicit thermal simulator for the WAAM process. This conduction computation, however, is only valid with very small Δt. Figure 4 displays the computed temperature of an element near a weld bead deposition for multiple Δt. All computed temperatures are similar with Δt ≤ 0.0025 s. Larger time increments generate errors that can even, within a few iterations, reach 0 K (for Δt ≥ 0.005 s). This constraint makes the approach unsuitable for a fast simulation. A new approach to conduction is therefore presented in the next section.

Conduction as a Local Average. Calculating the conduction as presented before is the most time consuming part of the overall simulation of a time step. To speed up the process, we replace it by a phenomenological solution. Conduction tends to homogenize the temperature inside a solid at a speed correlated to λ; the idea is therefore to update the energy as in Eq. 1 but without the conduction term, and then to simulate the conduction as a local average over the energy. This hypothesis is tested and illustrated on a one-dimensional case: we compare the law of heat conduction and a local averaging of the energy for a 1-dimensional Goldak source moving in the increasing x direction (diffusivity = 1, T(x = −50 mm, ∀t) = T(x = 50 mm, ∀t) = 0). The asymmetry corresponds to the definition of the Goldak source. Figure 5 illustrates the closeness of both approaches on this simple case: the results obtained with the law of heat conduction are shown, for different timestamps, with continuous lines, while the approximations using local averaging are in dashed lines. We therefore apply this approach to our 3D simulation.


Fig. 4. Computed temperatures of an element near a weld bead deposition for multiple time increments.

Fig. 5. Comparison between a numerical solution of the heat equation (continuous lines) and a local averaging (dotted lines) at different times.


A local average in 3 dimensions is equivalent to a discrete centred convolution, noted ∗. Ki_s is a tensor of rank 3 and size [s, s, s], with s odd. Thus, E(t + Δt) is given by Eqs. 11 to 14:

$$E_{inter_x}(t) = E_x(t) + \Delta E_{x,conv}(t, t + \Delta t) + \Delta E_{x,rad}(t, t + \Delta t) + \Delta E_{x,input}(t, t + \Delta t) \tag{11}$$

$$E_{localAverage} = E_{inter}(t) * Ki_s \tag{12}$$

$$Metal_{localAverage} = Metal(t) * Ki_s \tag{13}$$

$$E_x(t + \Delta t) = \frac{E_{x,localAverage}}{Metal_{x,localAverage}} \tag{14}$$

The size s of the local average depends on λ, the conductivity of the metal used, and on Δt, the simulated time step. This parameter is set to maximise the fit between the proposed simulation and reference results. Figure 6 displays the computed temperature of an element near a weld bead deposition for multiple sizes of Ki, along with a reference computed using the explicit gradient conduction simulation. A Ki that is too small (Ki10) does not simulate enough conduction: the heat is transmitted too slowly to neighbouring elements, which leads to a delayed increase of the temperature. A Ki that is too large (Ki70) simulates too much conduction: the heat is transmitted too fast to distant elements, which leads to an underestimation of the temperature increase of the considered element.
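A minimal sketch of the local-average conduction of Eqs. 11 to 14, here implemented with SciPy's uniform_filter acting as a uniform kernel Ki_s; the kernel size s is the tuning parameter discussed above and all values are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def conduction_local_average(e_inter, metal, s):
    """Eqs. 12-14: normalized local average of the energy over an s*s*s
    neighbourhood, restricted to metal voxels (uniform kernel assumed)."""
    m = metal.astype(float)
    e_sum = uniform_filter(e_inter * m, size=s, mode="constant")  # ~ E_inter * Ki_s
    m_sum = uniform_filter(m, size=s, mode="constant")            # ~ Metal * Ki_s
    # uniform_filter returns window means; the 1/s^3 factors cancel in the ratio (Eq. 14).
    e_new = np.divide(e_sum, m_sum, out=np.zeros_like(e_sum), where=m_sum > 0)
    return e_new * m  # the energy field is only defined on metal voxels

# Tiny demo: a hot voxel inside a 5x5x5 metal block, smoothed with s = 3.
metal = np.zeros((7, 7, 7), dtype=bool)
metal[1:6, 1:6, 1:6] = True
energy = np.where(metal, 1.0, 0.0)
energy[3, 3, 3] = 10.0
print(conduction_local_average(energy, metal, s=3)[3, 3, 3])
```

Unlike the explicit formulation of Eq. 10, this update has no small-Δt stability requirement; the accuracy is instead controlled by the choice of s, as shown in Fig. 6.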

Fig. 6. Computed temperatures of an element near a weld bead deposition for multiple sizes of local average, compared to a reference temperature computed using the explicit gradient approach.

3 Results

In order to evaluate the current work, the simulator is compared against the results from Chergui M. A. [6], which were extensively used to tune the parameters of the presented simulator and which consist of both experimental and simulated values.

The Experimentation. The experimentation is precisely defined in [6]. A wall of 8 layers is built in aluminium AL4043, on a 250 mm * 250 mm * 5 mm substrate of AL5083 initially at 292 K, with a torch speed of 10 mm/s, a 2 s idle time between each layer, an effective input energy of 796.8 W and using a raster strategy. This experimental design is illustrated by Fig. 7.

Fig. 7. Experimental design [6]

The results obtained with the proposed fast simulator at the measuring points P1, P3 and P6 are presented in Figs. 8, 9 and 10. The results from the real thermocouples, in blue, are compared with the simulation of Chergui, M. A. [6], in dotted blue, and with the proposed simulation using conduction as a local average and a fixed time step of 1 s, in orange. As expected, the solution which simulates conduction as a local average leads to more error, especially during a couple of seconds after a close deposition. Nevertheless, the error remains lower than 50 K in all cases and, in particular below solidus, this stands as an acceptable range to distinguish between problematic and non-problematic thermal conditions in a WAAM process. The accuracy is judged acceptable for a first indicator during trajectory optimization.


Fig. 8. Comparison at P1 of results from the proposed simulator in orange with results from Chergui, M. A. [6]: experimental temperatures in blue, and simulation results in dotted blue

Fig. 9. Comparison at P3 of results from the proposed simulator in orange with results from Chergui, M. A. [6]: experimental temperatures in blue, and simulation results in dotted blue


Fig. 10. Comparison at P6 of results from the proposed simulator in orange with results from Chergui, M. A. [6]: experimental temperatures in blue, and simulation results in dotted blue

Computation Times. On a 250 * 250 * 54 grid, using Python/NumPy on a single laptop i5 core @ 1.70 GHz, the computation times for a single timestep of the method simulating conduction as a local average are presented in Table 1.

Table 1. Computation times of the proposed thermal simulation for a single timestep.
Surface state (Ax) | T(E)  | Convection | Radiation | Conduction | Total
41 ms              | 31 ms | 19 ms      | 50 ms     | 116 ms     | 257 ms

A fixed time step of 1 s, which is the one used for the previous accuracy validation, leads to a simulation 4 times faster than real time, which is, considering the difference in computing power, more than 60 times faster than the fast thermal WAAM simulation proposed by Michaleris et al. [8]. Reducing the time step would lead to an accuracy improvement that is unnecessary for trajectory optimization and would lower the real-time factor of the simulation; while the accuracy would still not reach that of a more standard thermal simulation, this option may be useful for other purposes.

4 Conclusion

The aim was to create a fast approximative thermal simulation in order to add thermal information to WAAM trajectory optimization. Starting from the heat transfer equations, we applied multiple simplifications in order to speed up the computation; in particular, we simulated the conduction using a local averaging. This led to a thermal simulation which is much faster than the regular, more accurate, thermal simulations.


This simulator, which is faster than real time, offers new possibilities for the WAAM process. One of those is real-time supervision of the process. Coupled with different trajectory options, such as a faster greedy trajectory and a safe trajectory, the supervisor would follow the greedy approach until it detects a potential heat issue; it would then order the WAAM process to fall back to the safe trajectory until this potential issue is resolved. Another possibility offered by the simulation is to preprocess a large number of trajectories, which could be used to automatically optimize a set of process parameters. This type of preprocessing already exists but is costly and is currently limited by the time factor of the simulations; it would be a clear gain for such preprocessing software to use a faster simulator.

Acknowledgment. The authors acknowledge the support of the French National Research Agency (ANR) under reference ANR-20-CE10-0012-01 KAM4AM.

References
1. Näsström, J., Brueckner, F., Kaplan, A.F.: Laser enhancement of wire arc additive manufacturing. J. Laser Appl. 31(2), 022307 (2019)
2. Liberini, M., et al.: Selection of optimal process parameters for wire arc additive manufacturing. Procedia CIRP 62, 470–474 (2017)
3. Derekar, K.S.: A review of wire arc additive manufacturing and advances in wire arc additive manufacturing of aluminum. Mater. Sci. Technol. 34(8), 895–916 (2018)
4. Wu, B., et al.: A review of the wire arc additive manufacturing of metals: properties, defects and quality improvement. J. Manuf. Process. 35, 127–139 (2018)
5. Zhao, Y., Jia, Y., Chen, S., Shi, J., Li, F.: Process planning strategy for wire-arc additive manufacturing: thermal behavior considerations. Addit. Manuf. 32, 100935 (2020)
6. Chergui, M.A.: Simulation based deposition strategies evaluation and optimization in wire arc additive manufacturing. Chem. Process Eng. (2021). Université Grenoble Alpes [2020]. NNT: 2021GRALI040. tel-03273221
7. Ding, J., et al.: Thermo-mechanical analysis of Wire and Arc Additive Layer Manufacturing process on large multi-layer parts. Comput. Mater. Sci. 50(12), 3315–3322 (2011)
8. Michaleris, P.: Modeling metal deposition in heat transfer analyses of additive manufacturing processes. Finite Elem. Anal. Des. 86, 51–60 (2014)
9. Béraud, N., Chergui, A., Limousin, M., Villeneuve, F., Vignat, F.: An indicator of porosity through simulation of melt pool volume in aluminum wire arc additive manufacturing. Mech. Ind. 23, 1 (2022)
10. Goldak, J., Chakravarti, A., Bibby, M.: A new finite element model for welding heat sources. Metall. Trans. B 15(2), 299–305 (1984)

Simulation Methods for Plant Layout Design-A Case Study

Mario Caterino1(B), Marta Rinaldi1, Maria Antonietta Turino1, Marcello Fera1, and Natalia Hartono2

1 University of Campania Luigi Vanvitelli, 81031 Aversa, Italy
[email protected]
2 University of Pelita Harapan, Tangerang 15811, Indonesia

Abstract. The aim of this paper is to propose a simulation study to assess the advantages deriving from associating plant layout (PL) modifications and the introduction of some key technologies of the fourth industrial revolution in a production and assembling environment of bus transportation vehicles. The software Simul8® was used to design and realize a simulation model relying on the information and data coming from a real working environment. Six experiments were carried out to test the model, mixing different PL configurations and technologies that were assumed to improve the working times of the different production and assembling phases. The results demonstrate the feasibility of the simulation model and its good accuracy in reproducing the behavior of the working environment. The simulation approach made it possible to identify feasible solutions for reducing the lead times and minimizing the issues related to the bottlenecks that characterize the actual working environment for each type of bus produced.

Keywords: Plant layout · Simulation · Production system

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 581–592, 2023. https://doi.org/10.1007/978-3-031-15928-2_51

1 Introduction

The layouts of a manufacturing system can be classified into four basic categories: product, process, fixed-position and cellular layout [1–3], according to the type of production. A fixed-position layout is typically used for large products that are hard to move, while a process layout is chosen when a large variety of products is produced, grouping similar facilities (machines) in the same zones. The product layout is organized based on the manufacturing operation sequences and is typically found in manufacturing systems with high production volumes and low product variety. Machines are grouped together in a cellular layout to process similar part families. The plant layout (PL) can thus be considered as the first and crucial step in designing a new manufacturing plant. It consists of the optimization of facility allocation, personnel, storage space and material handling equipment. The same is true for existing manufacturing plants. The optimization of the PL is known to have a significant effect on cost reduction, lead time reduction, work in process (WIP) improvement and productivity [1, 2]. The correct design of the PL allows better production performance to be reached by minimizing the transports within the plant,


eliminating the bottlenecks, maximizing the use of the cubic space in the factory area, and making effective use of the installed facilities. The increased necessity for manufacturing plants to quickly respond to changes in demand, production volume and product mix may often require the modification of production flows and layouts [1, 5–7]. The PL problem is known to be NP-hard [4]. It has attracted many researchers, who developed or used different methods to solve it; these can be divided into three categories: exact methods [1–3], approximated methods [1–3] and simulation [7]. Exact methods include algorithms such as branch and bound [1, 3], dynamic programming [1, 3] and constraint integer programming [3]. Approximate methods use heuristics and metaheuristics such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Tabu search [3]. The third method, simulation, became popular as a result of technological advancements [2]. In fact, simulation requires high computational capacity, but it makes it possible to evaluate different scenarios without the necessity of physically realizing them. A huge impact on the study of PL was brought by the rise of the fourth industrial revolution (I4.0), which emerged in the last 10 years and is strongly affecting factories, leading them towards a transition to smart factories. The introduction of I4.0 technologies may lead to deep changes in factory environments and can be responsible for very important improvements in factory performance. Many studies rely on the simulation approach to evaluate the advantages related to the introduction of I4.0 technologies in existing production environments [8], since it provides a powerful representation of the factory flows and systems [9] and enables rapid computations. Moreover, for PL problems, simulation has been used as a possible approach to evaluate different solutions. Huynh et al. and Chen et al. developed discrete event plant simulations to evaluate the best layout models for specific factory environments [6, 10], while Septiani et al. optimized a warehouse layout [11]. Sharma et al. [12] used the simulation approach for the optimization of a job shop manufacturing system for gears. All the studies previously described have successfully demonstrated the strength of simulation for companies to find the best layout designs using different parameters. However, such studies do not present an analysis of the improvement brought to companies when considering both the change of PL and the introduction of I4.0 technologies. Thus, this paper aims to address this problem by presenting a simulation study on a real company working in the bus production sector, in which the impact of PL changes and the introduction of I4.0 technologies are assessed. The outline of the paper is as follows: Sect. 2 describes the simulation model built for the case study, while in Sect. 3 the experiments are carried out and the results are analyzed. Finally, the conclusion section highlights the main limitations and possible future developments.

2 The Simulation Model

2.1 Real Plant Description

The study presented in this section is based on a real production plant working in the bus production sector. The current PL will be analyzed and simulations


will be proposed to evaluate the potential benefits deriving from PL changes and the introduction of I4.0 technologies. The company produces 3 products, named in this paper as bus_A, bus_B and bus_C. For each product, the company provided the information related to process execution, such as the sequence of operations and the required times to accomplish each part of the production cycle. It is worth underlining that the operation times represent sensitive data for the company. For this reason, the values related to the company lead times reported in Table 1 were scaled by a constant factor and do not represent the real working times. The current PL of the company is represented in Fig. 1.

Fig. 1. Actual plant layout of the company

The principal working zones are the 12 production and assembling zones (PAZs), in which most of the production and assembling tasks are executed. Each PAZ is dedicated to a single product. In particular, according to the current yearly market demand, 7 PAZs are dedicated to bus_A, 4 PAZs to bus_B and 1 PAZ to bus_C. The other operations needed to complete the bus production are painting, electric components assembling and sandblasting; these operations are executed in dedicated zones, i.e. the painting zone (PZ), the zone related to electric components assembling (EZ) and the sandblasting zone (SZ). The transfer times to the different working zones are also considered as part of the total operational times. The production is organized into one shift per day of 8 h and 240 working days per year. Such a configuration allows the current market demand to be fulfilled; the demand is considered flat during the year, i.e. it does not change.

2.2 Simulation Model Construction

Starting from the real PL configuration, a discrete event simulation (DES) was used to simulate different PL configurations by means of the software Simul8®, to assess the benefits achieved by changing the PL and introducing some I4.0 technologies. Figure 2 shows the simulated model reproducing the layout of the as-is system.


Fig. 2. Actual plant layout in simulation environment

The simulation model requires some features to reproduce the behavior of the real system. These features include:
• The Entry point: the feature that generates the input elements for the system. Three entry points were defined in the simulation model to represent the three types of products (bus_A, bus_B, bus_C). The generation of a new product at the entry points depends on the market demand and, once a product is generated, the simulation system checks whether there are free work centers able to work on the product. Only if there is at least one free work center is the product released by the entry point and the production process started.
• Work center (WC): each PAZ, SZ, EZ and PZ is characterized by several WCs, which are the features used to simulate the jobs realized in those zones. In particular, concerning the PAZs, 12 WCs are used to simulate the jobs for bus_A, while 9 WCs are used for bus_B and bus_C. Concerning the PZ, three WCs were created to simulate the painting tasks; two WCs were created to simulate the tasks of the EZ and 1 WC for the tasks of the SZ.
• The End point: it simulates the exit of the products from the working zones once the entire work cycle has been completed. Moreover, it allows data to be collected on the entire working cycles realized.
To make the simulation model stochastic, the lead times for accomplishing the single tasks were modeled with a statistical distribution. By analyzing the original data provided by the company, the lead times were assumed to follow a symmetric triangular distribution, which needs three values to be defined: the mode, the upper bound value (UBV) and the lower bound value (LBV). In this paper, the mode was taken as the mean value provided by the company for each task, while the UBV and LBV were obtained by varying the mode by ±15%; these values were obtained by studying the real company processes. Thus, the lead times of all tasks are modeled as a symmetric triangular distribution, whose probability density function is represented in Fig. 3.


Fig. 3. Probability density function of the triangular distribution.
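As an illustration, the stochastic task lead times described above could be sampled as follows, assuming a symmetric triangular distribution with the mode equal to the company mean and bounds at ±15%; this is only a sketch and is not part of the Simul8® model.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_task_time(mean_time, spread=0.15, size=None):
    """Symmetric triangular lead time: mode = company mean, bounds = mode * (1 +/- spread)."""
    low, high = mean_time * (1 - spread), mean_time * (1 + spread)
    return rng.triangular(low, mean_time, high, size=size)

# Example: 25 replications of a task whose nominal (mean) duration is 40 hours.
samples = sample_task_time(40.0, size=25)
print(samples.mean(), samples.min(), samples.max())
```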

To validate the simulation model, the correspondence between the real and simulated logics was verified. This step was achieved by simulating the entire manufacturing process for the 3 products and by comparing the simulated lead times with the actual ones. Simulations were run 25 times. Table 1 reports the comparison between the data retrieved by the simulations and the real ones concerning the lead times.

Table 1. Real and simulated lead times
                                       | bus_A | bus_B | bus_C
Company LT (CLT) [weeks]               | 30.45 | 35.55 | 20.60
Average LT simulation (ALTS) [weeks]   | 31.74 | 35.66 | 20.56
Standard deviation (SD)                | 3%    | 2%    | 2%
Minimum LT simulation (MinLTS) [weeks] | 30.34 | 34.78 | 20.30
Maximum LT simulation (MaxLTS) [weeks] | 32.98 | 36.24 | 21.00
Difference between CLT and ALTS        | 4.1%  | 0.3%  | −0.2%

The results demonstrate a good accuracy of simulation models in reproducing the behavior of the real system; thus, the simulation models can be considered validated.
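The validation figures of Table 1 can be reproduced from the simulated lead times with a few lines; the run values below are placeholders, not the paper's raw data, and the SD column is interpreted here as a percentage of the mean, which is an assumption.

```python
import numpy as np

def validation_summary(simulated_lt, company_lt):
    """ALTS, SD (here as a fraction of the mean), min/max and relative gap vs. CLT."""
    lt = np.asarray(simulated_lt, dtype=float)
    alts = lt.mean()
    return {
        "ALTS": alts,
        "SD": lt.std(ddof=1) / alts,
        "MinLTS": lt.min(),
        "MaxLTS": lt.max(),
        "diff_vs_CLT": (alts - company_lt) / company_lt,
    }

# Placeholder lead times (weeks) for one product; real values would come from the 25 Simul8 runs.
runs = [31.2, 32.5, 30.9, 31.8, 32.1]
print(validation_summary(runs, company_lt=30.45))
```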

3 Experiments and Results Analysis

Once the simulation models had been validated, a set of Key Performance Indicators (KPIs) was identified to assess the system performance under variable conditions, related to the change of PL and the introduction of new technologies in the factory environment. The KPIs are listed in Table 2.

Table 2. Description of the KPIs
KPI | Description | Notation
Number of completed jobs | Number of total buses completed in one year | CJ
Work in Process | Number of buses in the working process at the end of the simulated period | WIP
Lead time | Difference between the instant of time in which the bus reaches the end point and the instant of time in which the bus leaves the entry point | LT
Idle time | The amount of time, calculated as a percentage of the simulation time, in which there are no products to process | IT
Working time | The amount of time, calculated as a percentage of the simulation time, in which the system is processing a product | WT
Blocked time | The amount of time, calculated as a percentage of the simulation time, in which a phase is finished but the product is blocked because the next working station is busy | BT
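As an illustration of how the time-based KPIs of Table 2 can be obtained from a simulation run, the sketch below converts the cumulated time a work centre spends in each state into WT, IT and BT; the state log is hypothetical (the placeholder numbers echo the EZ column of Table 5).

```python
def time_share_kpis(state_durations, simulation_time):
    """WT, IT and BT of a work centre as fractions of the simulated time
    (multiply by 100 to obtain the percentages reported in the tables)."""
    return {
        "WT": state_durations.get("working", 0.0) / simulation_time,
        "IT": state_durations.get("idle", 0.0) / simulation_time,
        "BT": state_durations.get("blocked", 0.0) / simulation_time,
    }

# Hypothetical cumulated state durations over a 2000 h simulated period.
print(time_share_kpis({"working": 1560.0, "idle": 440.0, "blocked": 0.0}, 2000.0))
```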

The changes introduced in the simulation models with respect to the current company situation are related to 3 main factors:
i) the number of working shifts, passing from 1 to 2; this will allow the production to increase and a greater market demand to be satisfied (the company aims to create a production system able to produce 30 bus_A, 12 bus_B and 5 bus_C);
ii) the PL and space organization, by removing the EZ and supposing that workers move among the PAZs to complete the assembling of electric components; to this end, 3 WCs were added to simulate the assembling of electric components for bus_C and 2 for bus_A and bus_B in each PAZ;
iii) the system efficiency, which was improved by simulating the adoption of lean organization strategies, the introduction of an Additive Manufacturing (AM) machine and the introduction of technologies of the Industry 4.0 (I4.0) paradigm. The AM machine increases the flexibility of the entire system to respond to sudden changes, while I4.0 technologies make it possible to retrieve and analyze data coming from sensors equipped on machines, to identify possible productivity limits and to optimize system management through integration in the Manufacturing Execution System (MES).
In this paper, it was assumed that the system efficiency was improved in a range between 10% and 40%, according to the number of changes introduced within the system; the simulations reproduce the system efficiency improvement by decreasing the processing time of each phase. Configurations ii) and iii) change the PL as shown in Figs. 4 and 5. Six different simulation scenarios were run considering different combinations of the 3 factors previously described; the plan of experiments is reported in Table 3. Concerning the system efficiency, several studies demonstrated that it can be improved in ranges between 10% and 40% when I4.0 technologies are introduced [13]. In this paper, an "improved" system efficiency is considered as a decrease of the average processing times of each production and assembling phase by the corresponding percentage.


The experiments have been run 25 times and the results reported refer to the average values. The simulated time represents a one-year production period.

Fig. 4. Plant layout without EZ

Fig. 5. Plant layout with AM machine

Table 3. Plan of experiments
             | Number of shifts | Plant layout            | System efficiency
Experiment 1 | 1                | Current                 | Current
Experiment 2 | 2                | Current                 | Current
Experiment 3 | 2                | Without EZ              | Current
Experiment 4 | 2                | Current                 | Improved
Experiment 5 | 2                | Without EZ              | Improved
Experiment 6 | 2                | Without EZ + 1 resource | Improved


It is interesting to start the analysis of the results from Experiment 1, which represents the actual system. Table 4 reports the values of the CJ, WIP and LT (identified as ALTS) for this experiment. It is evident that the current PL with 12 PAZs does not allow the desired demand to be reached: in one year, the current system is able to realise 11 products, while 13 are still in process. Moreover, this configuration is also characterised by a bottleneck, which increases the ALTS.

Table 4. CJ, WIP and ALTS KPIs for the actual situation
KPI  | bus_A | bus_B | bus_C | Total
CJ   | 7     | 2     | 2     | 11
WIP  | 7     | 4     | 2     | 13
CLT  | 30.45 | 35.55 | 20.6  |
ALTS | 32.76 | 38.61 | 21.47 |
SD   | 9%    | 4%    | 7%    |

The bottleneck of this configuration is the EZ, which has a WT of 78% and an IT of 22%. Specifically, the WC that simulates the transfer of the bus to the EZ has a BT of 48% because, often, the bus is ready to be transferred but the EZ is busy. The SZ and PZ have the opposite problem of being idle (low WT). The WT, IT and BT values for the EZ, SZ, PZ and the related transfer WCs (indicated with "tr") are reported in Table 5.

Table 5. WT, IT and BT KPIs for the actual situation
KPI | EZ  | EZ_tr | SZ  | SZ_tr | PZ  | PZ_tr
WT  | 78% | 8%    | 8%  | 1%    | 35% | 1%
IT  | 22% | 44%   | 92% | 99%   | 65% | 98%
BT  | 0%  | 48%   | 0%  | 0%    | 0%  | 1%

Experiments 2 and 3 still represent a situation in which the new technologies have not been introduced in the production system. For both, the working shift is doubled, and in Experiment 3 the PL is changed according to Fig. 4. The results of the simulations (Table 6) show that, even if the CJ is more than doubled with respect to Experiment 1, the WIP is still high, thus the company is not able to fulfill the market demand. The average lead time decreases when considering 2 work shifts for both experiments, but it is higher for Experiment 3, highlighting that the change of PL does not create advantages for the company. This can be explained by noting that the change of PL only makes the WT for transferring products from PAZ to EZ equal to 0, but the transfer time was very short (4 h) compared to the total lead time. Moreover, by eliminating the EZ, the electric assembling operations are performed by new WCs in the PAZs, which seems not sufficient to equal the results of Experiment 2.


Table 6. CJ, WIP and ALTS KPIs for experiments 2 and 3
             | KPI  | bus_A | bus_B | bus_C | Total
Experiment 2 | CJ   | 14    | 8     | 5     | 27
             | WIP  | 7     | 4     | 0     | 11
             | CLT  | 30.45 | 35.55 | 20.60 |
             | ALTS | 16.97 | 19.56 | 12.48 |
             | SD   | 6%    | 5%    | 13%   |
Experiment 3 | CJ   | 14    | 8     | 5     | 27
             | WIP  | 7     | 4     | 0     | 11
             | CLT  | 30.45 | 35.55 | 20.60 |
             | ALTS | 17.16 | 20.04 | 12.57 |
             | SD   | 7%    | 3%    | 14%   |

Table 7 shows the values of the KPIs useful to identify the bottlenecks. For Experiment 2, the EZ still remains the bottleneck, since the products remain blocked for 48% of the time waiting to be transferred. In Experiment 3, where the EZ is eliminated, the WT for assembling electric components does not change (76%), while the BT increases, indicating that the electric assembling operations still remain a bottleneck and more resources would be necessary to remove it.

Table 7. WT, IT and BT KPIs for experiments 2 and 3.
             | KPI | EZ  | EZ_tr | SZ  | SZ_tr | PZ  | PZ_tr
Experiment 2 | WT  | 76% | 7%    | 8%  | 1%    | 35% | 1%
             | IT  | 24% | 48%   | 92% | 99%   | 65% | 99%
             | BT  | 0%  | 45%   | 0%  | 0%    | 0%  | 0%
Experiment 3 | WT  | 76% | 0%    | 8%  | 1%    | 35% | 1%
             | IT  | 24% | 48%   | 92% | 99%   | 65% | 99%
             | BT  | 0%  | 52%   | 0%  | 0%    | 0%  | 0%

The next experiments all include the introduction of the I4.0 technologies described above. It was assumed that their introduction should result in improving the LTs in a range between 10% and 40%. For this reason, the results of the experiments from 4 to 6 will be analyzed referring to different values of system efficiency. Figure 6 reports the analysis of the CJ, WIP and ALTS for the experiments as a function of the percentage of the improvement due to the introduction of I4.0 technologies.


Fig. 6. CJ, WIP and ALTS KPIs for experiments 4, 5 and 6.

It is evident that, within the same experiment, increasing the improvement percentage increases the CJ, while the WIP and the ALTS decrease. It is interesting to note that bus_A is the most affected by the improvement of the production system, with the CJ increasing by at least 10 products when passing from 10% to 40% of improvement. Considering the improved configurations, the desired production (30 products) is reached within the simulation time for Experiments 5 and 6. The product bus_C is not affected by the improvement of the production system because the desired production is always reached. The desired production is reached for all types of products (47) in experiments 5 and 6 only when considering a system improvement of 40%. Experiment 6 demonstrated the effectiveness of introducing a new electric worker, since the number of CJ is always greater than in experiments 5 and 4, which have very similar results. Concerning the production lead times, the three experiments demonstrated that the ALTS was reduced by more than 30% when passing from a system efficiency improvement of 10% to one of 40% with respect to the actual situation. The highest reduction of the ALTS was for bus_A in experiment 4, where a saving of 40.2% was verified. The bus_C is the least affected by the system improvement, always achieving an ALTS reduction between 35.2% and 35.7% in the three experiments. Concerning the bottlenecks, Table 8 reports the values of the WT, IT and BT for experiments 4, 5 and 6,


considering only the EZ, which was the only bottleneck identified in experiments 1, 2 and 3.

Table 8. WT, IT and BT KPIs for experiments 4, 5 and 6.
             | KPI | EZ_10% | EZ_tr_10% | EZ_20% | EZ_tr_20% | EZ_30% | EZ_tr_30% | EZ_40% | EZ_tr_40%
Experiment 4 | WT  | 79%    | 8%        | 78%    | 9%        | 78%    | 10%       | 72%    | 10%
             | IT  | 21%    | 44%       | 22%    | 43%       | 22%    | 52%       | 28%    | 60%
             | BT  | 0%     | 48%       | 0%     | 48%       | 0%     | 38%       | 0%     | 30%
Experiment 5 | WT  | 80%    | 0%        | 78%    | 0%        | 79%    | 0%        | 73%    | 0%
             | IT  | 20%    | 49%       | 22%    | 50%       | 21%    | 53%       | 27%    | 63%
             | BT  | 0%     | 51%       | 0%     | 50%       | 0%     | 47%       | 0%     | 37%
Experiment 6 | WT  | 57%    | 0%        | 56%    | 0%        | 55%    | 0%        | 48%    | 0%
             | IT  | 43%    | 88%       | 44%    | 90%       | 45%    | 94%       | 52%    | 96%
             | BT  | 0%     | 12%       | 0%     | 10%       | 0%     | 6%        | 0%     | 4%

The bottleneck is related to the limited number of workstations for electric operations. It is evident that the WT and BT decrease as the system efficiency increases. These results are both related to the lead-time reduction previously described, as they determine faster processing and reduce the waiting time for transporting products among zones. The best solution is represented by experiment 6, where the EZ is removed and a new worker for electric assembly operations is introduced. In this case, in fact, it is possible to work simultaneously on 3 products and not on 2 as in experiments 4 and 5. This led to a BT of only 4% when a system efficiency improved by 40% is considered.

4 Conclusions

This paper proposed a simulation model to study the impact of PL changes and the introduction of I4.0 technologies in a production environment. The simulations were built relying on the case of a real company working in the bus production and assembling sector. The results demonstrated the effectiveness of the simulation tool in reproducing the behavior of the real working environment, making it possible to assess different experiments and different configurations of PL and I4.0 technologies implementation. In the case study proposed in this paper, the simulations made it possible to find experiments in which the desired production was reached and the problems related to the bottleneck workstation were partially eliminated. The main limitations of this work consist in having assessed only very specific use cases, without fully exploring possible case studies regarding the complete reorganization of the PL of the company or solutions for reducing the idle times of the PZ and SZ. These aspects will be the subject of further studies in the future.


References
1. Drira, A., Pierreval, H., Hajri-Gabouj, S.: Facility layout problems: a survey. Annu. Rev. Control. 31, 255–267 (2007)
2. Al-Zubaidi, S., Fantoni, G., Failli, F.: Analysis of drivers for solving facility layout problems: a literature review. J. Ind. Inf. Integr. 21, 100187 (2021)
3. Hosseini-Nasab, H., Fereidouni, S., Fatemi Ghomi, S.M.T., Fakhrzad, M.B.: Classification of facility layout problems: a review study. Int. J. Adv. Manuf. Technol. 94(1–4), 957–977 (2017). https://doi.org/10.1007/s00170-017-0895-8
4. Zhou, J., Wang, J., Zhang, Y., Guo, Q., Li, H., Lu, Y.: Particle swarm optimization algorithm with variety inertia weights to solve unequal area facility layout problem. In: 2020 Chinese Control and Decision Conference (CCDC), pp. 4240–4245 (2020)
5. Zhang, Z., Wang, X., Wang, X., Cui, F., Cheng, H.: A simulation-based approach for plant layout design and production planning. J. Ambient Intell. Humaniz. Comput. 10(3), 1217–1230 (2018). https://doi.org/10.1007/s12652-018-0687-5
6. Huynh, B., Akhtar, H., Li, W.: Discrete event simulation for manufacturing performance management and optimization: a case study for model factory. In: 2020 9th International Conference on Industrial Technology and Management (ICITM), pp. 16–20 (2020)
7. Ruiz Zúñiga, E., Flores García, E., Urenda Moris, M., Fathi, M., Syberfeldt, A.: Holistic simulation-based optimisation methodology for facility layout design with consideration to production and logistics constraints. Proc. Inst. Mech. Eng. Part B J. Eng. Manuf. 235, 09544054211017310 (2021)
8. Laudante, E., Greco, A., Caterino, M., Fera, M.: Human-robot interaction for improving fuselage assembly tasks: a case study. Appl. Sci. 10(17), 5757 (2020)
9. Armenzoni, M., Bottani, E., Rinaldi, M., Gallo, S., Montanari, R.: Analysis, simulation and optimization of the milking process in a cowshed for the production of parmigiano reggiano. Int. J. Food Eng. 12, 851–865 (2016)
10. Chen, T., Chen, J., Huang, C., Chang, P.: Solving the layout design problem by simulation-optimization approach: a case study on a sortation conveyor system. Simul. Model. Pract. Theory 106, 102192 (2021)
11. Septiani, W., Divia, G., Adisuwiryo, S.: Warehouse layout designing of cable manufacturing company using dedicated storage and simulation promodel. IOP Conf. Ser. Mater. Sci. Eng. 847, 012054 (2020)
12. Sharma, J., Jayant, A., Tyagi, M.: Simulation based design of production and multi echelon supply chain network for job shop manufacturing environment: a case study. J. Phys: Conf. Ser. 1240, 012066 (2019)
13. Ghafoorpoor Yazdi, P., Azizi, A., Hashemipour, M.: An empirical investigation of the relationship between overall equipment efficiency (OEE) and manufacturing sustainability in Industry 4.0 with time study approach. Sustainability 10(9), 3031 (2018)

Optimizing Racing Wheelchair Design Through Coupled Biomechanical-Mechanical Simulation

Adrien Loiseau1, Thibault Marsan2, Pablo Navarro1, Bruno Watier2, and Yann Landon1(B)

1 Institut Clément Ader (ICA), Université de Toulouse, CNRS, IMT Mines Albi, INSA, ISAE-SUPAERO, UPS, 3 rue Caroline Aigle, 31400 Toulouse, France
[email protected]
2 LAAS-CNRS, Université de Toulouse, CNRS, UPS, Toulouse, France

Abstract. The purpose of this study is to optimize the design of racing wheelchairs to improve the performance of the athletes. The design of the manual wheelchair should allow athletes to express their full potential. Two models have therefore been created. The first one computes the optimal position of the shoulder of the athlete relative to the wheelchair in order to obtain the maximal wheelchair speed for long distance races. The second one was designed to represent the 100 m race and to optimize the pelvis position of the athlete on the wheelchair, to reduce the time needed to reach 100 m. Our model estimated the maximal speed reached by the wheelchair at 32 km/h and the optimal time at 14.35 s. To obtain these performances, the athlete would be in a lying position, with the vertical position of the pelvis centre close to the vertical position of the shoulder. The second program also returned the optimal speed curve of the wheelchair during the 100 m race. Coaches could then use the optimal acceleration curve found in this study to match the acceleration of the wheelchair of their athlete.

Keywords: Wheelchair design · Racing · Athlete positioning · Para athletics

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 593–604, 2023. https://doi.org/10.1007/978-3-031-15928-2_52

1 Introduction

The manual wheelchair is a key element in the performance of wheelchair racing athletes. For this reason, the design of such wheelchairs is a particular issue for high performance athletes. Many advances have been made since the first racing wheelchair [1], but improvements remain to be made. The purpose of this study was to optimize the design of racing wheelchairs to improve the performance of the athlete. As a design requirement, the optimal position of the athlete in his wheelchair must be determined for each sport and each athlete, as well as the global mass distribution [2]. Indeed, in wheelchair racing the athletes present very different pathologies, which necessitate customized settings. The work presented in this paper concerns athletics, especially wheelchair racing. The manual wheelchair designed for racing is specific to this sport. At first, it was just a modified wheelchair [3]. One of the major changes in the manual racing wheelchair was the evolution from a four-wheeled to a three-wheeled wheelchair [4], reducing the total weight of the wheelchair. Then, the materials


evolved [5], reducing the weight once more to make the wheelchair easier to propel and maneuver. It would then be interesting to study the optimal position of the athlete on the wheelchair, which would allow him to express his full potential. It has already been demonstrated that the position on the wheelchair influences both the pushing pattern and the kinematics of the subject [6]. Different methods have been used in the literature. Experimental data were collected using inertia rollers, inertial measurement units, or motion capture [7]. Propulsion models have also been developed [8]. It has been stated that, since most of the motion occurs in the sagittal plane, a 2D model of the wheelchair would be sufficient to study the wheelchair's propulsion. Most of the models developed in the literature were used to compute optimal parameters of the wheelchair. However, none of the models in the literature has been used to obtain the optimal position of the athlete on a racing wheelchair. Para-athletes compete in different race lengths, from 100 m to marathons, and the athletes' kinematics change depending on the race. For instance, T53 athletes, a category of para-athletes with abdominal deficiencies who have little to no control over their trunk and their legs, use their shoulders to propel the wheelchair during the first acceleration but not once they have reached their maximal speed [9]. However, T54 athletes, who also have low control over their legs but can control their trunk, use their shoulders both during acceleration and while maintaining their maximal speed. During the 100 m race, athletes keep accelerating the wheelchair, since they reach their maximal speed at the end of the race [9]. It would then be interesting to develop two models, one with a fixed shoulder, useful for long-distance races, and another one with a moving shoulder, for short-distance races. In this context, the desired optimal position of the athlete in his wheelchair aims at reducing the aerodynamic drag, maximizing the mechanical work exerted on the pushing rim at each pushing cycle, and minimizing the loss of energy, especially for long races. Thus, this study focused on obtaining the optimal position of the athlete on the wheelchair for two scenarios, using dynamic models. The first model aimed at maximizing the wheelchair speed in the steady-state regime, for long-distance races. The second one focused on minimizing the time to reach 100 m.

2 Material and Methods

Two models have been designed, with different objectives and hypotheses. The main difference between the two models was the motion of the shoulder, which was fixed for the long-distance race model but not for the short-distance model. This was due to the fact that athletes showed a larger torso amplitude, and therefore a larger shoulder motion, during the first accelerations than when they reached their cruising speed [9].

2.1 Long Distance Race Model

This model aimed at finding the best position of the athlete in his wheelchair to reach a maximal steady-state speed. For that objective, the energy provided by the athlete to the wheelchair was computed, depending on the position of the athlete on the wheelchair and its velocity. The optimal position was defined as the one for which this energy was maximized.


This long-distance race model (Fig. 1 on the left) was implemented with the following assumptions:

– The athlete's trunk was fixed.
– The shoulder and elbow joints were assimilated as pivots.
– The hand was assimilated as a point on the pushing rim.
– The model only considered the rear wheels and the athlete.
– The speed during the pushing cycle was considered constant.

The model varied the position of the athlete's shoulder relative to the centre of the wheel. For each position, the total energy of the pushing cycle needed for a specific speed of the wheelchair was calculated using an incremental method. A pushing cycle was defined as the elementary cycle of action of the athlete on the pushing rim, from the first contact with the pushing rim until the athlete removed his hands. The rotation was split into 360 points, each point corresponding to one degree (angle α). Between two points on the pushing rim, the point representing the wrist made a circular rotation of one degree. The rolling resistance was modelled using the model developed by Masson [10] and the aerodynamic drag was adapted from Samozino's model [11]. For each point on the pushing rim, the position of each joint centre and the angles of the joints were computed. Then, the joint speed was also computed using the information of a second point on the pushing rim close to the first one. Since we did not have access to data about the forces and torques generated by the athletes at the shoulder during propulsion, the values were based on the literature. It was then possible to obtain the shoulder [12] and elbow torques [13, 14] based on the joint speeds. The torque value used in our study was the maximal torque exerted. The torques developed at the joints by the muscles created a force on the pushing rim, and the tangential part of this pushing force was considered. The scalar product of the tangential force and the motion of one degree of the hand on the pushing rim corresponded to the energy supplied by the athlete to the wheelchair between two points. The energy computed for all the points of the push cycle was summed up to find the total energy. If the total energy (W) was greater than the energy required to counteract the resistive forces (En), the algorithm increased the wheelchair speed and restarted the calculation for the same position, but with the new wheelchair speed. Otherwise, the maximum speed was reached.

2.2 Sprint Race Model

The long-distance model used the maximal speed to obtain the best position. However, for a 100 m race, the maximal speed was reached only in the last few meters. Indeed, since the athletes keep accelerating all along the 100 m, the maximal speed reachable by the athlete in the wheelchair might not be the best criterion for performance. Consequently, another model was developed to determine the best position in the wheelchair in order to minimize the time needed to reach 100 m.


When starting, athletes tended to be mobile with their trunk, whereas their trunk was fixed when they reached their cruise speed [9]. The trunk motion was therefore considered in this second model, since the angle between the trunk and the wheelchair influences the shoulder and elbow angles. This sprint race model (Fig. 1 on the right) was implemented with the following assumptions:

– The shoulder and elbow joints were assimilated as pivots.
– The speed of the wheelchair was assumed to be constant during one pushing cycle.
– The athlete considered is in the category T53, which means that he does not have control over his abdominal muscles to maintain his posture, and that his back is bound to the wheelchair.

The same methodology as in the first model, in terms of computation of the tangential force, was applied to this second one, with the difference that the shoulder position was not fixed. Also, the parameter computed to stop the iterating process was the distance reached by the athlete. The model stopped when the distance travelled (Length) reached 100 m. The time needed to reach these 100 m was then computed.

Fig. 1. Workflow of the long-distance race model (left) and sprint race model (right)
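The two workflows of Fig. 1 boil down to an energy balance over one pushing cycle, wrapped in an outer loop that either raises the cruising speed (long-distance model) or accumulates distance up to 100 m (sprint model). The following Python sketch is purely illustrative, since the paper does not report an implementation: the torque-to-force, rolling-resistance [10] and drag [11] computations are abstracted into placeholder callables, and the mass and cycle-period values are assumptions, not data from the study.

```python
import numpy as np

R_RIM = 0.175  # pushing rim radius [m], from the 350 mm diameter given in Sect. 2.3

def cycle_work(v, position, tangential_force):
    """Work supplied over one pushing cycle, summed over 360 one-degree steps.
    tangential_force(alpha, v, position) is a placeholder returning the tangential
    force on the rim [N], or 0 when the hand cannot grip at angle alpha."""
    ds = R_RIM * np.deg2rad(1.0)  # hand displacement along the rim for one degree
    return sum(tangential_force(np.deg2rad(a), v, position) * ds for a in range(360))

def max_cruise_speed(shoulder_xy, tangential_force, resistive_energy, v0=1.0, dv=0.01):
    """Long-distance model: raise the speed until the supplied work W no longer
    exceeds the resistive energy En for one cycle."""
    v = v0
    while cycle_work(v, shoulder_xy, tangential_force) > resistive_energy(v):
        v += dv
    return v

def sprint_time(pelvis_xy, tangential_force, resistive_energy,
                total_mass=90.0, cycle_period=0.5):
    """Sprint model: the speed is held constant within a cycle and updated from the
    net energy balance; the loop stops when the travelled distance reaches 100 m.
    total_mass and cycle_period are assumed values (the placeholder must supply
    positive work at v = 0 for the loop to start)."""
    v, dist, t = 0.0, 0.0, 0.0
    while dist < 100.0:
        net = cycle_work(v, pelvis_xy, tangential_force) - resistive_energy(v)
        v = np.sqrt(max(v**2 + 2.0 * net / total_mass, 0.0))  # kinetic-energy update
        dist += v * cycle_period
        t += cycle_period
    return t
```

Scanning shoulder (or pelvis) positions over the 20 mm and then 2 mm grids described in Sect. 3 then amounts to calling these functions for each candidate position and keeping the best one.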


2.3 Dataset Considered to Test the Models To test the different models, a dataset including the entry parameters of the models was defined. First, the anthropometric data such as torso, arm, and forearm length were needed. As an example, to try out the models, test-data were taken from one of the authors. The arm length was set at 250 mm, the forearm length at 300 mm and the torso length at 550 mm. The parameters of the wheelchair needed were only the rear wheel diameter and the pushing rim diameter. The rear wheel diameter was set at 700 mm and the pushing rim diameter at 350 mm, which were based on the observation of an existing racing wheelchair. Then, constraints had to be implemented. First, the degrees of freedom of the joints were set between a minimal and a maximal value: the torso angle was set between βmin = 20◦ and βmax = 45◦ , the shoulder joint angle between δmin = −30◦ and δmax = 170◦ , and the elbow joint angle between θmin = 20◦ and θmax = 175◦ . These values were also used as an example to test the model. These angles (Fig. 2) were used to determine when the hand is in contact with the pushing rim.

Fig. 2. Definition of the angles used in the model. α is the angle between the hand and the shoulder, β is the torso angle, δ the shoulder joint angle, and θ the elbow joint angle. Xshoulder , Yshoulder , Xpelvis , and Ypelvis are respectively the horizontal and vertical position of the shoulder and the pelvis, with respect to the wheel centre.

Then, for each position of the hand on the pushing rim, the position of the other joints was computed as well as their instantaneous speed. The torques generated by the joints were then taken from the literature based on their angle and speed [12–14]. The torque considered was assumed to be the maximal torque obtained in those studies. This assumption could be discussed as the subjects of these studies were able-bodied whereas our study concerned para-athletes.


3 Results

3.1 Results of the Long-Distance Race Model

Given the dataset considered, the long-distance race model was first tested. The first iteration reached a maximal speed of 8.33 m/s with a position of the shoulder relative to the centre of the wheel of Xshoulder = 100 mm and Yshoulder = 350 mm, using a step of Δ = 20 mm in both axes. Then, with a second optimisation using a step of Δ = 2 mm, the optimal position of the shoulder relative to the wheel centre was Xshoulder = 80 mm and Yshoulder = 332 mm. With this position, the maximal velocity reached by the athlete on his wheelchair was 8.88 m/s. This first result seemed to be in accordance with the actual position used by the athletes. These iterations allowed us to map the optimal position of the shoulder of the athlete as a function of the total energy generated by the athlete on his wheelchair (Fig. 3). It was observed that, for the athlete with the anthropometry and the wheelchair geometry used in this study, the optimal position of his shoulder would be between Xshoulder = 0 to 90 mm and Yshoulder = 300 to 400 mm.

Fig. 3. Map of the total energy supplied by the athlete to the wheelchair as a function of the horizontal and vertical position of the shoulder.

The evolution of the total energy as a function of the α angle is presented on Fig. 4. It can be observed that there was no evolution of the total energy between α = 0◦ to 70◦ and between α = 260◦ to 360◦ . This was because the athlete was not able to hold on the pushing rim for these angles. This incapacity to grab the pushing rim was linked to the anthropometry of the athlete and his position on the wheelchair.


Fig. 4. Evolution of the simulated total energy as a function of the angle α. The hatched area represents the angles for which the athlete was not able to grab the pushing rim.

3.2 Results of the Sprint Race Model Using the same dataset, the sprint race model was then tested. The first iteration of the second model used the following position of the pelvis, with respect to the wheel centre: Xpelvis = 100 mm and Ypelvis = 50 mm. The time needed to reach 100 m was then of t100m = 21.21 s and the maximal wheelchair velocity reached was vmax = 5.8 m/s. After the optimisation of the position of the pelvis on the wheelchair, the time returned was 14.35 s to complete the race. The maximal velocity of the wheelchair reached was then 9.08 m/s. The optimal position of the hip was Xpelvis = 386 mm and Ypelvis = 386 mm. The results of the first model suggested that the shoulder should be between Yshoulder = 300 and 400 mm to reach an optimal position. The second model returned about the same vertical position of the pelvis. This means that the athlete should have different settings depending on the race he’s participating in. The wheelchair speed as a function of the time was computed (Fig. 5) and it can be observed that it followed an exponential curve. Also, the speed always increased, and it can be suspected that there would be a plateau when the maximal speed was reached but this maximal speed was not reached during the 100 m race.

4 Discussion

The maximal wheelchair speed obtained with the long-distance race model was vmax = 8.88 m/s. This value is close to the one found in the study by Sauret et al. [9], where experimental data were captured using IMUs. Indeed, they found a maximal speed of 8.86 m/s for the first athlete and 8.09 m/s for the second athlete. However, the position of the shoulder used by these athletes was more towards the front of the chair, rather than close to the wheel centre as suggested by our optimization.


Concerning sprint races, in the literature [9], two athletes performed a 100 m race and the best of them had a time of 15.38 s and reached his maximal speed of 8.86 m/s at the end of the race. Since it was a short race, the speed of the athlete kept increasing until the 100 m mark. However, for longer races, a plateau could be seen. This phenomenon was also observed in our results with the sprint race model (Fig. 5), where the speed kept increasing during the first hundred meters. Therefore, both models can be useful to coaches and athletes, as they may give the optimal positions of the pelvis and the shoulder to obtain the best acceleration pattern and/or the highest maximal speed during both short and long-distance races. Nevertheless, the comfort of the athlete should also be considered when selecting the optimal position on the wheelchair. Indeed, if the athlete were not able to maintain this optimal position due to discomfort, then this position would not be optimal for the performance of the athlete. The dataset used in this study was not taken from a Paralympic athlete, which could explain the differences found in this study. Indeed, the parameters used in the models were based on measurements made on one of the authors. Differences could exist with respect to the actual measurements of a Paralympic athlete, and the model could gain in accuracy with a dataset coming from an actual athlete. One of the hypotheses made for the first model was that the shoulder was not moving during the propulsion when the athlete reached his maximal speed. This was the case for the T53 athletes, since they do not have control over their abdominal muscles, but T54 athletes do have control over those muscles. Therefore, they had a non-negligible motion of their shoulders, with an amplitude of almost 40° [9]. The optimal position of the shoulder found with the first model could also be used for athletes with control over their abdominal muscles as a mean position of the shoulder during propulsion, and these athletes would be able to generate even more mechanical work during the cycle. The speed of the wheelchair was deemed constant throughout the pushing cycle; however, as can be observed in Fig. 5, the hand was not in contact with the pushing rim during the whole cycle. Neglecting this allowed us to compute the optimal acceleration curve of the wheelchair. For coaches, this might be seen as the optimal mean acceleration curve that the athletes should aim to achieve. As can be observed in Fig. 5, the sprint race model over-estimated the speed of actual athletes. Those differences could be due to the fact that the athletes may not have the optimal position in their wheelchairs, or the wheelchair might be poorly designed. It might also be due to the different hypotheses made in this study. Indeed, the torques computed in this study were based on the literature, and they were computed based on the angles and rotational speeds of the joints. The torque applied was then the maximal torque the joint could supply. Even if the athletes were the best in their field, they were not able to provide maximal torque during the whole cycle. Also, only the in-plane component of the force applied by the hand on the pushing rim was considered. However, the force applied by the hand is three-dimensional [15]. Therefore, the out-of-plane part of the force was neglected; this part of the force may be detrimental to the propulsion, so neglecting it may result in an over-estimation of the wheelchair speed.
This could be one of the improvements to be made in the future. Considering the three components of the force on the pushing


rim in the dynamics should result in a reduction of the wheelchair speed, and the simulated data would probably match the experimental ones better.

Fig. 5. Superposition of our simulated data (black) with the sprint race model and the experimental data of the literature [9] (with authorization) for different distances.

The model would also be more realistic if the wheelchair speed were not assumed constant during the pushing cycle. Indeed, considering the freewheel phase of the pushing cycle would lead to a more realistic propulsion phase. Also, going from a 2D model to a 3D model could help to consider more than the tangential component of the force of the hand on the pushing rim [15]. Another possible evolution of the models would be to consider the shoulder and elbow as actual joints with multiple degrees of freedom [16], which would result in a more complex, but more realistic, model of the propulsion. Finally, after the optimization proposed here, a last model was developed to determine the optimal position of the centre of mass of the athlete and wheelchair set in order to avoid the wheelie effect during the acceleration phase [17].

5 Conclusion

The models developed in this study allowed the computation of the optimal position of both the shoulder and the pelvis of the athlete to generate the maximal mechanical work on the pushing rim, leading to a maximal wheelchair speed. This could help athletes, professionals as well as beginners, to set up and/or optimize their wheelchair and to be seated at the optimal position that would allow the maximal speed. The second model might also be helpful for coaches, who could set the optimal acceleration curve as an objective for their athletes. To improve the models, the 3D force applied by the hand on the pushing rim might be considered. Also, considering the freewheel phase of the pushing cycle could provide information about a more realistic pushing cycle.


Acknowledgments. This work has benefited from funds managed by the National Research Agency (ANR) under the "Future Investment 3" plan, within the framework of the "Priority Sport Research Program of Very High Performance", bearing the reference ANR-19-STHP-0005.

Annex

The formulas used to compute the kinematic parameters of the model are presented in this annex. They allowed us to find the shoulder and elbow angles. The first step to calculate the kinematic parameters of the long-distance model was based on the current position of the shoulder centre (Xshoulder and Yshoulder, respectively the horizontal and vertical position):

inc = arctan(Yshoulder / Xshoulder);   D = √(Xshoulder² + Yshoulder²)

For the sprint model, the first step to compute the kinematic parameters was based on the dimension of the torso (Ltorso), the horizontal and vertical position of the pelvis (respectively Xpelvis and Ypelvis), and β, the angle between the torso and a horizontal line (see Fig. 2):

Xshoulder = Ltorso·cos(β) − Xpelvis;   Yshoulder = Ltorso·sin(β) + Ypelvis
inc = arctan(Yshoulder / Xshoulder);   D = √(Xshoulder² + Yshoulder²)

Then, the next step was the same for both models. The parameters used were α, the angle between the hand and the shoulder (Fig. 2), and Rmc, the pushing rim radius:

ν = α + π/2 − inc
L = √(Rmc² + D² − 2·D·Rmc·cos(ν))
θ = −arccos((Larm² + Lforearm² − L²) / (2·Larm·Lforearm))
ψ = −arctan(Rmc·sin(ν) / (D − Rmc·cos(ν)))
μ = −arccos((L² + Larm² − Lforearm²) / (2·Larm·L)) + ψ
ϕ = μ + θ
δ = inc + μ − β
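For completeness, these relations translate directly into a few lines of code. The following Python function is only an illustrative transcription (the paper reports no implementation); the default lengths reproduce the test dataset of Sect. 2.3, and Rmc is taken as the pushing rim radius.

```python
import numpy as np

def joint_angles(alpha, x_pelvis, y_pelvis, beta,
                 l_torso=0.55, l_arm=0.25, l_forearm=0.30, r_mc=0.175):
    """Evaluate the annex formulas (sprint-model variant) for one hand angle alpha [rad].
    For the long-distance model, x_sh and y_sh would instead be given directly.
    Returns the elbow angle theta and the shoulder angle delta defined in Fig. 2."""
    x_sh = l_torso * np.cos(beta) - x_pelvis
    y_sh = l_torso * np.sin(beta) + y_pelvis
    inc = np.arctan(y_sh / x_sh)
    d = np.hypot(x_sh, y_sh)
    nu = alpha + np.pi / 2.0 - inc
    l_ = np.sqrt(r_mc**2 + d**2 - 2.0 * d * r_mc * np.cos(nu))
    # arccos arguments are clipped for numerical safety at the workspace boundary
    theta = -np.arccos(np.clip((l_arm**2 + l_forearm**2 - l_**2)
                               / (2.0 * l_arm * l_forearm), -1.0, 1.0))
    psi = -np.arctan(r_mc * np.sin(nu) / (d - r_mc * np.cos(nu)))
    mu = -np.arccos(np.clip((l_**2 + l_arm**2 - l_forearm**2)
                            / (2.0 * l_arm * l_), -1.0, 1.0)) + psi
    delta = inc + mu - beta
    return theta, delta
```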


References 1. Bundon, A., Mason, B.S., Goosey-Tolfrey, V.L.: Expert users’ perceptions of racing wheelchair design and setup: the knowns, unknowns, and next steps. Adapt. Phys. Act. Q. 34(2), 141–61 (2017). https://journals.humankinetics.com/view/journals/apaq/34/2/articlep141.xml 2. Mason, B.S., van der Woude, L.H.V., Goosey-Tolfrey, V.L.: The ergonomics of wheelchair configuration for optimal performance in the wheelchair court sports. Sport. Med. 43(1), 23– 38 (2013). https://www.scopus.com/inward/record.uri?eid=2-s2.0-84873125920&doi=10. 1007%2Fs40279-012-0005-x&partnerID=40&md5=f984184907b46f0696f61c0c3cfad544 3. MacLeish, M.S., Cooper, R.A., Harralson, J., Ster, J.F.: Design of a composite monocoque frame racing wheelchair. J. Rehabil. Res. Dev. 30(2), 233–249 (1993) 4. Cooper, R.A.: Wheelchair racing sports science: a review. J. Rehabil. Res. Dev. 27(3), 295–312 (1990) 5. Cooper, R.A., et al.: Engineering and technology in wheelchair sport. Phys. Med. Rehabil. Clin. N. Am. 29(2), 347–369 (2018) 6. Mâsse, L.C., Lamontagne, M., O’Riain, M.D.: Biomechanical analysis of wheelchair propulsion for various seating positions. J. Rehabil. Res. Dev. 29(3), 12–28 (1992). https://doi.org/ 10.1682/jrrd.1992.07.0012 7. Hybois, S.: Approche numérique pour l’optimisation personnalisée des réglages d’un fauteuil roulant manuel. Arts et Metiers Paristech - Campus de Paris, Institut de Biomécanique Humaine Georges Charpak (2019) 8. Morrow, D.A., Guo, L.Y., Zhao, K.D., Su, F.C., An, K.N.: A 2-D model of wheelchair propulsion. Disabil. Rehabil. 25(4–5), 192–196 (2003) 9. Sauret, C., Loisel, J., Poulet, Y., Bascou, J.: Performance analysis of wheelchair racing using IMU sensors. In: 19ème congrès ACAPS, Montpellier, France, 27–29 October 2021 (2021) 10. Masson, G.: Amélioration des performances d’athlètes en fauteuil roulant d’athlétisme à partir d’une configuration optimale (2015) 11. Samozino, P., et al.: A simple method for measuring power, force, velocity properties, and mechanical effectiveness in sprint running. Scand. J. Med. Sci. Sport. 26(6), 648–658 (2016)


12. Mayer, F., Horstmann, T., Rocker, K., Heitkamp, H.C., Dickhuth, H.H.: Normal values of isokinetic maximum strength, the strength/velocity curve, and the angle at peak torque of an degrees of freedom in the shoulder. Int J Sports Med. 15(Suppl. 1), 19–25 (1994) 13. Provins, K.A., Salter, N.: Maximum torque exerted about the elbow joint. J. Appl. Physiol. 7(4), 393–398 (1955) 14. Colson, S., Pousson, M., Martin, A., Van Hoecke, J.: Isokinetic elbow flexion and coactivation following eccentric training. J. Electromyogr. Kinesiol. 9(1), 13–20 (1999) 15. Chénier, F., Pelland-Leblanc, J.-P., Parrinello, A., Marquis, E., Rancourt, D.: A high sample rate, wireless instrumented wheel for measuring 3D pushrim kinetics of a racing wheelchair. Med. Eng. Phys. 87, 30–37 (2021). https://www.scopus.com/inward/record.uri?eid=2-s2. 0-85096702517&doi=10.1016%2Fj.medengphy.2020.11.008&partnerID=40&md5=f9f005 3e65b1fad0b7a61994a2cdf436 16. Bourgain, M., Hybois, S., Thoreux, P., Rouillon, O., Rouch, P., Sauret, C.: Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing. J. Biomech. 75, 154–158 (2018). https://doi.org/10.1016/j.jbiomech.2018.04.025 17. Kirby, R.L., Kirby, R.L., Smith, C., Seaman, R., Macleod, D.A., Parker, K.: The manual wheelchair wheelie: A review of our current understanding of an important motor skill. Disabil. Rehabil. Assist. Technol. 1(1–2), 119–127 (2006). https://doi.org/10.1080/096382 80500167605

Preliminary Validation of a Virtual Environment for Simulation and Recognition of Human Gestures Giuseppe Sanseverino1(B) , Dominik Krumm1 , Lena Kopnarski2 , Julian Rudisch2 , Claudia Voelcker-Rehage2 , and Stephan Odenwald1 1 Department of Sports Equipment and Technology, Chemnitz University of Technology,

Reichenhainer Str. 70, 09126 Chemnitz, Germany [email protected] 2 Department of Neuromotor Behavior and Exercise, University of Münster, Wilhelm-Schickard-Str. 8, 48149 Münster, Germany

Abstract. Humans are able to communicate by a wide variety of means. Gestures often play an important role in this multimodal communication. In order to also ensure robust interaction between humans and machines, it is important that machines are able to recognize human gestures. This typically requires time-consuming subject tests that limit the number of conditions that can be tested. However, by moving these tests from the physical to a virtual environment, each test condition can be evaluated quickly, eliminating the need for numerous repetitions. The purpose of this work was to validate the use of a virtual test environment in comparison to physical testing. This was done by conducting a subject test and developing a virtual model of the human upper limb. The motion profile of the subject performing a simple gesture was recorded with a visual optical motion capture system and used as input for the newly developed virtual model. Acceleration signals captured with an IMU attached to the subject's right wrist were used as a reference signal and compared to signals simulated by a digital twin of the sensor. The pilot study proved the capabilities of the proposed approach and showed some of its limitations.

Keywords: Virtual environment · Simulation · Digital twin · Human body model · Human gestures

1 Introduction

The idea of using virtual environments to represent real-world scenarios has gained ground in recent years. As a result, numerous studies on this topic have been carried out in the field of product design and product development. Dahmen and Rossmann [1] highlighted the benefits of using digital twins and simulation-based engineering to reduce the time and costs associated with the development of complex systems (e.g. reducing or eliminating physical prototyping). Digital twins are widely employed also in Industry 4.0. Cinar et al. [2] reviewed the use of digital twins in this area and concluded


that they can be used to identify problems at an early stage and predict outcomes more accurately. In this way, costs are reduced and equipment life is extended. The use of virtual environment is also common in the fields of biomechanics and ergonomics. Several digital models of the human body are available on the market. Some of them are musculoskeletal models and allow dynamic simulation of movements [3–5]. Others, such as Jack (Siemens PLM Software, Texas, USA), are used to perform ergonomic analysis. Although these are powerful solutions, none of them provide the ability to implement customizable digital twins of sensors to virtually evaluate Body-Attached Sensor Networks (BASNs). The framework proposed by Sanseverino et al. [6] aims to enable virtual assessment of BASNs for gesture recognition. The basic idea behind this methodology is to replace time-consuming subject tests with virtual tests. When developing a BASN, it is necessary to specify the type, number and location of sensors, which is usually done experimentally. This leads to the need to perform separate subject tests for each sensor set-up. Also, since it is impossible to execute a gesture more than once without minimal variation, it is necessary to perform replicates for each condition tested to increase the statistical power of the test. Virtual tests, on the other hand, provide the ability to repeat a gesture in the identical manner each time. Once the simulation is set up, testing a new condition is very fast. However, a prerequisite for the effectiveness of this methodology is that the key features of the signals captured with physical sensors are reliably reproduced by their digital twin. The aim of this work was to validate the methodology proposed by Sanseverino et al. [6]. For this purpose, a multibody model of the human upper limbs was developed based on the proposed framework. Motion data collected during a subject test were used to animate the model. To validate the model, accelerations recorded with an inertial measurement unit (IMU) attached to the subject’s wrist were used as reference signals and compared to accelerations simulated by the IMU’s digital twin.

2 Methods The work consisted of three parts. In the first part, a subject study was conducted to record the motion profile of a basic gesture. In the second part, a virtual multibody model of the human upper limbs was developed and animated using the data from the collected motion profile. In the third part, a simulation was first performed using the multibody model, and then the signals collected in the virtual environment were compared with those in the real environment. 2.1 Subject Test A male subject aged 18 years participated in this pilot study. The subject is right-handed according to the Edinburgh Handedness Inventory [7], has normal vision, no psychiatric or neurological disorders, and no orthopedic impairments of the upper extremities. The test was carried out in accordance with the Declaration of Helsinki. Materials. An optical marker-based motion capture system consisting of ten optoelectronic infrared cameras (5 × Vero and 5 × Vantage, Vicon Motion Systems Ltd, Oxford,


United Kingdom) was used to track the SUBject’s movements. The sampling rate was set to 100 Hz. Sixteen spherical reflective markers with a diameter of 6.4 mm were used for the upper body (head, trunk, shoulders, right arm) and 22 markers with a diameter of 4 mm were used for the right hand (Fig. 1). in addition, an IMU (Movesense, Vantaa, Finland) with a sampling rate of 13.33 Hz was attached to the right wrist.

Fig. 1. Photo of the subject’s hand with reflective markers (white spheres) and an IMU attached to the wrist.

Test Protocol. The test was carried out in the laboratories of the Department of Neuromotor Behavior and Exercise of the University of Münster in Germany. The subject was asked to sit at a desk with the dorsal (back) side of the hand resting on the table. The subject initially performed a predefined action to synchronize the IMU with the Vicon system. To do this, after starting both measurement systems, the subject supinated the hand 180° so that the palmar side rested on the table. The subject then performed the actual movement task. This involved moving the right hand from the resting position R sequentially to positions 1, 2 and 3, with the hand being returned to R after reaching each position (Fig. 2).

Fig. 2. Schematic representation of the arrangement of the subject and the rest position (R) and target positions (1, 2, and 3) marked on the desk.


To determine the joint angles and trajectories of the markers, the data acquired with the motion capture system were analyzed with both the Vicon Plug-In Gait Model and the Hand Model Plug-In developed by GPEM (GPEM srl, Pescara, Italy).

2.2 Modeling of the Human Upper Limb

The human upper limb was modeled using the block modeling principle implemented in SimScape Multibody, the multibody modeling package of the MATLAB Simulink environment (The MathWorks, Inc., Natick, MA, USA). The developed model consists of three rigid bodies (solids). These represent the upper arm, the forearm and the palm, respectively. To simplify the simulation, the fingers were not modeled, especially since the movement performed in the subject test did not require the use of the fingers. The dimensions of each modeled solid were determined by direct measurements on the test subject (Table 1).

Table 1. Dimensions of the model solids representing the upper limbs of the subject.

Model solid | Length (z-axis) | Max radius (x-axis) | Max radius (y-axis)
Upper arm   | 335 mm          | 30 mm               | 30 mm
Forearm     | 270 mm          | 25 mm               | 25 mm
Hand palm   | 100 mm          | 30 mm               | 10 mm

To simulate the subject's movements, joints were placed between each solid to mimic human joints. The shoulder joint was modeled by combining three revolute joints, each providing one rotational degree of freedom. This created a custom joint that can simulate shoulder flexion-extension, abduction-adduction and internal-external rotation. According to its anatomy, the elbow joint has two rotational degrees of freedom: flexion-extension and pronation-supination of the forearm [8]. In the model, the elbow joint was implemented with a revolute joint that only allows the simulation of flexion-extension angles. In fact, the subject's captured motion did not require the degree of freedom associated with pronation-supination of the elbow. Also, to reduce computation times, the wrist was not modeled, so the solid representing the palm was rigidly connected to the lower part of the forearm. To ensure a good agreement between the motion profile obtained from Vicon and the one simulated in the model, it was necessary to add some degrees of freedom. In fact, the Vicon model also considers the movements of the subject's torso and clavicle, while the model developed with the above-mentioned joints considers the shoulder as rigidly connected to the world frame of the simulation environment. To overcome this limitation and to have the modeled shoulder joint in the same absolute position as the Vicon one, a custom joint representing the subject's right clavicle was inserted between the world frame and the modeled shoulder joint. This additional joint provides both three translational degrees of freedom and three rotational degrees of freedom. These


represent, first, the linear displacements of the marker attached to the right clavicle of the subject with respect to the world frame of the Vicon system, and second, the relative angles between the shoulder joint itself and the sternum (elevation-depression, protraction-retraction, axial rotation) [9], which complete the so-called shoulder girdle. In order to provide a clearer representation, graphical solids with spherical shape and without any inertial property were positioned at each joint (Fig. 3).


Fig. 3. Multibody model of the human upper limb showing the subject placing his right hand on a desk. Joints are represented by circles, body segments by solids and the digital twin of an IMU as a cube.

A digital twin of the IMU used in the physical test was implemented just before the model's wrist joint. It can capture triaxial accelerations with reference to the world frame. For ease of visualization, the digital IMU is also represented by a graphical solid that has no mass (black cube in Fig. 3). Simulation. To simulate the motion performed by the subject, the joint angles provided by the Vicon analysis were given as input to the respective joint in the multibody model of the subject's upper limb. The simulation time was set to 10 s. Data Processing. After completing the simulation, signals of interest were exported to MATLAB for processing and comparison. The trajectories taken from the model were resampled at 100 Hz to match the sampling frequency of Vicon. The first and last 5 s were truncated. To compare the trajectories, the difference between the reference and model trajectories was calculated. The absolute values of the maximum and mean error thus defined and the standard deviation were calculated. The acceleration signals were resampled at the sampling rate of the IMU (13.33 Hz) for both the model and the reference. Reference accelerations were trimmed using the predefined action performed by the subject at the beginning of the test. To remove noise, both model and reference signals were filtered using a lowpass filter with a passband frequency of 2 Hz, a steepness of 0.85 and a stopband attenuation of 60 dB. Cross-correlation [10] was calculated to evaluate the time shift between the signals.
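The processing chain just described (resampling, low-pass filtering, cross-correlation) can be sketched as follows. The authors worked in MATLAB; the Python/SciPy version below is only an illustrative re-expression, the array names are hypothetical, and a standard Butterworth filter stands in for the lowpass specification given in the text.

```python
import numpy as np
from scipy import signal

def compare_accelerations(acc_model_100hz, acc_imu, fs_model=100.0, fs_imu=13.33):
    """Resample the simulated acceleration to the IMU rate, low-pass both signals
    (2 Hz cut-off) and locate the cross-correlation peak; a lag of 0 means no
    time shift between model and reference (cf. Fig. 7)."""
    # resample the 100 Hz model output to the IMU sampling rate
    n_out = int(round(len(acc_model_100hz) * fs_imu / fs_model))
    acc_model = signal.resample(acc_model_100hz, n_out)

    # low-pass filtering (4th-order Butterworth used as a stand-in filter design)
    b, a = signal.butter(4, 2.0, btype="low", fs=fs_imu)
    acc_model_f = signal.filtfilt(b, a, acc_model)
    acc_imu_f = signal.filtfilt(b, a, acc_imu[: len(acc_model_f)])

    # normalized cross-correlation and lag of its peak
    xc = signal.correlate(acc_model_f - acc_model_f.mean(),
                          acc_imu_f - acc_imu_f.mean(), mode="full")
    xc /= np.max(np.abs(xc))
    lags = signal.correlation_lags(len(acc_model_f), len(acc_imu_f), mode="full")
    return lags[np.argmax(xc)], xc
```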


3 Results The component of the motion profile along the antero-posterior axis of the subject is the one that contributes the most to the execution of the gesture under examination. Therefore, this axis is taken as reference. To verify the accuracy of the movement performed in the virtual environment, the trajectories of the modeled shoulder and elbow were recorded along the antero-posterior axis. Figure 4 shows the overlapping plots of the antero-posterior trajectories of the marker attached to the subject’s right shoulder (reference) and the trajectories obtained by simulation (model). A mean absolute error of 0.0021 m, a standard deviation of 0.00069 m and an absolute maximum error of 0.0037 m were determined.

Fig. 4. Overlapping plots of the reference antero-posterior trajectory of the shoulder measured during subject test and antero-posterior trajectory of the shoulder in the virtual environment.

Figure 5 shows the antero-posterior reference trajectory of the marker attached to the subject’s right elbow and the one obtained from the model. A mean absolute error of 0.066 m, a standard deviation of 0.059 m and an absolute maximum error of 0.19 m were calculated for the elbow trajectories. To validate the framework proposed by Sanseverino et al. [6], the acceleration signals along the antero-posterior axis measured with an IMU attached to the subject’s right wrist joint and the signals provided by the IMU’s digital twin are compared. Figure 6 reports the signals in the time domain and the associated power spectrum. Figure 7 shows the cross-correlation between the antero-posterior acceleration measured with IMU and the acceleration output by the model. The peak of cross-correlation is located on zero axis, which shows that there is no time shift between the signals.


Fig. 5. Overlapping plots of the reference antero-posterior trajectory of the elbow measured during subject test and antero-posterior trajectory of the elbow in the virtual environment.


Fig. 6. Overlapping plots of the reference acceleration signal along the antero-posterior axis measured with an IMU attached to the subject wrist joint and the one obtained with simulation in the time domain (a). Power spectrum of the signals (b).

Fig. 7. Normalized cross-correlation between the antero-posterior acceleration measured with IMU and the one output by the model.


4 Discussion

The comparison of the trajectories measured at the shoulder joint of the test subject (reference) with those provided by the model shows an almost perfect match (Fig. 4). In addition, the trajectories are characterized by a low error and an even lower standard deviation. Thus, it can be assumed that the modeled shoulder joint and the additional degrees of freedom introduced with the clavicle joint are capable of simulating the behavior of the human shoulder girdle. On the contrary, the trajectory given by the model for the modeled elbow joint does not perfectly match the reference trajectory. Even though the shape of the signals is similar (Fig. 5), there is a considerable difference in their value. In this case, the error reaches a maximum value of 0.19 m (coinciding with the second negative peak). Its mean and standard deviation are also higher. The reason for this worsening of the model behavior is to be found in the kinematic chain associated with the model. The employed virtual environment is very powerful and provides high flexibility in the simulation. However, it is very difficult to control all the rigid transforms required to move from the world frame to more remote frames. Additional constraints are represented by the use of rigid bodies. Although the above-mentioned limitation also affects the measurement of the digital acceleration signals at the modeled wrist joint, it is still possible to compare the reference accelerations of the subject's wrist with those of the model. It is encouraging to note that there is no time lag between the signals, as confirmed by the location of the cross-correlation peak at zero lag (Fig. 7). In addition, the model and reference signals show a comparable frequency response when going from the time to the frequency domain (Fig. 6b). In particular, the acceleration signals have the same fundamental frequency (first peak) of 0.38 Hz.

5 Conclusion and Outlook

This pilot study demonstrated that the virtual environment proposed by Sanseverino et al. [6] has the potential for virtual assessment of Body-Attached Sensor Networks for gesture recognition. The model was capable of reproducing human movements captured during a subject study. However, the accuracy of the reproduction was not constant and worsened as the complexity of the kinematic chain increased. Although a full validation of the framework was not possible, some interesting results were achieved and the main limitations of the proposed model were highlighted. Nevertheless, there is a need to further investigate the proposed framework. To this end, additional subject tests with repeated motions will be performed and the kinematic chain of the model will be further refined.

Acknowledgements. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 416228727 – SFB 1410.


References 1. Dahmen, U., Rossmann, J.: Experimentable digital twins for a modeling and simulation-based engineering approach. In: 2018 IEEE International Systems Engineering Symposium (ISSE), pp. 1–8 (2018). https://doi.org/10.1109/SysEng.2018.8544383 2. Cinar, Z.M., Nuhu, A.A., Zeeshan, Q., Korhan, O.: Digital twins for industry 4.0: a review. In: Calisir, F., Korhan, O. (eds.) Industrial Engineering in the Digital Disruption Era. Lecture Notes in Management and Industrial Engineering, pp. 193–203. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-42416-9_18 3. AnyBody Technology Homepage. https://www.anybodytech.com/software Accessed 8 Feb 2022 4. Delp, S.L., et al.: OpenSim: open-source software to create and analyze dynamic simulations of movement. IEEE Trans. Biomed. Eng. 54(11), 1940–1950 (2007). https://doi.org/10.1109/ TBME.2007.901024 5. Shippen, J., May, B.: BoB – biomechanics in MATLAB. In: Proceedings of 11th International Conference Biomdlore 2016. VGTU Technika, Vilnius Gediminas Technical University, Lithuania (2016). https://doi.org/10.3846/biomdlore.2016.02 6. Sanseverino, G., Krumm, D., Odenwald, S.: A framework for virtual evaluation of bodyattached sensor networks. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) Design Tools and Methods in Industrial Engineering II. Lecture Notes in Mechanical Engineering, pp. 557–568. Springer, Cham (2022). https://doi.org/10.1007/9783-030-91234-5_56 7. Oldfield, R.C.: The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia 9(1), 97–113 (1971). https://doi.org/10.1016/0028-3932(71)90067-4 8. Islam, S.U., Glover, A., MacFarlane, R.J., Mehta, N., Waseem, M.: The anatomy and biomechanics of the elbow. Open Orthop. J. 14(1), 95–99 (2020). https://doi.org/10.2174/187432 5002014010095 9. Sternoclavicular joint–Kenhub. https://www.kenhub.com/en/library/anatomy/sternoclavic ular-joint Accessed 10 Feb 2022 10. Lynn, P.A., Fuerst, W.: Introductory Digital Signal Processing with Computer Applications. 2nd edn, pp. 308–312. Wiley, Chichester (1997)

Energy Efficient Trajectory Planning in Robotic Cells via Virtual Prototyping Tools

Mario Baggetta, Giovanni Berselli, Roberto Razzoli(B), and Marta Zucchinetti

DIME, University of Genova, Via Opera Pia 15/a, 1614 Genova, Italy
[email protected]

Abstract. The constant growth in global energy demand, and the corresponding rise in prices, is spurring new engineering methods for reducing energy consumption in manufacturing processes. For decades, industrial robotics has been enabling quality enhancement of end-products through flexible manufacturing processes, without much concern for energy cost, but a change is now taking place. Calls for sustainable and green manufacturing processes are being promoted across the globe, with the aim of producing more goods with less consumption. In this paper, a new method is presented that focuses on optimizing the energy consumption of industrial robots without the need to change their hardware, only modifying the trajectory planning of the end-effector. A test case scenario consisting of a robotic cell with four pick-and-place manipulators has been set up to validate the method. Starting from a pre-scheduled trajectory, the robots are moved at the highest speed and acceleration and, by performing the sequenced operations, the optimal trajectories are defined. The goal is to find a trajectory that minimizes the cycle time and the total energy consumption while avoiding collisions between the robots' links: comparing the results to those of the pre-scheduled trajectory, a noticeable energy saving has been obtained, along with a possible decrease of the cycle time.

Keywords: Computer-aided engineering tools · Virtual prototyping · Energy efficiency · Trajectory planning · Robot scheduling

1 Introduction

In the last 25 years, there has been a constant expansion in the use of industrial robots, which reached 2.7 million units at the end of 2020, a 12% growth from the previous year. At the same time, global demand is increasingly pushing towards sustainable economies, capable of maximizing energy and process outputs while reducing both costs and the production's environmental footprint. For this reason, energy optimization has become of primary importance in industrial robotics, both for the possibility of reducing energy consumption and for complying with future standards. Taking the highly automated automotive industry as an example, the energy consumption of large industrial robots represents around 8% of the overall worldwide utilization: it is worth noting that almost a quarter of the energy used over the entire


life cycle of a vehicle occurs in its production phase. Energy efficiency is one of the topics of Europe 2020, a ten-year strategy proposed by the European Commission in 2010 for smart and sustainable growth [1]. Among the objectives, in terms of energy sustainability, are: a 20% reduction of greenhouse gas emissions with respect to the previous decade, obtaining 20% of the energy from renewable sources, and a 20% increase in energy efficiency (for this reason, such a target is described as 20-20-20). Increasing efficiency is therefore regarded as one of the best methods to ensure a continuous supply of energy resources and to reduce greenhouse gas emissions and other pollutants. It has been estimated that the 13% jump in efficiency between 1996 and 2007 resulted in an overall saving of 160 MTOE. To quantify the amount of energy not wasted in a specific process, a new unit of measurement, the "Negawatt-hour", was introduced in 1985 by the American physicist Amory Lovins to encourage people to reduce energy consumption. In the world of industrial robotics, reducing the actuators' consumption may be achieved in different ways, such as: by lightening the mechanical system, by using regenerative brakes, or by introducing elastic elements which allow a continuous conversion between potential and kinetic energy. The following section illustrates these and other methods for reducing consumption as a synthetic representation of some research directions.

2 Techniques for Reducing Energy Consumption in Robotics In literature there are numerous examples of methods for cutting down the energy consumption of a robot: they can be divided into two main categories: hardware and software [2–7]. The former ones achieve the goal by applying changes to the physical system, the others operate on the planning of the motion system thus only requiring changes at the software level. 2.1 Hardware Methods The main hardware methods include: the reduction of the weight of the robot, the replacement of traditional drives with regenerative drives (which convert the braking energy into electrical energy that can be reused by the system) and the insertion of compliant elements, such as springs. These, already introduced in robotic systems for gravitational balancing, can be inserted at the joints with a series or parallel configuration. The first one requires the replacement of the actuator, while the other does not and is therefore cheaper and simpler. A generalized approach to reduce the energy consumption of robotic systems avails with regenerative drives and linear springs mounted in parallel [9]: the addition of springs, suitably modeled in terms of stiffness and preload, may represent a benefit for reducing the system’s energy consumption for any arbitrary trajectory. The use of regenerative drives allows a drastic reduction in consumption since the energy is not only taken from the supply, but also from the capacitors. It is possible to further reduce energy consumption by equipping the robot with compliant elements, which accumulate elastic energy and release it as kinetic energy which, together with the actuation system, helps move the robot.


2.2 Software Methods The main one is certainly the optimization of the trajectory: it starts from the dynamic and electromechanical model of the system and arrives at the definition of a function, the energy consumption, that is subsequently optimized. An example of this approach is in [10], where the analytical method of optimization of the trajectory and its experimental verification is performed on two robotic systems: a Cartesian manipulator with linear axis and a test bench consisting of two servomotors, both in the case of direct coupling and with the interposition of a planetary gearbox.

3 Trajectory Planning To avoid confusion between terms that are often used interchangeably, the difference between path and trajectory must be specified. The term path means the set of points, in the space of the joints or in the workspace, which connect the initial position of the robot to the final one. Trajectory, on the other hand, means a specific relationship between time and space that allows to know, at any moment, the corresponding position of the system. In particular, the trajectories can be specified: in the operating space or in the joints space. The trajectories in the operating space represent the place of the points that the end effector must occupy over time and for their implementation it is necessary to resolve the inverse kinematics of the machine, for every instant in time; the trajectories, in fact, can be calculated in the operative space, but they are implemented in the joint space. Alternatively, these can be studied directly in the joints space thus specifying the position, speed and possibly also the acceleration and jerk of the motion axes associated with the individual joints. Assuming that the trajectories are specified in the joints space, their expression will be of the type: q = q(t) where q is the dependent variable, that is an angular or linear position and t is the independent variable. Usually, a manipulator may be required to perform a specific task, such as, for example, passing from the current position to a final position (point-to-point motion), or through a sequence of specified points (motion through a sequence of points). Starting from these constraints, it is possible to define the path to follow (path planning) and through the algorithm of trajectory planning, to obtain the trajectory of the end effector, or the trends of position, speed and acceleration as a function of time. Finally, a control system is needed to verify that the robot is actually following the chosen path. All of these steps are part of the motion planning required for any manipulator. The trajectory may need to satisfy particular constraints such as: the initial/ final position, the passage through intermediate points, the maximum speed/acceleration values, any values assigned at certain points along the speed/acceleration path and limits on jerk to avoid mechanical vibrations that might lead to the destruction of the machine [11–13]. With the same constraint conditions, it is often possible to identify different interpolating curves, so to evaluate which of these is optimal. Starting with the polynomial laws, the simplest expression to define a motion once given the starting and ending positions is the linear one (constant speed, zero acceleration); the parabolic one imposes constant values for the acceleration which can change sign when the system switches


from acceleration to deceleration; the cubic trajectory allows to impose four conditions, respectively the position and speed at the initial and final instant: it is possible to define motion through a sequence of points, imposing the continuity of position and velocity. To obtain trajectories with also acceleration continuity, a total of six conditions are required, therefore a fifth degree polynomial must be adopted. As a rule, when the degree of the polynomial increases, the smoothness increases [14, 15]. The trigonometric trajectories are based on the functions of the same name and have the property of being derived as many times as desired in the definition interval with continuity; however, they may present discontinuities at the initial and final instant. Some examples of trigonometric laws are the harmonic, the elliptical and the cycloidal trajectory. Other laws can be derived by using Fourier series or Spline curves. A type of trajectory widely used, even in the industrial field, is the trapezoidal one, characterized by three phases: the first has a positive and constant acceleration; the second zero acceleration, while the last part consists of negative and constant acceleration. The trapezoidal law is the one used for the case study addressed below.
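As an illustration of the trapezoidal law adopted for the case study, the following sketch generates a point-to-point profile for a single joint from the start and end positions and the maximum velocity and acceleration. It is not the authors' code (Python is used here purely for illustration), and it falls back to a triangular profile when the stroke is too short to reach the cruise velocity.

```python
import numpy as np

def trapezoidal_profile(q0, q1, v_max, a_max, n=200):
    """Point-to-point trapezoidal velocity law: constant acceleration, constant
    cruise velocity, constant deceleration. Returns time, position and velocity."""
    h = abs(q1 - q0)
    sign = 1.0 if q1 >= q0 else -1.0
    if h * a_max <= v_max**2:          # triangular profile: v_max never reached
        t_a = np.sqrt(h / a_max)
        v_c = a_max * t_a
        t_total = 2.0 * t_a
    else:                               # trapezoidal profile
        t_a = v_max / a_max
        v_c = v_max
        t_total = h / v_max + t_a
    t = np.linspace(0.0, t_total, n)
    q = np.empty_like(t)
    v = np.empty_like(t)
    for i, ti in enumerate(t):
        if ti < t_a:                                   # acceleration phase
            q[i] = 0.5 * a_max * ti**2
            v[i] = a_max * ti
        elif ti <= t_total - t_a:                      # constant-velocity phase
            q[i] = 0.5 * a_max * t_a**2 + v_c * (ti - t_a)
            v[i] = v_c
        else:                                          # deceleration phase
            td = t_total - ti
            q[i] = h - 0.5 * a_max * td**2
            v[i] = a_max * td
    return t, q0 + sign * q, sign * v
```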

4 Case Study: A Pick-and-Place Robotic Cell

The robot is the PUMA 560, an industrial manipulator with an anthropomorphic serial architecture and six degrees of freedom. This robotic arm quickly became widely used all over the world and, though nowadays it is obsolete, the fact that its geometry and its dynamic parameters are well known makes it a very suitable model for research [16]. Both the MATLAB Robotics System Toolbox (RTB) and the CAE software RecurDyn were used for the study. The PUMA 560's STEP file was used to create its model in RecurDyn: each body was assigned the proper values of inertia, mass and position of the center of mass present in the Robotics Toolbox (Fig. 1). Then, markers were inserted into the model following the Denavit-Hartenberg parameters; these references were later used to correctly position the revolute joints between one link and the next.

Fig. 1. CAE model of the PUMA robot in Recurdyn.


For the positioning of the six rotational joints present in the manipulator, according to the Denavit-Hartenberg convention, reference was made to the provided parameters (θ, d, a, α). The values used are shown in Table 1. In the literature there are several values of the inertias of the motors [17, 18]; some of them are listed in Table 2, together with the values recommended by Peter Corke (author of the RTB) and used within the Matlab Robotics Toolbox.

Table 1. Denavit-Hartenberg parameters.

Joint | θ  | d       | a      | α
1     | q1 | 0       | 0      | 1.5708
2     | q2 | 0       | 0.4318 | 0
3     | q3 | 0.15005 | 0.0203 | −1.5708
4     | q4 | 0.4328  | 0      | 1.5708
5     | q5 | 0       | 0      | −1.5708
6     | q6 | 0       | 0      | 0

Table 2. Motors inertia figures (values are to be multiplied by 10−6).

Param | Armstrong | Tarn | Kawasaki | Preferred
Jm1   | 291       | 198  | 200      | 200
Jm2   | 409       | 203  | 200      | 200
Jm3   | 299       | 202  | 200      | 200
Jm4   | 35        | 18.3 | 20       | 33
Jm5   | 35        | 198  | 20       | 33
Jm6   | 35        | 198  | 20       | 33
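The Denavit-Hartenberg parameters of Table 1 fully define the manipulator's forward kinematics, which is useful as a cross-check of the marker and joint placement. The sketch below composes the standard DH transforms for the six joints; it is illustrative only (the authors relied on the MATLAB Robotics Toolbox and RecurDyn), and the joint variables q1…q6 are the angles listed in Table 1.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform of one joint, standard Denavit-Hartenberg convention."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# (d, a, alpha) for the six joints, taken from Table 1
DH_TABLE = [(0.0,     0.0,     np.pi / 2),
            (0.0,     0.4318,  0.0),
            (0.15005, 0.0203, -np.pi / 2),
            (0.4328,  0.0,     np.pi / 2),
            (0.0,     0.0,    -np.pi / 2),
            (0.0,     0.0,     0.0)]

def forward_kinematics(q):
    """End-effector pose of the PUMA 560 for joint angles q (6-vector, radians)."""
    T = np.eye(4)
    for qi, (d, a, alpha) in zip(q, DH_TABLE):
        T = T @ dh_transform(qi, d, a, alpha)
    return T

# Example: end-effector position at the zero configuration
print(forward_kinematics(np.zeros(6))[:3, 3])
```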

The last step was to insert the resistant torque due to friction. Finally, it was verified that the calculated torques in RecurDyn were consistent with those obtained from the Matlab RTB by means of a trajectory test between two generic points. Contact management uses four vectors of Boolean variables, created by the calculation software and passed as splines to the RecurDyn environment. The spline values are used to define eight equations that will constitute the On Conditions of the inserted On/Off joints. It has been decided to study the contact problem using simple parts, inserting each link of the robot inside a cylinder or a prism as depicted in Fig. 2. These new solid elements, in addition to host the elements of the robot, have the same dynamic properties and joint positions as the PUMA560. In this way, it is immediate to apply the same motion laws and friction torques to each joint of this new model. In Fig. 2 the envelope parts are shown in transparency. Figure 3 shows all the contacts that have been inserted into the model and which, consequently, compose the control equation.


Fig. 2. PUMA robot members inside simple parts for contact management.


Fig. 3. Contacts in RecurDyn.

A MATLAB function was written to generate four different trajectories (one for each robot), given the waypoints and the duration of each single leg of the route. Furthermore, the program can run RecurDyn in batch mode to verify that no contact occurs between the parts during the motion. After this check has been carried out, MATLAB proceeds with the calculation of the torques, the power and the overall energy of the cell. For the sake of completeness, a flowchart illustrating the different steps performed by the script is shown in Fig. 4.

Fig. 4. Flowchart of the MATLAB script.
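The exact energy model (motor and friction models) is detailed in [3, 19] and is not reproduced here; as a minimal sketch of the final bookkeeping step only, the snippet below integrates the total mechanical power obtained from the joint torque and velocity histories, assuming no energy recovery during braking:

```python
import numpy as np

def cell_energy(time, torques, velocities):
    """Mechanical energy drawn by all the joints of the cell over one cycle.

    time:       (N,) simulation instants [s]
    torques:    (N, n_joints) joint torques [Nm]
    velocities: (N, n_joints) joint angular velocities [rad/s]

    The instantaneous power is taken as sum(|tau_i * omega_i|), i.e. braking
    energy is not recovered; the cycle energy is the trapezoidal integral
    of the power over time.
    """
    power = np.sum(np.abs(torques * velocities), axis=1)
    energy = float(np.sum(0.5 * (power[1:] + power[:-1]) * np.diff(time)))
    return power, energy

# Hypothetical usage with time histories exported from the dynamic simulation:
# t, tau, omega = load_results("cell_run.npz")   # user-provided loader
# power, E = cell_energy(t, tau, omega)
```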


The cell includes four PUMA 560 robots arranged at the vertices of a rectangular work surface, Fig. 5. The robots are named R1, R2, R3, and R4 and the movements they have to perform are as follows:

• Positioning of a perforated plate on the bench by R1
• Homing of R1
• R2 and R3 each take one pin
• Positioning of the pins inside the holes of the plate
• Homing of R2 and R3
• Movement of R4 towards the workbench
• R4 removes the plate
• Homing of R4

Fig. 5. The robotic cell at time zero.

Overall, inside the cell, 26 distinct movements are performed in one work cycle: their durations are modified during the optimization in order to minimize the global energy consumption, Fig. 6. The following maximum and minimum values for the end-effector were considered: vmax = 0.8 m/s; amax = 5 m/s²; vmin = 0.2 m/s; amin = 1 m/s². Before starting the optimizations, some simulations were performed to obtain the energy consumption and the duration of the work cycle in the two limit cases: maximum speed & maximum acceleration, and minimum speed & minimum acceleration. The motor models and the friction models, which are inserted in the RecurDyn environment prior to the simulations, are thoroughly explained in [3, 19] for single and multiple degree-of-freedom systems. The results obtained are reported in Table 3.


Fig. 6. Some snapshots of the 26 movements; R1 (top), R2 | R3 (mid), R4 (bottom).

Table 3. Benchmarks in two borderline conditions: results.

Cycle time (s) | Energy used (J) | Speed and Acc
43.2 | 620 | min-min
13 | 815 | max-max

Four sets of optimization studies were executed (the related results are shown in Table 4 and Fig. 7; a sketch of the constrained runs is given after the list):

1. Optimization of the total cycle time (using a genetic algorithm)
2. Optimization of energy consumption (using a genetic algorithm)
3. Optimization of energy consumption setting Tmax = 13 s (using fmincon [3])
4. Optimization of energy consumption setting, as the maximum time, the result of opt #1 (Tmax = Topt1, using fmincon)
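The constrained runs (#3 and #4) use MATLAB's fmincon; as a rough, language-agnostic analogue (not the authors' code), the sketch below shows the same structure with SciPy's SLSQP solver: minimize an energy evaluation with respect to the 26 segment durations, subject to a cycle-time cap. The energy function is a dummy surrogate, the bounds are illustrative, and the simple sum constraint ignores the fact that the four robots actually move in parallel:

```python
import numpy as np
from scipy.optimize import minimize

N_MOVES = 26                 # one duration per movement in the work cycle
T_MIN, T_MAX = 0.2, 4.0      # illustrative per-move duration bounds [s]

def cell_energy(durations):
    """Placeholder for the coupled MATLAB/RecurDyn evaluation: it should
    generate the trajectories, run the dynamic simulation and return the
    total energy [J]. Replaced here by a dummy surrogate."""
    return float(np.sum(100.0 / durations))   # dummy: faster moves cost more

x0 = np.full(N_MOVES, 1.0)                    # initial guess: 1 s per move
res = minimize(
    cell_energy,
    x0,
    method="SLSQP",
    bounds=[(T_MIN, T_MAX)] * N_MOVES,
    constraints=[{"type": "ineq",             # cycle-time cap: sum(d) <= 13 s
                  "fun": lambda d: 13.0 - np.sum(d)}],
)
print(res.x, res.fun)
```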


Table 4. Optimizations results.

Optimization | E [J] | Var to 815 J | Time [s] | Var to 13 s
1 | 708.2 | −13% | 9.2 | −30%
2 | 624.9 | −23% | 15.5 | +20%
3 | 637.6 | −22% | 13.0 | 0
4 | 679.1 | −17% | 9.2 | −30%

Fig. 7. Energy and cycle time for the 4 optimisation runs.

The entire optimization model can be schematized as in Fig. 8.

Fig. 8. Optimisation scheme.

A comparison among the optimizations shows that, if time were not a significant concern, the maximum efficiency would be given by opt #2: its energy consumption is very close to the value of the min-min case, but with a cycle time of 15.5 s instead of a huge 43.2 s. However, the best option is arguably opt #4 because, compared to opt #2, for only 9% more energy the cycle time drops from 15.5 s to 9.2 s, granting a productivity about 70% higher. The power diagrams (Fig. 9) show, as expected, that opt #2, by spreading the energy over a longer time, produces smaller peaks: this could also come in handy when designing the cell. Note that, in the first two simulations (namely, cases #1 and #2), since not even a rough estimation of the potential saving was available, nor a tentative initial solution to be employed as a starting point for a gradient-based optimizer, stochastic optimization (i.e., genetic algorithms) was preferred. In the second run, so as to further refine the final solution, the genetic-algorithm outcome is employed as the initial guess for the gradient-based optimizer (namely, fmincon). Also, the cycle time of 13 s in Table 3 has been set as the reference benchmark solution, since it is obtained by enforcing all the manipulators to move in sequence at their maximum speed. Finally, note that a trapezoidal motion law has been employed, as it represents the most common law in heavy-duty industrial robots [6]. Nonetheless, a comparison with more complex profiles (employable, e.g., in co-bots) will be the subject of future research.

Fig. 9. Comparison among the power trends for the four optimizations.

5 Conclusion

The goal of this work was to create a robotic cell model capable of providing energy consumption figures as the working conditions vary. The model was built using the MATLAB Robotics Toolbox and the RecurDyn software environment. The former was used for the definition of the trajectories of the different robots, given the durations of the single sections, and for the torque evaluation; the latter was used to put the cell in motion and verify that there were no contacts between the parts. A further study could concern the search for optimal speeds and accelerations to reduce the energy consumption, passing to the trajectory-generation function no longer the times of the individual sections of the routing, but the peak speeds and accelerations. Another aspect worth deepening is the generation of trajectories. Robot manufacturers do not provide precise information on the logic that manages the movement of the robot, so it is impossible to replicate it exactly. This generates an inconsistency between the trajectories used in the simulations and the real ones. Even high-level software, used for the simulation of a robotic cell and equipped with algorithms very similar to those used in real robots, makes mistakes in predicting the robots' movement. A possible future development could be the study of more recent robots, such as the KUKA KR210, the use of specific software for the generation of trajectories like KUKA.Sim 4.0 [22], and an investigation on how RecurDyn may be used for optimizing the work-cell layout (similarly to [20]). Also, a Pareto front combining the two objective functions (energy and cycle time) could be generated. In conclusion, the use of advanced simulation systems, applied to a complete model, makes it possible to effectively optimize the design of a robotic cell, considerably reducing its energy demand.

References

1. https://ec.europa.eu/eu2020/pdf/COMPLET%20EN%20BARROSO%20%20%20007%20-%20Europe%202020%20-%20EN%20version.pdf. Accessed July 2021
2. Meike, D., Pellicciari, M., Berselli, G.: Energy efficient use of multirobot production lines in the automotive industry: detailed system modeling and optimization. IEEE Trans. Autom. Sci. Eng. 11(3), 798–809 (2020)
3. Meike, D., Pellicciari, M., Berselli, G., Vergnano, A., Ribickis, L.: Increasing the energy efficiency of multi-robot production lines in the automotive industry. In: IEEE International Conference on Automation Science and Engineering, pp. 700–705 (2012)
4. Hovgard, M., Lennartson, B., Bengtsson, K.: Applied energy optimization of multi-robot systems through motion parameter tuning. CIRP J. Manuf. Sci. Technol. 35, 422–430 (2021)
5. Gadaleta, M., Pellicciari, M., Berselli, G.: Optimization of the energy consumption of industrial robots for automatic code generation. Robot. Comput. Integr. Manuf. 57, 452–464 (2018)
6. Siciliano, B., Sciavicco, L., Villani, L., Oriolo, G.: Robotics: Modelling, Planning and Control. Springer, London (2009). https://doi.org/10.1007/978-1-84628-642-1
7. Pellicciari, M., Berselli, G., Leali, F., Vergnano, A.: A minimal touch approach for optimizing energy efficiency in pick-and-place manipulators. In: 15th International Conference on Advanced Robotics (ICAR), pp. 100–105 (2011)
8. Vergnano, A., et al.: Embedding detailed robot energy optimization into high-level scheduling. In: 6th Annual IEEE Conference on Automation Science and Engineering, pp. 386–392 (2010)
9. Palomba, I., Wehrle, E., Carabin, G., Vidoni, R.: Minimization of the energy consumption in industrial robots through regenerative drives and optimally designed compliant elements. Appl. Sci. 10(21), 74–75 (2020)
10. Carabin, G., Scalera, L.: On the trajectory planning for energy efficiency in industrial robotic systems. Robotics 9(4), 89 (2020)
11. Ata, A.A.: Optimal trajectory planning of manipulators: a review. J. Eng. Sci. Technol. 2, 32–54 (2007)
12. Scalera, L., Carabin, G., Vidoni, R., Gasparetto, A.: Minimum-energy trajectory planning for industrial robotic applications: analytical model and experimental results. In: Zeghloul, S., Laribi, M.A., Sandoval Arevalo, J.S. (eds.) RAAD 2020. MMS, vol. 84, pp. 334–342. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-48989-2_36
13. Biagiotti, L., Melchiorri, C.: Trajectory Planning for Automatic Machines and Robots. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85629-0
14. Riazi, S., Bengtsson, K., Lennartson, B.: From trapezoid to polynomial: next-generation energy-efficient robot trajectories. In: 13th IEEE Conference on Automation Science and Engineering (CASE), Xi'an, China (2017)
15. Berselli, G., Balugani, F., Pellicciari, M.: Energy-optimal motions for Servo-Systems: a comparison of spline interpolants and performance indexes using a CAD-based approach. Robot. Comput. Integr. Manuf. 40, 55–65 (2016)
16. Corke, P.I., Armstrong-Hélouvry, B.: A search for consensus among model parameters reported for the PUMA 560 robot. In: Proceedings of the 1994 IEEE International Conference on Robotics and Automation, vol. 2, pp. 1608–1613 (1994)
17. Armstrong, B., Khatib, O., Burdick, J.: The explicit dynamic model and inertial parameters of the Puma 560 arm. In: IEEE International Conference on Robotics and Automation, Washington, USA, vol. 1, pp. 510–518 (1986)
18. Tarn, T.J., Bejczy, A.K., Han, S., Yun, X.: Inertia parameters of PUMA 560 robot arm. Technical Report SSM-RL-85-01, Washington University, St. Louis, MO (1985)
19. Pellicciari, M., Berselli, G., Balugani, F.: On designing optimal trajectories for servo-actuated mechanisms: detailed virtual prototyping and experimental evaluation. IEEE/ASME Trans. Mechatron. 20(5), 2039–2052 (2015)
20. Gadaleta, M., Berselli, G., Pellicciari, M.: Energy-optimal layout design of robotic work cells: potential assessment on an industrial case study. Robot. Comput. Integr. Manuf. 47, 102–111 (2017)
21. https://it.mathworks.com/help/optim/ug/fmincon.html. Accessed Sept 2021
22. KUKA.Sim: KUKA AG (2021)

Design and Implementation of a Passive Agitator to Increase UV Dose in WWTPs Disinfection Channels

Delia Trifi1, Javier Climent2, Rosario Arnau3, Pablo Carratalà3, Mairena García4, Inés Beltrán5, Cristobal Badenes5, Sergio Chiva1, and Raúl Martínez-Cuenca1(B)

1 Department of Mechanical Engineering and Construction, Universitat Jaume I, Ave. Vicent Sos Baynat s/n, 12071 Castelló de la Plana, Spain
{trifi,raul.martinez}@uji.es
2 CEO, Hydrens, Ave. del Mar 53, 12003 Castelló de la Plana, Spain
3 R&D Technician, Hydrens, Ave. del Mar 53, 12003 Castelló de la Plana, Spain
4 WWTP Coordinator, FACSA, St. Mayor 82-84, 12001 Castelló de la Plana, Spain
5 Castelló Town Hall, Sq. Mayor 1, 12001 Castelló de la Plana, Spain

Abstract. UV disinfection is the preferred technology for treating clear waters given its low environmental impact and moderate economic cost. In the field of wastewater treatment, UV systems are still not so efficient, since the suspended solids absorb and scatter the UV light. Given that the solids concentration changes significantly over time, it is difficult to design UV disinfection units that perform well under all working conditions. This work was performed to optimize the commercial UV disinfection channel installed in the WWTP of Castelló, Spain. The analysis of this facility with the aid of Computational Fluid Dynamics (CFD) helped to identify the main hydrodynamic causes of the reduction of the UV dose. It was detected that UV radiation does not reach the intermediate regions between lamps for high concentrations of suspended solids. In addition, the flow in this region has poor mixing, so the pathogens whose paths pass through these dark regions receive barely any UV dose. This hydraulic fault can be solved by inserting multitubular passive agitators to increase the mixing, thanks to the turbulent vortex shedding after each tube. First, an advanced CFD setting serves to find the optimal tube diameter. The simulation accounts for the full transient resolution of irradiation (Monte Carlo model), flow turbulence (LES model) and dose distribution (Lagrangian model). Next, the optimal multitubular structure was placed in the UV channel to analyze the E. Coli contents for several configurations, where the most favorable combination increased the inactivation by 1.1 decades.

Keywords: CFD Simulation · UV Disinfection · Analysis and optimization · Wastewater treatment

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 626–636, 2023. https://doi.org/10.1007/978-3-031-15928-2_55

1 Introduction

European directives for the regulation of urban wastewater treatment impose increasing requirements on the effluent [1]. To comply with the new quality standards, wastewater treatment plants (WWTPs) need to improve their tertiary treatment and further reduce organics, turbidity, nitrogen, phosphorus, metals, and pathogens. This is of special importance when the effluent is expected to be reused for irrigation, recreational purposes or even as drinking water. At present, chlorination is still the most common disinfection method applied in wastewater treatment plants. Nevertheless, it is no longer the best option for reclaimed water treatments because of potential problems with disinfection by-products [14]. Thus, alternative disinfection methods have stood out due to the increasingly stringent environmental regulation. The most promising is the use of UV radiation to inactivate the reproduction of microorganisms through alterations in DNA, causing the formation of thymine dimers [6]. In order to deliver the UV dose required for the inactivation of pathogenic particles, a uniform radiation distribution is needed. In practice this is not possible due to the high UV absorption caused by the total suspended solids (TSS). This is illustrated in Fig. 1, which depicts the UV distribution between the lamps for high TSS concentrations. In this figure, the UV lamps are drawn as purple squares (top and bottom), and the light intensity between them is represented by a contour plot, with red representing higher values and blue representing no intensity. Given the high absorption of the solids, the light emitted by the UV lamps is limited to a small region close to the lamps. From Fig. 1(a), it is apparent that bacteria with paths close to the disinfection tubes receive an excess of UV dose, whereas bacteria that pass close to the center receive practically no dose at all. To improve the disinfection performance, the system must be modified so as to increase the turbulence in the flow, thus generating multiple pathogenic paths between the lamps. Given these new trajectories (Fig. 1(b)), pathogens pass alternately close to the lamps and to the center as they advance. Consequently, all pathogens receive a similar dose. To sum up, the system performance strongly relies on a proper hydrodynamic design. The objective of this work is to increase the turbulence in the regions between the UV lamps. With this enhanced hydrodynamics, the pathogens describe paths that pass through areas where high UV radiation reaches them and others where it is much lower, integrating the dose along each pathogenic path. Thus, with this imposed behavior, all the pathogens receive a proper UV dose at the end of the facility. This improved performance can be achieved by inserting a multitubular passive agitator before each UV bank.

2 CFD Modeling Setup

Fig. 1. (a) Non-uniform irradiation between lamps; (b) ideal performance of the installation.

In this section, the geometry of the proposed passive agitator is optimized thanks to the use of an advanced Computational Fluid Dynamics (CFD) model. The optimal design was implemented in the WWTP, demonstrating a clear enhancement of the overall inactivation. This integrated tool to evaluate the UV disinfection performance was developed with a commercial CFD software, CFX 2020R2, which provides the code to implement and execute these submodels: turbulence, radiation and particle tracking (for the E. Coli paths). Also, CFD results were analysed using Octave 6.2.0 to calculate the dose provided for every pathogenic particle.

2.1 Case Study: Geometry and Meshing

This experimental analysis takes place at the wastewater treatment plant (WWTP) of Castelló (Spain), where Fomento Agrícola Castellonense, S.A (FACSA) holds the operation and maintenance agreement for this public treatment facility. The UV disinfection system analyzed is an open channel (see Fig. 2), whose dimensions are 93 cm wide, 100 cm deep and 9 m long. There are two disinfection banks in it, each one having 40 lamps placed in a matrix arrangement (8 columns and 5 rows). Each UV lamp has a diameter of 4.8 cm and a length of 2 m. There is a separation of 2 m between the two UV banks, so the full length of the treatment region is 6 m. Since it is a channel and not a closed reactor, its original configuration is easily modifiable. The passive agitator is composed of a set of 7 vertical tubes placed 10 cm before the first UV bank, with a tube between each pair of UV lamp columns. This choice reduces the risk of equipment damage, using a structure with reduced cost.


Fig. 2. Open channel UV disinfection system.

Note that the simulation of the full channel would require a huge computational cost, both in terms of memory and of computational time (>10 million nodes). To accelerate the study, a simplified geometry was proposed. The new domain represents the gap between four lamps of one of the UV banks (Fig. 3) and one vertical tube at the entrance. This simplified domain entails a much lower computational cost (600,000 nodes), affording exhaustive parametric studies to optimize the performance of the proposed solutions. This reduced region represents 86% of the flow area (modeling the top part of the channel as free slip), so it is considered representative of the whole disinfection performance. It is worth mentioning that the spaces between these tubes and the walls are so reduced that there is no significant reduction in the UV irradiation and no structures can be placed in them.

Fig. 3. (a) Geometrical dimensions, (b) Discretized domain.

2.2 Turbulence Model

The open channel UV disinfection system installed in the WWTP was modeled with the aim of capturing the real-time hydrodynamic behavior inside the system. Large Eddy Simulation (LES) [15] is the model capable of describing the largest eddies formed inside the UV system during the transient resolution. According to the reference bibliography [7,9,12,14], practically none of the studies on CFD inactivation performance has implemented the LES model. This purely transient model is used with the aim of providing a better description of the turbulent vortex detachment after the cylindrical tubes. This improvement is illustrated in Fig. 4, which compares CFD results for the velocity fields after a cylinder using the Standard k-ε model [3] (Fig. 4(a)) and the LES model (Fig. 4(b)). On the one hand, the Standard k-ε model provides a smooth velocity field after the cylindrical obstacle, predicting almost no mixing of the inlet flow. On the other hand, the LES model reproduces the characteristic series of alternating vortices detaching after the cylinder. This provides better mixing and particle paths similar to those represented in Fig. 1(b).

2.3 Lagrangian Approach for Simulating Reactor Performance

The paths followed by the individual E. Coli bacteria were simulated using a Lagrangian approach. This framework treats the organisms as discrete particles, with a density of 998.2 kg/m³, integrating at every time step the dose received along each trajectory. Each pathogenic path is predicted from a force balance on the particle that involves buoyancy, lift and drag forces. Random effects of turbulence on the pathogenic particles are accounted for by the Discrete Random Walk (DRW) model [2,12].

2.4 Radiation Model

The UV irradiation field [2] generated by the UV tubes is modeled with a Monte Carlo radiation model. This model assumes that the radiation is emitted as photons from luminous surfaces following a uniform angular distribution. Light absorption is modeled as a statistical process between photons and the fluid, so every individual photon is tracked and at every time step may disappear or change its path according to the probabilities of absorption and scattering, respectively. Thousands of photon histories need to be computed to get accurate UV-fluence rate values, which increases the computational cost. In this work, light absorption is considered to be independent of the E. Coli concentration, so the UV light field is computed in the first iterations and remains constant for the remaining simulation time.

2.5 Disinfection Kinetics Model

Fig. 4. Flow velocity fields after a cylindrical obstacle using (a) the k-ε model (averaged values) and (b) the LES model.

Biodosimetry [7,12] is one of the methods of choice for logarithmic inactivation studies of UV-disinfection systems. Usually, the strains used in these studies are lab-grown and non-pathogenic, so their dose response for UV inactivation can be modeled by the conventional first-order Chick-Watson model, where the pathogenic concentration reduction rate is described as:

dC/dt = −kµ · Iⁿ · C    (1)

In this equation, C represents the microorganism concentration, kµ is the inactivation rate constant [cm²/mJ], I is the UV irradiance [mW/cm²] and n is the coefficient of dilution, which is usually taken as 1 [14].

Fig. 5. Inactivation curves extracted from the literature review.

Several studies have reported an increased UV resistance of environmental bacteria compared to lab-grown strains, particularly when dealing with waters containing suspended solids. This means that a higher UV fluence is required to reach the same level of inactivation as in clean waters with lab-grown strains. As a result, the conventional Chick-Watson model is unable to fit the behavior of the pathogenic particles [5] (Fig. 5). A literature review of inactivation kinetic models based on the target pathogen (Escherichia Coli) has been carried out [4–6,8,10,11,13,16]. Figure 5 shows several experimental inactivation curves corresponding to different waters and E. Coli strains. The dispersion in the results is apparent, and the current state of the art cannot explain these differences from the characteristics of the waters and bacteria. According to the conclusions of the Hijnen et al. (2006) review [6], each study should use its own inactivation curve. This procedure is not always possible, especially when dealing with rapidly changing waters such as those used in wastewater treatment. Consequently, this approach uses the dose as the parameter for the optimization, and the inactivation is considered to be just an estimation provided by the model. In this study the chosen model is the one determined by Chang et al., 1985 [6], as it considers the well-known tailing effect (the inactivation tends to stabilize at high doses for non-clean waters) but is not as conservative as others [5].
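As a minimal illustration of how a dose and a log inactivation can be accumulated along one Lagrangian path (using the plain first-order Chick-Watson law with n = 1 rather than the tailing model of Chang et al. adopted in the paper; the rate constant and the sample path below are purely illustrative):

```python
import numpy as np

def log_inactivation(irradiance, dt, k=0.5):
    """First-order Chick-Watson response along one pathogen path (n = 1).

    irradiance: UV fluence rate I(t) sampled along the trajectory [mW/cm^2]
    dt:         time step between samples [s]
    k:          inactivation rate constant [cm^2/mJ] (illustrative value)

    dC/dt = -k * I * C integrates to C/C0 = exp(-k * dose), with
    dose = integral of I dt [mJ/cm^2]; the log10 reduction follows.
    """
    dose = np.sum(irradiance) * dt
    decades = k * dose / np.log(10.0)
    return decades, dose

# Hypothetical path: 2 s in a bright region, 8 s in a dark one (0.01 s steps)
I_path = np.concatenate([np.full(200, 10.0), np.full(800, 0.05)])
decades, dose = log_inactivation(I_path, dt=0.01)
```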

3 CFD Results

The simulation results for the simplified domain in Fig. 3 are summarized in Fig. 6. Firstly, Fig. 6.(a) compares the dose distribution corresponding to the original configuration and those using tubes. The plot analyzes the distribution for different tube diameters, showing an optimal performance for 20 mm. In this case (Fig. 6.(b)), the inactivation is increased by 0.5 decades (factor of 3 in dose).

4 Experimental Results

Fig. 6. (a) Diameter optimization study; (b) improvement of the E. Coli inactivation using the optimal passive agitator (green line) versus the original configuration (red line).

As stated previously, the proposal was the insertion of a set of vertical tubes before the first UV bank. The tubes are standard PVC pipes with an outer diameter of 20 mm, according to the CFD study. To avoid possible damage to the structures, a single array of tubes was placed at the inlet of the first UV bank, keeping 10 cm between this structure and the first UV bank (Fig. 7). These tubes were set vertically in such a way that they were facing the gaps between the UV lamp columns.


Fig. 7. Passive agitator placed into the channel.

4.1 Velocity Measures

The flow field was characterized with a Vectrino sensor (Nortek AS). This high-resolution acoustic velocimeter registers the flow fluctuations produced by the analyzed structure. Indeed, it was found that the structure actually produced a strong increase in the turbulence field (Fig. 8). In particular, for points located between two lamps of the same row, the original configuration had turbulence intensities of 50%, 60% and 30% in the axial, lateral and vertical directions, respectively. When the new structure was installed, these turbulence intensities increased up to 150%, 230% and 90%, respectively. This large increase in the turbulence was expected to produce a corresponding increase in the dispersion of the E. Coli bacteria.

Fig. 8. Vectrino signals in the three spatial coordinates, for both configurations.
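The turbulence intensities quoted above can be estimated from the raw velocimeter time series; the sketch below uses one common normalization (fluctuation standard deviation divided by the mean axial velocity magnitude), since the paper does not state which definition was used, and the file name is hypothetical:

```python
import numpy as np

def turbulence_intensities(u, v, w):
    """Per-component turbulence intensity, normalized by the mean axial
    velocity magnitude (one common convention, assumed here)."""
    u, v, w = (np.asarray(x, dtype=float) for x in (u, v, w))
    u_ref = max(abs(np.mean(u)), 1e-12)      # guard against zero mean
    return np.std(u) / u_ref, np.std(v) / u_ref, np.std(w) / u_ref

# Hypothetical Vectrino record: columns = axial, lateral, vertical velocity
# samples = np.loadtxt("vectrino_point.csv", delimiter=",")
# ti_axial, ti_lateral, ti_vertical = turbulence_intensities(
#     samples[:, 0], samples[:, 1], samples[:, 2])
```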

4.2 Water Quality Measures and Comparison Between Systems

Escherichia coli strains were cultured by the IPROMA (Eurofins Scientific) laboratories. Samples were collected before and after the UV bank, both with the original configuration and with the passive agitator placed in its position. In addition, these tests were combined with the dosage of hydrogen peroxide at the inlet of the channel. From these analyses, it was found that the insertion of the passive agitator increased the inactivation by just 0.2 decades. This value is well below the one provided by the CFD simulation. Note that the optimal configuration considers the tubes inside the banks, whereas the actual device had to be placed outside the banks to avoid damaging the tubes. Also, in the CFD results the inactivation magnitude depends on the kinetic model considered, i.e. other models would provide different inactivation improvements. In any case, this improvement was higher than that achieved by using peroxide alone (only 0.1 decades). The best result was obtained thanks to the combined use of passive agitation and peroxide, with an improvement of 1.1 decades.
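For clarity, the "decades" quoted above are base-10 logarithmic reductions of the measured concentrations; the snippet below shows the computation with purely hypothetical counts:

```python
import numpy as np

def inactivation_decades(cfu_in, cfu_out):
    """Logarithmic inactivation in decades from counts (e.g. CFU/100 mL)
    measured before and after the UV bank."""
    return float(np.log10(cfu_in / cfu_out))

# Hypothetical counts: a 0.2-decade improvement corresponds to a further
# factor of 10**0.2 ~ 1.6 reduction in the surviving concentration.
base = inactivation_decades(1.0e5, 2.0e3)                 # original channel
improved = inactivation_decades(1.0e5, 2.0e3 / 10**0.2)   # with the agitator
```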

5 Conclusions

The first part of this study provides an advanced setting for modelling the inactivation of pathogens in UV disinfection systems, integrating flow turbulence, irradiation and dose distribution. With the aid of CFD, the UV-dose distribution of the original system was obtained, highlighting its hydrodynamic faults and the problems with high TSS concentrations. This CFD tool has also been used to enhance the hydrodynamic performance, selecting the optimal diameter for the multitubular passive agitator (d = 20 mm). In practice, this structure had to be placed outside the bank (10 cm before the entrance of the first UV bank) to prevent damage to the structures (especially the UV lamps). The introduction of just one multitubular low-cost passive agitator improved the E. Coli inactivation by 0.2 decades. This improvement is higher than that obtained by injecting peroxide. Finally, the combined use of peroxide injection (with a similar flow rate) and the proposed agitator produced a large increase of 1.1 decades.

References

1. Regulation (EU) 2020/741 of the European Parliament and of the Council of 25 May 2020 on minimum requirements for water reuse (Text with EEA relevance). Official J. Eur. Union L 177, 32–55 (2020). https://data.europa.eu/eli/reg/2020/741/oj
2. ANSYS: ANSYS® CFX-Solver Theory Guide, Release 2020 R2. ANSYS, Inc. https://www.ansys.com
3. Blocken, B.: LES over RANS in building simulation for outdoor and indoor applications: a foregone conclusion? Build. Simul. 11(5), 821–870 (2018). https://doi.org/10.1007/s12273-018-0459-3
4. Harris, G.D., Adams, V.D., Sorensen, D.L., Curtis, M.S.: Ultraviolet inactivation of selected bacteria and viruses with photoreactivation of the bacteria. Water Res. 21, 687–692 (1987). https://doi.org/10.1016/0043-1354(87)90080-7
5. Hassen, A., Mahrouk, M., Ouzari, H., Cherif, M., Boudabous, A., Damelincourt, J.J.: UV disinfection of treated wastewater in a large-scale pilot plant and inactivation of selected bacteria in a laboratory UV device. Biores. Technol. (1999). https://doi.org/10.1016/S0960-8524(99)00179-0
6. Hijnen, W.A., Beerendonk, E.F., Medema, G.J.: Inactivation credit of UV radiation for viruses, bacteria and protozoan (oo)cysts in water: a review. Water Res. 40, 3–22 (2006). https://doi.org/10.1016/j.watres.2005.10.030
7. Ho, C.K., Khalsa, S.S., Wright, H.B., Wicklein, E.: Modeling UV disinfection using integrated computational fluid dynamics and discrete ordinates radiation models. In: Disinfection and Reuse Symposium (2009). https://www.osti.gov/biblio/1141718
8. Mounaouer, B., Abdennaceur, H.: Bacteriological quality of effluent submitted consecutively to a macrofiltration and ultraviolet light systems in the Tunisian conditions. J. Environ. Health Sci. Eng. 13, 3 (2015). https://doi.org/10.1186/s40201-015-0154-6
9. Munoz, A., Craik, S., Kresta, S.: Computational fluid dynamics for predicting performance of ultraviolet disinfection – sensitivity to particle tracking inputs. J. Environ. Eng. Sci. 6, 285–301 (2007). https://doi.org/10.1139/S06-045
10. Qualls, R.G., et al.: Factors controlling sensitivity in ultraviolet disinfection of secondary effluents. Water Pollution Control Federation 57, 1006–1011 (1985). https://www.jstor.org/stable/25042770
11. Sommer, R., Lhotsky, M., Haider, T., Cabaj, A.: UV inactivation, liquid-holding recovery, and photoreactivation of Escherichia coli O157 and other pathogenic Escherichia coli strains in water. J. Food Prot. 63, 1015–1020 (2000). https://doi.org/10.4315/0362-028X-63.8.1015
12. Sozzi, D.A., Taghipour, F.: UV reactor performance modeling by Eulerian and Lagrangian methods. Environ. Sci. Technol. 40, 1609–1615 (2006). https://doi.org/10.1021/es051006x
13. Vermeulen, N., Keeler, W.J., Nandakumar, K., Leung, K.T.: The bactericidal effect of ultraviolet and visible light on Escherichia coli. Biotechnol. Bioeng. 99, 550–556 (2008). https://doi.org/10.1002/bit.21611
14. Wols, B.: CFD in drinking water treatment. Ph.D. thesis, Delft University of Technology (2010)
15. Zhiyin, Y.: Large-Eddy simulation: past, present and the future. Chin. J. Aeronaut. 28, 11–24 (2015). https://doi.org/10.1016/j.cja.2014.12.007
16. Zimmer, J.L., Slawson, R.M.: Potential repair of Escherichia coli DNA following exposure to UV radiation from both medium- and low-pressure UV sources used in drinking water treatment. Appl. Environ. Microbiol. 68, 3293–3299 (2002). https://doi.org/10.1128/AEM.68.7.3293-3299.2002

Numerical Optimization of a Composite Sandwich Panel with a Novel Bi-directional Corrugated Core Using an Animal-Inspired Optimization Algorithm

Giuseppe Marannano(B), Tommaso Ingrassia, Vito Ricotta, and Vincenzo Nigrelli

Engineering Department, University of Palermo, Viale delle Scienze, 90128 Palermo, Italy
[email protected]

Abstract. Composite sandwich panels with honeycomb, corrugated, tetrahedral, trapezoidal, 3D periodic and hybrid lattice cores have long been studied for their use in various industrial fields. In this study, several numerical analyses were conducted in the ANSYS APDL environment in order to analyze the effect of a novel bi-directional corrugated core configuration on the flexural performance of a CFRP sandwich panel. In particular, the sandwich core is obtained by repeating a regular unit cell in two different directions to form a three-dimensional lattice structure. In order to determine the optimal values of the geometrical parameters of the core unit cell and to evaluate how the layout of the composite laminate could affect the mechanical performance of the structure, a numerical study was conducted by using the Group Search Optimizer (GSO) algorithm, a metaheuristic animal-inspired optimization algorithm used to solve various real-world problems. The obtained results show that the GSO algorithm is very effective in optimizing the main geometrical parameters of the composite sandwich panel with the novel bi-directional corrugated core. More generally, the implemented procedure provides an open framework to solve complex optimization problems that are very difficult to solve using exact methods, making the GSO algorithm particularly attractive for many industrial applications.

Keywords: Numerical optimization · Group search optimizer · Finite element analysis · ANSYS

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 637–651, 2023. https://doi.org/10.1007/978-3-031-15928-2_56

1 Introduction

In the last years, composite sandwich panels have been used in a wide range of structural applications in order to realize lightweight structures designed to guarantee a high stiffness-to-weight ratio, high fatigue life, high damage tolerance and high load capacity, both in compression and in bending. For specific applications, e.g. in automotive, aerospace and marine transportation, the technology used for the core realization is significant. In fact, composite sandwich panels with honeycomb, corrugated, tetrahedral, trapezoidal, 3D periodic, origami-based fold-cores and hybrid lattice cores have been broadly analyzed for their high mechanical properties in these specific industrial fields. In contrast to the commonly used polymeric foam core, structural cores provide optimal stiffness and strength characteristics, good stability under compressive loads, as well as high energy-absorbing capacity and good corrosion resistance.

Among composite cores, corrugated cores offer significant potential for applications in composite construction [1, 2]. In more detail, a corrugated core is constituted by a corrugated composite sheet that keeps the skins apart, allowing a high strength-to-weight ratio. In addition to guaranteeing high bending stiffness and shear strength, an important feature of these structural cores is their ability to guarantee optimal ventilation characteristics, preventing the moisture problems that are common in polymeric foam or honeycomb cores [1, 3]. However, despite the remarkable mechanical characteristics of the corrugated core panel, its noticeable bending properties are strongly dependent on the core arrangement. The simplest solution that can be used to reduce this "not always desired" behavior is to place the corrugated core in both the longitudinal and the transverse direction (bi-directional corrugated core). In more detail, in this work a corrugated composite sheet was cut into strips (ribbons) that were periodically cross-combined in two different directions and assembled into a novel three-dimensional lattice structure using epoxy resin.

An analysis of the influence of the different parameters involved in defining the core geometry is a complex task if advanced numerical techniques are not used in the study [4–7]. In fact, reaching the (near-)optimum solution is considerably complicated if, as in the current study, the number of independent variables involved in the analysis is greater than or equal to ten. Recently, several metaheuristic algorithms have been successfully employed for solving very complex problems [8]. Metaheuristic is a term coined by Glover in 1986; it denotes a high-level, problem-independent algorithmic framework that provides a strategy to "guide" the search process. The goal is to efficiently explore the search space in order to find (near-)optimal solutions. Among them, nature-inspired optimization algorithms are metaheuristic techniques developed from the principles of biological evolution, swarm behavior, and physical and chemical processes [9]. Nature-inspired optimization techniques are computational methods that incorporate intelligence techniques within the code. They generate solutions that are close to the optimum (even if not exactly the global optimum) in a finite, reasonable amount of time. In the literature there are several metaheuristic techniques: simulated annealing [10], neural networks [11], genetic algorithms [12], particle swarm [13], ant colony optimization [14], bacterial foraging [15], cuckoo search [16], bat algorithm [17], firefly algorithm [18–20], group search optimizer [21, 22], etc. Among them, the Group Search Optimizer (GSO) algorithm is an approximate method, conceptually simple and easy to implement, which is widely used in the literature to analyze benchmark functions but which, until now, had never been implemented in a finite element solver.
In this work, the GSO algorithm was implemented in ANSYS Academic Research Mechanical APDL (release 2022 R1) environment and extensively used to determine the influence of different design variables involved in the definition of the studied bidirectional corrugated core.

Numerical Optimization of a Composite Sandwich Panel

639

2 Novel Bi-directional Corrugated Core

As shown in Fig. 1, the corrugated-core geometry is defined by repeating a unit cell, characterized by a trapezoidal profile, in two orthogonal directions (x and y). A series of unit cells constitutes a ribbon.

Fig. 1. Schematic representation of the novel bi-directional corrugated core.

In more detail, the bi-directional corrugated core is realized by bonding with epoxy resin four different series of ribbons in two different directions (x and y) and on four different layers (increasing z-value) in order to form a three-dimensional lattice structure. Due to the particular stacking sequence, the unit cells of layers 1 and 3 (odd layers) and the unit cells of layers 2 and 4 (even layers) have the same geometry. Specifically, with reference to Fig. 2, the unit cells 1 and 3 have the same size, as well as the unit cells 2 and 4. Figure 2 and Table 1 also show the main geometric parameters of the unit cells.

Fig. 2. Main geometric parameters of the novel bi-directional corrugated core.

Table 1. Main geometric parameters of the unit cells

Parameter | Unit cell 1 (Unit cell 3) | Unit cell 2 (Unit cell 4)
Height of the unit cell | h = 3 mm | h = 3 mm
Geometric angle of the unit cell | 10° ≤ δ1 ≤ 60° | 10° ≤ δ2 ≤ 60°
Bonding length on x direction [mm] | 2 mm ≤ l1 ≤ 6 mm | 2 mm ≤ l2 ≤ 6 mm
Stacking sequence | [α1, α2]s | [α3, α4]s
Ply stacking angles | α1 = 0°, 0° ≤ α2 ≤ 180° | α3 = 0°, 0° ≤ α4 ≤ 180°
Ply thickness [mm] | 0.05 ≤ Tk1 ≤ 0.45, 0.05 ≤ Tk2 ≤ 0.45 | 0.05 ≤ Tk3 ≤ 0.45, 0.05 ≤ Tk4 ≤ 0.45

To realize the core structure, a carbon fiber/epoxy composite laminate (CFRP) made of unidirectional layers of Cycom 5320 carbon fiber prepreg was considered [23]. The stacking sequence of the composite laminate is shown in Fig. 3(b), with reference to the local coordinate system situated at the vertex of each area of the unit cell (Fig. 3a).

Fig. 3. (a) Local coordinate system; (b) stacking sequence of the composite laminate for the unit cell 1 and 3

In more detail, as shown in Fig. 3(b), the ribbon consists of a symmetric laminate according to the lay-up sequences shown in Table 1. Two CFRP composite laminates (skins), each made of 6 layers of unidirectional carbon/epoxy prepreg with stacking sequence [0, 90, 0]s, are bonded to the corrugated core by means of epoxy resin. The thickness of the single composite layer is equal to 0.25 mm. Therefore, the total thickness of each skin is equal to 1.5 mm.

Table 2 shows the main mechanical properties of the carbon fiber prepreg. The numerical subscript (1) denotes the direction of the fiber, (2) the in-plane direction transverse to the fibers and (3) the through-thickness direction of each lamina. The letter subscript denotes tensile (t) and compressive (c), while the capital letter F denotes the ultimate stress values.

Table 2. Mechanical properties of Cycom 5320 carbon fiber prepreg

E11 (GPa) | E22 = E33 (GPa) | G12 = G13 (GPa) | G23 (GPa) | ν12 = ν13 | ν23
141 | 9.7 | 5.2 | 3.4 | 0.34 | 0.44

F1t (MPa) | F1c (MPa) | F2t (MPa) | F2c (MPa) | F6 (MPa)
2703 | 1737 | 81 | 312 | 57

F3c = −F3t (MPa) | F4 (MPa) | F5 (MPa)
106 | 106 | 106

The corrugated ribbons can be obtained by cutting into strips a corrugated laminate produced with a properly machined mold (see Fig. 4a). The mold can also be obtained by means of an additive manufacturing process [24]. Figure 4(b) shows the three-dimensional lattice structure of the core.

Fig. 4. (a) corrugated mold; (b) three-dimensional corrugated core.

The mechanical properties of the composite sandwich panel realized with the novel bi-directional corrugated core have been obtained numerically with a three-point bending loading configuration. As shown in Fig. 5, the main dimensions are W = 40 mm, L = 200 mm, H = 12 mm, Ls = 180 mm. The applied load is equal to P = 1000 N.


Fig. 5. Schematic representation of the three-point bending test configuration.

3 Group Search Optimizer (GSO)

In order to study the influence of the main parameters that define the geometry of the novel bi-directional corrugated core, a metaheuristic optimization algorithm called Group Search Optimizer (GSO) was implemented in the ANSYS APDL environment. The GSO algorithm was proposed by He et al. in 2006 [21] and it is based on animal searching behavior. This algorithm, as evidenced by several literature studies, shows good search performance for complex structural optimization problems. As defined in the original version of the algorithm, there are three kinds of member in the available n-dimensional search space:

a) Producers, which search for opportunities (i.e. food);
b) Scroungers, which perform strategies to join the resources found by the other members (in particular by the producers);
c) Rangers, which perform random searches to avoid entrapment in local minima.

At the k-th searching iteration, the member located at the most promising resource is the producer, a specified number of members are classified as scroungers, and the remaining members are selected as rangers.

3.1 Scanning Mechanism of the Producers

A general animal scanning mechanism is employed for the producers. In more detail, the scanning field in 3D space is a series of wedges or cones [22], characterized by a maximum pursuit angle θmax and a maximum pursuit distance lmax (Fig. 6).


Fig. 6. General scanning field in 3D space

In particular, in an n-dimensional search space, θmax = π/a², with a = round(√(n + 1)), and

lmax = √( Σ_{i=1..n} (u_i − l_i)² )

where u_i and l_i are, respectively, the upper and lower bounds of the i-th dimension. In this work, considering that the variables vary from 0 to 1, lmax = √n is the largest diagonal of a unitary n-dimensional hypercube. At each iteration, the producer searches the space for optimal resources. At the k-th iteration, the member with the best fitness is named X_p^k = (x_1^k, x_2^k, ..., x_n^k), where each value in the vector ranges from 0 to 1. The scanning mechanism of the producer consists in searching at zero degrees (forward or z direction), one point on the right side (r direction) and one point on the left side (l direction) of the hypercube. In more detail, at the new iteration, the routine determines the three following vectors:

X_z^k = X_p^k + r_1 lmax D^k(φ^k)                      (1)

X_r^k = X_p^k + r_1 lmax D^k(φ^k + r_2 θmax/2)         (2)

X_l^k = X_p^k + r_1 lmax D^k(φ^k − r_2 θmax/2)         (3)

where r_1 is a normally distributed random number (with mean 0 and standard deviation 1), r_2 ∈ [0, 1], D is a unit vector responsible for the search direction (see Eq. 4) and φ is the head angle vector (see Eq. 5):

D^k(φ^k) = (d_1^k, d_2^k, d_3^k, ..., d_n^k) ∈ R^n      (4)

φ^k = (φ_1^k, φ_2^k, φ_3^k, ..., φ_{n−1}^k) ∈ R^{n−1}   (5)

The components of D^k(φ^k) can be obtained from the following relations:

d_1^k = Π_{i=1..n−1} cos(φ_i^k)
d_j^k = sin(φ_{j−1}^k) Π_{i=j..n−1} cos(φ_i^k)   (j = 2, 3, ..., n − 1)   (6)
d_n^k = sin(φ_{n−1}^k)

At the first iteration, the initial head angle vector φ^1 is set to (π/4, π/4, ..., π/4). The following values are obtained by Eq. 7:

φ^{k+1} = φ^k + r_2 αmax                               (7)

where αmax = θmax/2 is the maximum turning angle. If the producer cannot find a better result after "a" iterations, the head angle vector is reset to its initial value.

3.2 Scrounging Mechanism

In this work, the commonest scrounging behavior described in [22] is adopted. In more detail, at each iteration, all the selected scroungers perform a movement toward the producer (see Eq. 8):

X_S^{k+1} = X_S^k + r_3 ∘ (X_P^k − X_S^k)              (8)

where r_3 ∈ [0, 1] and the operator "∘" denotes the Hadamard (element-wise) product of the two vectors. During the iterations, if the algorithm computes a better solution for a scrounger, the software automatically updates the positions of the producer and the scrounger.

3.3 Ranger Simulations

In the GSO algorithm, dispersed members are called rangers. In particular, several rangers can be associated with each producer. Each ranger performs a random walk in the available search space. In the APDL routine, contrary to the common procedures, three different possibilities for the ranger simulations are available: a) randomly selecting the design variables in the whole search space; b) randomly selecting the design variables in a producer-centric search space; c) a combination of a) and b). During the iterations, if the algorithm computes a better solution for a ranger, the software automatically updates the positions of the producer and the ranger.
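To make the interplay of producer, scroungers and rangers concrete, the following self-contained Python sketch implements one simplified GSO iteration for a minimization problem on [0, 1]^n. It is not the authors' APDL implementation: the head-angle bookkeeping of Eqs. 4–7 is replaced by random unit directions, and the fitness shown is a dummy surrogate (in the paper the fitness is the panel weight, with feasibility enforced through the Tsai-Wu index):

```python
import numpy as np

rng = np.random.default_rng(0)

def gso_step(members, fitness, n_scroungers=3, n_rangers=3):
    """One simplified GSO iteration over members in [0, 1]^n (minimization)."""
    n = members.shape[1]
    l_max = np.sqrt(n)                       # diagonal of the unit hypercube

    scores = np.array([fitness(m) for m in members])
    order = np.argsort(scores)
    producer = members[order[0]].copy()

    # Producer: sample three points at distance r1*l_max (Eqs. 1-3, simplified)
    best, best_f = producer, scores[order[0]]
    for _ in range(3):
        direction = rng.normal(size=n)
        direction /= np.linalg.norm(direction)
        cand = np.clip(producer + abs(rng.normal()) * l_max * direction, 0, 1)
        f = fitness(cand)
        if f < best_f:
            best, best_f = cand, f

    # Scroungers: move toward the producer (Eq. 8)
    for idx in order[1:1 + n_scroungers]:
        r3 = rng.random(n)
        members[idx] = members[idx] + r3 * (best - members[idx])

    # Rangers: random restart in the whole search space
    for idx in order[1 + n_scroungers:1 + n_scroungers + n_rangers]:
        members[idx] = rng.random(n)

    members[order[0]] = best
    return members, best, best_f

# Hypothetical usage: 7 members, 10 design variables, surrogate fitness
pop = rng.random((7, 10))
for _ in range(80):
    pop, x_best, f_best = gso_step(pop, fitness=lambda x: np.sum((x - 0.3) ** 2))
```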

4 Finite Element Analysis

The GSO algorithm was implemented in several macros written in the ANSYS APDL language [25–29]. The geometry of the composite sandwich panel was modeled by means of the parametric definition of keypoints and areas. Shell181 elements, suitable for analyzing thin to moderately thick shell structures, were used to discretize the ribbon areas of the corrugated core and the skin surfaces. An average element size of 0.5 mm was used to discretize the model. Orthotropic material properties were defined using the same laminate coordinate system shown in Fig. 3. As shown in Table 3, ten Design Variables (DV) were defined for the optimization problem (see also Table 1). In particular, the design variables DV_i (i = 1, 2, …, 10) are decimal numbers in the range [0, 1]; a specific ANSYS macro converts each decimal number into a variable allowing the definition of the geometry of the corrugated core.

Table 3. Design variables of the optimization problem

Variable | Increments | Design Variable (DV) | Increments
Geometric angle of the unit cell 1 and 3: 10° ≤ δ1 ≤ 60° | Free (rounded to integer value) | 0 ≤ DV_1 ≤ 1 | Free
Geometric angle of the unit cell 2 and 4: 10° ≤ δ2 ≤ 60° | Free (rounded to integer value) | 0 ≤ DV_2 ≤ 1 | Free
Bonding length on x direction [mm]: 2 ≤ l1 ≤ 6 | 0.5 mm | 0 ≤ DV_3 ≤ 1 | 0.125
Bonding length on y direction [mm]: 2 ≤ l2 ≤ 6 | 0.5 mm | 0 ≤ DV_4 ≤ 1 | 0.125
Ply stacking angles of the unit cell 1 and 3: 0° ≤ α2 ≤ 180° | Free (rounded to integer value) | 0 ≤ DV_5 ≤ 1 | Free
Ply stacking angles of the unit cell 2 and 4: 0° ≤ α4 ≤ 180° | Free (rounded to integer value) | 0 ≤ DV_6 ≤ 1 | Free
Ply thickness of the unit cell 1 and 3 [mm]: 0.05 ≤ Tk1 ≤ 0.45 | 0.05 mm | 0 ≤ DV_7 ≤ 1 | 0.125
Ply thickness of the unit cell 1 and 3 [mm]: 0.05 ≤ Tk2 ≤ 0.45 | 0.05 mm | 0 ≤ DV_8 ≤ 1 | 0.125
Ply thickness of the unit cell 2 and 4 [mm]: 0.05 ≤ Tk3 ≤ 0.45 | 0.05 mm | 0 ≤ DV_9 ≤ 1 | 0.125
Ply thickness of the unit cell 2 and 4 [mm]: 0.05 ≤ Tk4 ≤ 0.45 | 0.05 mm | 0 ≤ DV_10 ≤ 1 | 0.125
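The APDL macro that converts each normalized DV into a physical parameter is not listed in the paper; the following sketch shows one mapping that is consistent with the increments of Table 3 and reproduces the values of Table 5 from the best design variables of Table 4, so it should be read as a reconstruction rather than the authors' code:

```python
import math

def dv_free(dv, lower, upper):
    """'Free' variable: linear scaling, rounded to the nearest integer."""
    return round(lower + dv * (upper - lower))

def dv_stepped(dv, lower, upper, step):
    """Discretized variable: dv selects one of the allowed increments."""
    n_intervals = round((upper - lower) / step)
    return lower + math.floor(dv * n_intervals) * step

# Check against Tables 4 and 5 (best member found by the GSO run):
delta1 = dv_free(0.8992, 10, 60)               # -> 55 (deg)
delta2 = dv_free(0.9679, 10, 60)               # -> 58 (deg)
l1     = dv_stepped(0.6546, 2.0, 6.0, 0.5)     # -> 4.5 (mm)
tk1    = dv_stepped(0.0809, 0.05, 0.45, 0.05)  # -> 0.05 (mm)
```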

As an example, the following figures show, respectively, the geometry of the composite sandwich panel for δ1 = δ2 = 10°, l1 = l2 = 2 mm (Fig. 7a) and for δ1 = δ2 = 60°, l1 = l2 = 2 mm (Fig. 7b).


Fig. 7. ANSYS model for different geometric characteristics: (a) δ 1 = δ 2 = 10°, l 1 = l 2 = 2 mm; (b) δ 1 = δ 2 = 60°, l 1 = l 2 = 2 mm

At the end of each analysis, in order to determine whether the composite sandwich panel reaches the failure condition, an appropriate macro calculates the failure index IF. As already done in a previous work [21], the failure index IF was determined by using the Tsai-Wu criterion. In more detail, failure is predicted when IF ≥ 1 in at least one node of the numerical model.

4.1 Optimization Study

As input parameters, 3 producers, 3 scroungers and 3 rangers were defined as members of the structural optimization problem. The first analysis involves the execution of random simulations in order to initialize the first feasible members. In more detail, an APDL routine generates random design variables (see Table 3) that allow the definition of the numerical model of the composite sandwich panel. The 10 design variables are stored in a row of a specific array; each row is defined as a "member" of the analysis. The ANSYS macros automatically apply the load and the constraints on the model (with the same load configuration shown in Fig. 5), determining the Tsai-Wu failure index IF, the stiffness and the total weight of the composite. The achievement of a Tsai-Wu failure index IF ≤ 1 was selected as the State Variable (SV), while the achievement of the minimum weight was selected as the Objective Function Value (OFV) of the optimization problem. At the end of the random analysis, the routine sorts the feasible members according to increasing OFV, defining the first optimal geometry of the composite sandwich panel (first best member). In the subsequent iteration cycles, the operations carried out during the execution of the GSO algorithm are the following:

1) For each producer member, the routine scans at zero degrees and then scans laterally by using Eqs. 1 to 3. If the weight of the new geometry is less than the optimal one, the macro generates the sandwich composite model and then calculates the stiffness and the Tsai-Wu failure index IF. This member is considered "feasible" only when IF ≤ 1; in this case, the current member has a better resource than the optimal one and, therefore, a new best member has been found. Otherwise the producer will stay in its current position and turn its head to a new angle using Eq. 7. If the producer cannot find a better area after "a" iterations, it will turn its head back to zero degrees, initializing the head angle vector.
2) As already mentioned, for each producer member, the analysis considers three scroungers. Each scrounger member performs a movement toward the producer using Eq. 8. If the algorithm computes a better weight of the composite than the optimal one, the software automatically generates the model, verifying compliance with the condition IF ≤ 1. In this case, the algorithm updates the positions of the producer and the scrounger, redefining the new best member.
3) For each producer member, three rangers perform random searches as already described in Sect. 3.3 (methodology c).

As shown in Fig. 8, the GSO algorithm needs fewer than 80 iterations to converge to the optimal solution. In particular, Fig. 8 shows the convergence analysis for 10 independent runs of the algorithm. The analysis time is approximately 1.45 h on a Windows-based workstation equipped with a Xeon E5-2630 2.4 GHz CPU and 32 GB of RAM. The minimum weight of the composite sandwich panel is equal to W* = 32.67 g, with a Tsai-Wu failure index IF = 0.998. The optimal design variables are shown in Table 4. Table 5 shows, instead, the optimal values of the main geometric parameters of the novel bi-directional corrugated core.

Fig. 8. Convergence analysis

Table 4. Best design variables of the composite sandwich panel

DV_1 | DV_2 | DV_3 | DV_4 | DV_5 | DV_6 | DV_7 | DV_8 | DV_9 | DV_10
0.8992 | 0.9679 | 0.6546 | 0.4852 | 0.0130 | 0.8423 | 0.0809 | 0.7285 | 0.9352 | 0.7317

Table 5. Best geometric parameters of the composite sandwich panel

δ1 | δ2 | l1 [mm] | l2 [mm] | α2 | α4 | Tk1 [mm] | Tk2 [mm] | Tk3 [mm] | Tk4 [mm]
55° | 58° | 4.5 | 3.5 |  | 152° | 0.05 | 0.3 | 0.4 | 0.3

Figure 9(a) shows the optimal geometry of the core, while Fig. 9(b) shows a detail of the discretization realized on the model.

Fig. 9. (a) Optimal geometry of the corrugated core; (b) detail of the mesh realized on the model.

The maximum deflection of the composite sandwich panel is equal to zmax = 1.21 mm (see Fig. 10) while the corresponding stiffness is equal to K = 826.5 N/mm.

Fig. 10. Maximum deflection of the optimal geometry of the composite sandwich panel


A specific study was also conducted to verify the influence of purely random analyses on the determination of the optimal result. In particular, 5000 independent random analyses were run, recording in a specific array only the unique results obtained (see Fig. 11).

Fig. 11. Totality of random analyses

In particular, out of 5000 random iterations, 1103 unique results were recorded. The best random result is equal to 37.61 g; the best result obtained by using the GSO algorithm is therefore about 13% lower. Moreover, the random analyses were concluded after 82.6 h, which is about 57 times longer than the time needed to complete the numerical simulations using the GSO algorithm.

5 Conclusions

The Group Search Optimizer (GSO) is an optimization algorithm inspired by animal behavior and used to solve highly constrained problems. In the literature, the algorithm has been used to analyze systems in several subject areas, e.g. engineering, computer science, robotics, etc., confirming it to be one of the most promising methods and providing, in some cases, higher performance than other important heuristic methods. However, until now, the algorithm had never been implemented in a finite element solver. In this work, the GSO algorithm is implemented in the ANSYS APDL environment in order to determine the optimal geometric parameters of a novel bi-directional corrugated core used to realize a CFRP composite sandwich panel. In particular, the optimization study involves the determination of the minimum weight of the analyzed model subject to three-point bending loading. The study shows that, using the methodology of analysis presented here, it is possible to optimize the structure of the core with high accuracy (the optimum is up to 13% better than that obtained from only random analyses), with a significantly reduced analysis time (about 57 times lower) and satisfactory repeatability of the results.


In conclusion, the GSO algorithm can be considered an interesting alternative method for solving complex optimization problems. The implementation of the presented code in the ANSYS environment offers the prospect of extension to the optimization of large-scale models, to the analysis of structures with complicated geometries (considering linear or nonlinear effects) and to the determination of material properties where analytical solutions cannot be easily obtained.

References 1. Rejab, M.R.M., Cantwell, W.: The mechanical behaviour of corrugated-core sandwich panels. Compos. Part B: Eng. 47, 267–277 (2013) 2. Yang, J.-S., Liu, Z.-D., Schmidt, R., Schröder, K.-U., Ma, L., Wu, L.-Z.: Vibration-based damage diagnosis of composite sandwich panels with bi-directional corrugated lattice cores. Compos. Part A: Appl. Sci. Manuf. 131, article number 105781 (2020) 3. Marannano, G., Mariotti, G.V.: Structural optimization and experimental analysis of composite material panels for naval use. Meccanica 43(2), 251–262 (2008) 4. Marannano, G., Parrinello, F., Giallanza, A.: Effects of the indentation process on fatigue life of drilled specimens: optimization of the distance between adjacent holes. J. Mech. Sci. Technol. 30(3), 1119–1127 (2016) 5. Ingrassia, T., Nalbone, L., Nigrelli, V., Pisciotta, D., Ricotta, V.: Influence of the metaphysis positioning in a new reverse shoulder prosthesis. In: Eynard, B., Nigrelli, V., Oliveri, S., Peris-Fajarnes, G., Rizzuti, S. (eds.) Advances on Mechanics, Design Engineering and Manufacturing . Lecture Notes in Mechanical Engineering, pp. 469–478. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-45781-9_47 6. Marannano, G., Pasta, A., Parrinello, F., Giallanza, A.: Effect of the indentation process on fatigue life of drilled specimens. J. Mech. Sci. Technol. 29(7), 2847–2856 (2015) 7. Giallanza, A., Marannano, G., Pasta, A.: Structural optimization of innovative rudder for HSC. In: NAV International Conference on Ship and Shipping Research (2012) 8. Martí, R., Pardalos, P.M., Resende, M.G.C.: Handbook of Heuristics. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-07124-4 9. Lindfield, G., Penny, J.: Introduction to Nature-Inspired Optimization. Academic Press Elsevier, Cambridge (2017) 10. Kirkpatrick, S., Gelatt Jr., C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983) 11. Grossberg, S.: Nonlinear neural networks: principles, mechanisms, and architectures. Neural Netw. 1(1), 17–61 (1988) 12. Holland, J.H.: Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press, Ann Arbor (1975) 13. Shi, Y., Eberhart, R.C.: Parameter selection in particle swarm optimization. In: Porto, V.W., Saravanan, N., Waagen, D., Eiben, A.E. (eds.) EP 1998. LNCS, vol. 1447, pp. 591–600. Springer, Berlin, Heidelberg (1998). https://doi.org/10.1007/BFb0040810 14. Dorigo, M., Birattari, M., Stützle, T.: Ant colony optimization. IEEE Comput. Intell. Mag. 1(4), 28–39 (2006) 15. Biswas, A., Dasgupta, S., Das, S., Abraham, A.: Synergy of PSO and bacterial foraging optimization—a comparative study on numerical benchmarks. In: Corchado, E., Corchado, J.M., Abraham, A. (eds.) Innovations in Hybrid Intelligent Systems. Advances in Soft Computing, vol. 44, pp. 255–263. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-749721_34


16. Bhandari, A.K., Singh, V.K., Kumar, A., Singh, G.K.: Cuckoo search algorithm and wind driven optimization based study of satellite image segmentation for multilevel thresholding using Kapur’s entropy. Expert Syst. Appl. 41(7), 3538–3560 (2014) 17. Gandomi, A.H., Yang, X.S., Alavi, A.H., Talatahari, S.: Bat algorithm for constrained optimization tasks. Neural Comput. Appl. 22(6), 1239–1255 (2013) 18. La Scalia, G., Micale, R., Giallanza, A., Marannano, G.: Firefly algorithm based upon slicing structure encoding for unequal facility layout problem. Int. J. Ind. Eng. Comput. 10, 349–360 (2019) 19. Micale, R., Marannano, G., Giallanza, A., Miglietta, P.P., Agnusdei, G.P., La Scalia, G.: Sustainable vehicle routing based on firefly algorithm and TOPSIS methodology. Sustain. Futures 1, 100001 (2019) 20. Marannano, G., Ricotta, V.: Firefly algorithm for structural optimization using ANSYS. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. Lecture Notes in Mechanical Engineering, pp. 593–604. Springer, Cham (2022). https://doi. org/10.1007/978-3-030-91234-5_59 21. He, S., Wu, Q.H., Saunders, J.R.: A novel group search optimizer inspired by animal behavioural ecology. In: 2006 IEEE Congress on Evolutionary Computation, CEC 2006, Vancouver (2006) 22. Li, L.-J., Xu, X.-T., Liu, F., Wu, Q.H.: The group search optimizer and its application to truss structure design. Adv. Struct. Eng. 13(1), 43–51 (2010) 23. Technical data sheet of CYCOM® 5320-1 Prepreg (2020). https://www.solvay.com 24. Ricotta, V., Campbell, R., Ingrassia, T., Nigrelli, V.: Additively manufactured textiles and parametric modelling by generative algorithms in orthopaedic applications. Rapid Prototyp. J. 26(5), 827–834 (2020) 25. Ingrassia, T., Mancuso, A.: Virtual prototyping of a new intramedullary nail for tibial fractures. Int. J. Interact. Des. Manuf. 7(3), 159–169 (2013) 26. Mirulla, A.I., Bragonzoni, L., Zaffagnini, S., Bontempi, M., Nigrelli, V., Ingrassia, T.: Virtual simulation of an osseointegrated trans-humeral prosthesis: a falling scenario. Injury 49(4), 784–791 (2018) 27. Cerniglia, D., Ingrassia, T., D’Acquisto, L., Saporito, M., Tumino, D.: Contact between the components of a knee prosthesis: numerical and experimental study. Frattura ed Integrita Strutturale 22, 56–68 (2012) 28. Cappello, F., Ingrassia, T., Mancuso, A., Nigrelli, V.: Methodical redesign of a semitrailer. WIT Trans. Built Environ. 80, 359–369 (2005) 29. Barbero E.J.: Finite Element Analysis of Composite Materials Using ANSYS®, 2nd edn. CRC Press, Boca Raton (2014)

A Topology Optimization Approach to Design of a Low Cost 3D Printable Monolithic Compliant Mechanism for FWMAV's Wing Actuation

Giovanni Carollo(B), Tommaso Ingrassia, Antonio Pantano, Vincenzo Nigrelli, and Marco Cosimo Tripoli
Università degli Studi di Palermo, Palermo, Italy
[email protected]

Abstract. Compliant Mechanisms, Topology Optimization and low-cost 3D printing technologies have been exploited in a combined design approach aimed at the development of a Flapping Wing Micro Air Vehicle's wing actuation mechanism. A series of topology optimization analyses was performed to explore four different design domains, each with a specific positioning of the supports. Subsequently, the obtained topologies were geometrically remodeled and tailored to comply with the 3D printing process parameters, resulting in several monolithic Compliant Mechanisms. The different remodeled mechanisms were finally compared in terms of stress and range of movement through non-linear transient FEM analyses. Although the designed compliant mechanisms move at high rotation frequencies (about 25 Hz) and undergo large deflections, the obtained results are interesting with regard to maximum stresses and rotation angle amplitudes, paving the way to future design improvements, both by deepening fatigue issues and by implementing size and shape optimization. Keywords: Compliant Mechanisms · Topology optimization · Flapping Wing Micro Air Vehicles · Additive manufacturing

1 Introduction Compliant Mechanisms (CMs) are jointless monolithic structures exploiting elastic deformation to transmit motion and forces, unlike rigid-body mechanisms, which accomplish their task through rigid elements and joints assembled in a multi-component structure. The ability to design increasingly complex and high-performing compliant mechanisms has undergone continuous improvements, making it possible to apply this technology in a wide range of applications. The literature indeed provides examples of CM applications in areas ranging from surgery [1–3] and neurosurgery [4] to micro/nano-positioning stages [5, 6], stroke amplification of piezoelectric actuators [7], lamina-emergent mechanisms [8], lamina-emergent compliant kaleidocycles [9], variable-diameter wheels [10] and bio-inspired compliant wrists [11]. Recently, CMs' flexibility has also been applied to


tackle the Covid-19 diagnosis issue through robotized swab sampling integrating sensors and passive compliant mechanisms [12]. Being able to store part of the input energy in the form of elastic deformation energy, compliant mechanisms represent a remarkable opportunity to improve efficiency, mostly in application areas such as unmanned robotic vehicles. In particular, the field of unmanned air vehicles (UAVs) could take advantage of the reduction of weight, friction and part count provided by compliant mechanisms, since energy losses and excessive weight negatively affect flight autonomy. Flapping Wing Micro Air Vehicles (FWMAVs) [13, 14], whose flying mechanism is inspired by insects and birds, represent some of the most interesting UAVs thanks to their reduced dimensions and their ability to fly successfully in confined environments. The lift necessary for FWMAVs' flight is commonly generated by the alternate movement of their wings around a vertical axis. Since FWMAVs waste a substantial amount of input power to reverse the wings' alternate motion and withstand inertial forces, a lower energy consumption could be achieved by replacing rigid multi-component transmission systems with flexible monolithic compliant mechanism transmission systems. Unfortunately, due to its complexity, the design of CMs for FWMAVs can be a very difficult and time-consuming task. For this reason, several studies [15–18] have proposed combined approaches involving either structural optimization [19–23] or pseudo-rigid-body modeling and mechanism synthesis methods to design high-performance compliant mechanisms. In particular, among structural optimization methods, Topology Optimization (TO) can represent an effective tool when facing challenging compliant mechanism designs and can provide the designer with innovative configurations that would not have been intuitively easy to model. Considering the above observations, the present study investigates the use of topology optimization to design 3D printable compliant mechanisms suitable for the wing actuation of FWMAVs. Four different optimization domains were explored with TO, each one having a different positioning of the supports. The topologies resulting from the optimization process were then geometrically remodeled and tailored so as to comply with the Additive Manufacturing (AM) process parameters.

2 Materials and Methods The choice of the material and of the 3D printing technology strictly affects CM performance, since the monolithic structures should integrate both rigid areas, to avoid buckling instabilities, and highly flexible areas, to ensure wide output rotations. In this paper, the Fused Deposition Modeling (FDM) process has been selected, taking advantage of its lower cost and wide choice of materials, and because it allows good quality components to be produced [24]. Considering the maximum overall dimension (70 mm × 60 mm) of the layouts which will be discussed later, and also based on previous experience in FDM printing of compliant mechanisms [25], it was decided to limit the minimum thickness of the flexible areas to 1 mm. This is due both to the inherent technological limit of FDM, which cannot reliably print sub-millimeter features, and to the demanding operating conditions undergone by the thin flexible areas in high-frequency movement. Preliminarily, some of the most common FDM materials such as


ABS, PLA, Nylon 645, Ninjaflex, FlexiFil and Onyx have been tested through FEM analyses so as to identify the appropriate balance between flexibility and stiffness of the structure. Also taking into account a previous study on a similar compliant FWMAV wing actuation mechanism [25], the choice fell on the FlexiFil material [26], a partially bio-based thermoplastic co-polyester (TPC) whose main mechanical properties are shown in Table 1.

Table 1. FlexiFil™: mechanical properties

Specific gravity        1.14 [g/cc]
Young's modulus         95 [MPa]
Tensile strength        24 [MPa]
Elongation at break     530%
Hardness                45 D

The topology optimization method has been preferred over other structural optimization methods because of the complex motion transformation required of the compliant mechanism, namely the transformation of the continuous rotary input motion from the electric motor into an alternating symmetrical rotational output movement of the two wings. In particular, the Solid Isotropic Material with Penalization (SIMP) approach has been used [27, 28] because of its computational efficiency and conceptual simplicity. The purpose of transforming the continuous rotary input motion of an electric motor into an alternating symmetrical rotational output movement of the wings required a way of decoupling the two motions despite the use of a continuous monolithic structure. The authors thus proposed a possible solution conceived by ideally splitting the continuous rotary input motion into two separate and alternating orthogonal linear motions. This partitioning was useful both to define a proper initial optimization domain and to configure the preliminary FEM analyses to be used in the topology optimization process. Based on similar existing FWMAV prototypes, it was decided to design a mechanism performing a complete flapping cycle at a frequency of 25 Hz. Four different initial design domains, each having a different positioning of the supports, have been investigated. 2.1 Setup of FEM Analyses for Topology Optimization The meshes of the four design domains are reported in Fig. 1, discretized with around 36,000 elements having a minimum dimension of 0.5 mm. All layouts have a uniform thickness of 2.5 mm; layouts a, b and c have a maximum overall planar dimension of 60 mm × 60 mm, while layout d is slightly wider (70 mm × 60 mm) because of the positioning of the low-friction bearings in the direct proximity of the two wing fastening areas. In Fig. 1, along with the CM initial design domains, two steel low-friction bearings are depicted for each layout, having an internal and an external diameter of 2 mm and 5 mm respectively, and a thickness of 2.5 mm. As mentioned before, with the intent of ideally splitting the continuous rotary input motion and decoupling it from the alternating symmetrical rotational output movement of the wings, a rod-like rectangular area (10 × 30 mm) is added on the lower part of the model.
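The SIMP scheme mentioned above penalizes intermediate densities so that the optimizer is driven toward a crisp solid/void layout. The following minimal, normalized sketch (with an assumed penalization exponent p = 3; not the ANSYS implementation used in the paper) shows the interpolation of the element stiffness as a function of the design density:

```python
import numpy as np

def simp_modulus(rho, E0=1.0, E_min=1e-9, p=3.0):
    """SIMP interpolation: element stiffness as a function of the design density rho in [0, 1].

    Intermediate densities are penalized (p > 1), which pushes the optimizer toward
    a clear solid/void layout. E0 and E_min are normalized placeholders, not material data.
    """
    rho = np.clip(np.asarray(rho, dtype=float), 0.0, 1.0)
    return E_min + rho**p * (E0 - E_min)

# Example: with p = 3, a density of 0.5 contributes only ~12.5% of the solid stiffness
print(simp_modulus(0.5) / simp_modulus(1.0))
```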


Fig. 1. Design domain meshes with main dimensions [mm]

Since large deformations occur, non-linear FEM analyses [29, 30] have been performed, also to include the effects of the changes in the structure's stiffness. Three different load cases have been imposed to simulate the rotary input motion of the electric motor: one load case with the input displacement directed upwards along the Y axis, one directed downwards along the Y axis and one directed laterally along the X axis. Figure 2 depicts the boundary conditions of the three load cases for design domain "a", showing the displacement directions. Cylindrical supports were added on the internal surfaces of the bearings (green colored) so as to allow the rotation around the Z axis and prevent displacements in the X and Y directions. Since a complete crank rotation has a total duration of 0.04 s, each of the three load cases in Fig. 2 has a duration of 0.01 s, representing the ideal splitting of the continuous rotary input motion into two separate and alternating orthogonal linear input motions along the X and Y axes (Fig. 3). The reason for this partitioning is the intent of decoupling the input motion along the X direction from the output wing rotation, so that only the Y-direction input motion is transmitted to the wings.
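As an illustration of this ideal splitting, the sketch below generates the ramped displacement histories of the three load cases over the 0.04 s flapping period. The sinusoidal ramp and the crank length are assumptions made for the example, not data taken from the paper:

```python
import numpy as np

def crank_load_cases(crank_mm=5.0, period_s=0.04, n_steps=20):
    """Ideal splitting of a continuous crank rotation into alternating X / Y linear inputs.

    crank_mm is an assumed crank length (the paper quotes 5 mm or 8 mm depending on
    the layout); each load case covers a quarter (0.01 s) of the 0.04 s flapping period.
    Returns time/displacement arrays for load cases A (up, Y), B (down, Y), C (side, X).
    """
    t = np.linspace(0.0, period_s / 4.0, n_steps)
    ramp = np.sin(2.0 * np.pi * t / period_s)          # goes from 0 to 1 over a quarter turn
    return {
        "A_up_Y":   (t, +crank_mm * ramp),
        "B_down_Y": (t, -crank_mm * ramp),
        "C_side_X": (t, +crank_mm * ramp),
    }

cases = crank_load_cases()
print({name: round(disp[-1], 2) for name, (_, disp) in cases.items()})  # peak displacements
```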

Fig. 2. Boundary conditions applied in the three load cases for design domain “a”: upward input displacement (A), downward input displacement (B), sideward input displacement (C)


Fig. 3. Ideal splitting of the continuous rotary input motion in two separate and alternating orthogonal linear motions in X and Y axis

As depicted in Fig. 4, taking design domain "a" as an example, different topology optimization regions were defined. The maximum displacement allowed along the Z axis was set equal to zero on the two extreme faces of the model parallel to the XY plane (Fig. 4a), in order to prevent buckling effects. Some displacement constraints were also imposed on the two wing areas (Fig. 4b). Their displacement was set equal to zero in load case C (Fig. 2C), in order to decouple the input motion along the X direction from the output wing rotation. In this way, during the crank rotation, only the input displacement in the Y direction is conveyed to the wings, generating the alternating symmetrical rotational output movement required to mimic the flight kinematics of flapping-wing insects.

Fig. 4. Topology regions: Z displacement prevented (a), wings’ output displacement (b), excluded from optimization (c)

This motion decoupling is relevant since symmetry of the flapping angles of the two wings is required during FWMAV flight, so that the two rotating wings ideally form the same instantaneous angle with the X axis during a complete flapping cycle. Since it was not possible to require a given output rotation amplitude through the optimization constraints, the input displacements were gradually increased up to a maximum value of 8 mm and output displacement constraints were imposed on the two wing areas in load cases A and B (Figs. 2A and 2B). The values of the imposed input displacements and of the output displacement constraints are summarized in Table 2.


Table 2. Input and output displacement constraints in each design domain (load cases A and B)

Design domain                                      a                          b                   c                     d
Input displacement [mm]                            2     3     4     5        2     4     5       4     6     8         4     6     8
Output displacement constraint,
Y axis [mm], load case A                          −5   −7.5  −10  −12.5      −5   −10   −16     −10   −15   −17.5     −10   −15   −17.5
Output displacement constraint,
Y axis [mm], load case B                          +5   +7.5  +10  +12.5      +5   +10   +16     +10   +15   +17.5     +10   +15   +17.5

Figure 4c displays the areas excluded from the topology optimization, since they are intended to be the connections of the CM with the wings and the low-friction bearings. A stress constraint was also added, setting the maximum admissible stress equal to 70% of the tensile strength of the material. Lastly, fabrication constraints were also included, such as the symmetry of the optimized topologies about the YZ plane, the additive manufacturing building direction along the Z axis and a minimum allowed geometry size of 2 mm, in order to globally avoid excessively thin regions before the topology remodeling. Topology optimization analyses were carried out for each input displacement value of each design domain, setting the mass minimization as the objective function.
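For clarity, the optimization setup described above can be summarized as a plain configuration structure. The field names below are illustrative only and do not correspond to ANSYS keywords:

```python
# Recap of the topology optimization setup (illustrative names, not ANSYS syntax)
topo_setup = {
    "objective": "minimize mass",
    "constraints": {
        "max_von_mises_stress": "0.7 * tensile_strength",      # 0.7 x 24 MPa for FlexiFil
        "output_displacement_Y": "per design domain / load case, see Table 2",
        "z_displacement_on_extreme_faces": 0.0,                 # buckling prevention (Fig. 4a)
    },
    "fabrication": {
        "symmetry_plane": "YZ",
        "build_direction": "Z",
        "min_member_size_mm": 2.0,
    },
    "excluded_regions": ["wing connections", "bearing seats"],  # Fig. 4c
}
```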

3 Results The topologies resulting from each domain optimization are reported in Fig. 5. Gradually increasing the input displacement, geometrical differences become progressively noticeable within each of the four design domains. Considering these differences and properly combining the optimization results, four final CM layouts were remodeled, also taking into account that the FDM Additive Manufacturing technology imposes a limit of 1 mm on the minimum allowable thickness of the flexible areas. The performances of the four remodeled layouts were investigated through non-linear transient FEM analyses in terms of stresses and wing rotation angles.


Fig. 5. Topology densities results: design domain a, b, c, d with each input displacement

Figure 6 contains an overview of the remodeled layouts a, b, c, d, each one in the configuration of maximum (Figs. 6a1–d1) and minimum (Figs. 6a3–d3) wing rotation angle and with the wings in the central position (Figs. 6a2–d2). It is worth noting that inertial effects have also been taken into account in the numerical simulations. During the meshing process, a minimum of three elements across the smallest thicknesses was provided.


The crank connected to the electric motor (not shown) has a dimension of 5 mm for layout "a" and 8 mm for layouts "b", "c" and "d". The maximum values of the Von Mises stress for the different layouts were compared throughout an entire flapping cycle (Fig. 7). In all the transient analyses, the input displacement produced by the crank rotation was divided into five load steps (marked with different colors in Fig. 8), each of 0.01 s. The first load step occurs only at the beginning of the first flapping cycle and was added in order to make the rotation start from the configuration of maximum wing angle (see Figs. 6a1–d1), namely when the crank reaches the lowest point on the Y axis (as outlined in Fig. 8). A comprehensive comparison of the performances of the four CM layouts is provided in Table 3.

Fig. 6. Von Mises stresses [MPa] of the four optimized Compliant Mechanisms layouts a, b, c, d (each in a column) in three main configurations: maximum wing rotation angle (Figs. a1–d1), minimum wing rotation angle (Figs. a3–d3) and central position wings configuration (Figs. a2–d2)

Inertial effects due to the high-frequency displacement of the CM's mass are evident from Fig. 7, since a wave-like stress trend occurs there. As expected, the highest stress peaks arise near the configurations of maximum rotation angle (as also shown in Figs. 6a1–d1 and 6a3–d3), while significantly lower stresses occur when the wings are in the 0° central position, that is, when the crank reaches the highest point along the X axis.


Fig. 7. Maximum Von Mises Stress evolution during a complete flapping cycle

                                 Layout a   Layout b   Layout c   Layout d
Mesh total elements              21000      46700      25000      45000
Minimum element dimension [mm]   0.3        0.3        0.25       0.2

Fig. 8. Transient analysis: load step details and mesh general information

Table 3. Layouts performance comparison: weight, rotation angle and stress

                                           Layout a   Layout b   Layout c   Layout d
Weight [g]                                 2.56       1.95       2.32       2.10
Weight reduction [%]                       44.7       57.8       49.9       58
Overall wing rotation angle [°]            103        106        83         85
Maximum wings' angle deviation [°]         13         21         28         23
Maximum Von Mises stress [MPa]             23.4       10.3       21         18.6
Stress at maximum rotation angle [MPa]     22.7       9.6        21.6       19.3
Stress at minimum rotation angle [MPa]     22.6       8.6        11.2       15.7
Stress in central position [MPa]           8          2.7        6.9        3.3

The decoupling of the continuous rotary input motion of the electric motor into an alternating symmetrical rotational output movement is substantially achieved.


Although this aspect should be further investigated by adding modal analyses, the overall wing rotation achieved is significant in all layouts, revealing the potential of this approach.

4 Conclusion In this study, topology optimization and low-cost 3D printing technologies have been used to design a compliant mechanism for a Flapping Wing Micro Air Vehicle's wing actuation. Four different design domains were explored by implementing a series of topology optimization analyses, whose results allowed the design of four monolithic compliant mechanism layouts. A critical issue also tackled with the present approach is the design of monolithic CMs able to transform the continuous rotary input motion of an electric motor into an alternating symmetrical rotational output movement, required to mimic the flight kinematics of flapping-wing insects. The outlined results are meaningful since the four designed planar CMs all reach overall wing rotation angles above 80°, do not undergo buckling phenomena despite the uniform thickness of 2.5 mm, and their stress trends stay below the material tensile strength. Furthermore, a weight reduction from a minimum of 44% to a maximum of 58% is achieved, with respect to the corresponding initial design domains. The performance of layout b is particularly noteworthy, both for the lowest stress trend throughout a complete flapping cycle and for the highest overall wing rotation angle (106°). Further research will focus on exploring the effect of different FDM materials on the final performance. These promising results pave the way to a future design improvement of FWMAVs' compliant actuation mechanisms, by deepening fatigue issues, introducing modal analyses inside the topology optimization design process and implementing size and shape optimization, in order to fine-tune the optimized topologies and simplify their manufacturing through low-cost 3D printing technologies.

References 1. Sargent, B., et al.: An origami-based medical support system to mitigate flexible shaft buckling. J. Mech. Robot. 12(4), 1–16 (2020) 2. Chandrasekaran, K., Thondiyath, A.: Design of a two degree-of-freedom compliant tool tip for a handheld powered surgical tool. J. Med. Dev. Trans. ASME 11(1) (2017) 3. Sun, Y., Lueth, T.C.: Cruciate-ligament-inspired compliant joints: application to 3D-printed continuum surgical robots. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4645–4648 (2021). 4. Zeng, W., Yan, J., Yan, K., Huang, X., Wang, X., Cheng, S.S.: Modeling a symmetricallynotched continuum neurosurgical robot with non-constant curvature and superelastic property. IEEE Robot. Autom. Lett. 6(4), 6489–6496 (2021) 5. Wang, K., Wang, D.H., Zhao, J.Y., Hou, S.: A novel piezoelectric-actuated microgripper simultaneously integrated microassembly force, gripping force and jaw-displacement sensors: Design, simulation and experimental investigation. Smart Mater. Struct. 31(1) (2022) 6. Wu, H., Lai, L., Zhang, L., Zhu, L.: A novel compliant XY micro-positioning stage using bridge-type displacement amplifier embedded with Scott-Russell mechanism. Precis. Eng. 73, 284–295 (2022)


7. Li, H., et al.: Design and modeling of a compact compliant stroke amplification mechanism with completely distributed compliance for ground-mounted actuators. Mech. Mach. Theory, 167 (2022) 8. Pehrson, N.A., Bilancia, P., Magleby, S., Howell, L.: Load-displacement characterization in three degrees-of-freedom for general lamina emergent torsion arrays. J. Mech. Des. Trans. ASME 142(9), 1–11 (2020) 9. Zhang, H., Zhang, X., Zhu, B., Wang, R., Li, H.: Design and analysis of corrugated flexurebased lamina emergent spatial joints for symmetrical compliant kaleidocycles. Mech. Mach. Theory 167, 104525 (2022) 10. Zeng, W., Gao, F., Jiang, H., Huang, C., Liu, J., Li, H.: Design and analysis of a compliant variable-diameter mechanism used in variable-diameter wheels for lunar rover. Mech. Mach. Theory 125, 240–258 (2018) 11. Bilancia, P., Baggetta, M., Berselli, G., Bruzzone, L., Fanghella, P.: Design of a bio-inspired contact-aided compliant wrist. Robot. Comput.-Integr. Manuf. 67 (2021) 12. Tang, R., Zheng, J., Wang, S.: Design of novel end-effectors for robot-assisted swab sampling to combat respiratory infectious diseases. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 4757–4760 (2021) 13. Zhang, C., Rossi, C.: Effects of elastic hinges on input torque requirements for a motorized indirect-driven flapping-wing compliant transmission mechanism. IEEE Access 7, 13068– 13077 (2019) 14. Zhang, C., Rossi, C.: A review of compliant transmission mechanisms for bio-inspired flapping-wing micro air vehicles. Bioinspir. Biomim. 12(2) (2017) 15. Stanford, B., Kapania, R., Beran, P.: Conceptual design of compliant mechanisms for flapping wings with topology optimization. AIAA J. 49(4), 855–867 (2011) 16. Stanford, B., Beran, P.: Optimal compliant flapping mechanism topologies with multiple load cases. J. Mech. Des. Trans. ASME 134(5), 1–10 (2012) 17. Stanford, B.: Three-dimensional topologies of compliant flapping mechanisms. J. Aerosp. Eng. 27(4), 06014001 (2014) 18. Chattaraj, N., Ananthasuresh, G.K., Ganguli, R.: Design of a distributed compliant mechanism using spring-lever model and topology optimization for piezoelectrically actuated flapping wings. Mech. Adv. Mater. Struct. 28(2), 118–126 (2021) 19. Marannano, G., Mariotti, G.V.: Structural optimization and experimental analysis of composite material panels for naval use. Meccanica 43(2), 251–262 (2008) 20. Marannano, G., Parrinello, F., Giallanza, A.: Effects of the indentation process on fatigue life of drilled specimens: optimization of the distance between adjacent holes. J. Mech. Sci. Technol. 30(3), 1119–1127 (2016) 21. Marannano, G., Ricotta, V.: Firefly algorithm for structural optimization using ANSYS. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. Lecture Notes in Mechanical Engineering, pp. 593–604. Springer, Cham (2022). https://doi. org/10.1007/978-3-030-91234-5_59 22. Ingrassia, T., Nigrelli, V., Buttitta, R.: A comparison of simplex and simulated annealing for optimization of a new rear underrun protective device. Eng. Comput. 29(3), 345–358 (2013) 23. Giallanza, A., Marannano, G.V., Pasta, A.: Structural optimization of innovative rudder for HSC. In: 17th International Conference on Ships and Shipping Research, NAV (2012) 24. Ricotta, V., Campbell, R.I., Ingrassia, T., Nigrelli, V.: Additively manufactured textiles and parametric modelling by generative algorithms in orthopaedic applications. Rapid Prototyp. J. 
26(5), 827–834 (2020)
25. Carollo, G., Ingrassia, T., Pantano, A.: Design of a low cost 3D printable single-component compliant mechanism for FWMAV's wing actuation. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. Lecture Notes in Mechanical Engineering, pp. 39–49. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_4
26. https://www.formfutura.com/downloads
27. Ingrassia, T., Nigrelli, V.: Design optimization and analysis of a new rear underrun protective device for truck. In: Proceedings of the 8th International Symposium on Tools and Methods of Competitive Engineering, TMCE 2010, vol. 2, pp. 713–725 (2010)
28. Bendsøe, M.P., Sigmund, O.: Topology Optimization—Theory, Methods, and Applications. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-662-05086-6. ISBN 978-3-642-07698-5
29. Ingrassia, T., Mancuso, A.: Virtual prototyping of a new intramedullary nail for tibial fractures. Int. J. Interact. Des. Manuf. 7(3), 159–169 (2013)
30. Restivo, G., Marannano, G., Isaicu, G.A.: Three-dimensional strain analysis of single-lap bolted joints in thick composites using fibre-optic gauges and the finite-element method. J. Strain Anal. Eng. Des. 45(7), 523–534 (2010)

Computer-Aided Prognostics and Health Management Using Incipient Failure Detection

Cecilia Gattino1, Elia Ottonello1, Mario Baggetta1, Jacek Stecki2, Roberto Razzoli1, and Giovanni Berselli1(B)
1 DIME, University of Genova, Via Opera Pia 15/a, 16145 Genova, Italy
{mario.baggetta,roberto.razzoli,giovanni.berselli}@unige.it
2 PHM Technology, North Fitzroy, VIC 3068, Australia
[email protected]

Abstract. One of the main objectives of Industry 4.0 is to build Smart Factories with improved productivity and reduced maintenance times and costs. In this perspective, Prognostics and Health Management (PHM) is a proactive approach to industrial services that adapts maintenance to the health of the system. PHM entails diagnostic and prognostic engineering tools to assess the health of the system and then to choose the most appropriate maintenance actions. The diagnostic tool has to be capable of handling a sizeable volume of data and of determining, by means of processing algorithms, the proper data set needed for the analysis. The software MADe can be used as a helpful utility for engineers; it is a model-based toolkit for Reliability, Availability, Maintainability and Safety (RAMS) analysis, capable of optimizing maintenance activities based on the information it provides about sensor choice and maintenance strategies. In the PHM framework, the detection of incipient failures is a central task in monitoring the health status of systems that include components sensitive to fatigue or aging. In fact, a timely diagnosis allows maintenance to be scheduled so as to reduce the impact on production outcomes. Based on these considerations, the present paper describes a technique for detecting incipient failures in fatigue-sensitive parts by means of an Equivalent Damage Index (EDI) that can be calculated from the signals measured on the real plant. The procedure is validated against other state-of-the-art techniques to prove its accuracy in detecting incipient breakdowns. Keywords: Anomaly detection · Incipient failure · Residual Life Estimation · CBM · PHM · MADe · Industry 4.0 · Smart factory

1 Introduction Industry 4.0 is a novel industrial vision centering on the interoperability of cyber-physical systems and, more generally, on the creation of smart factories. In the process industry in particular, many overhead costs do not produce any added value. These costs are chiefly associated with machinery downtime and safety/quality inspection, and their impact (on management as well) ought to be minimized. In this context, more and more importance is


given to the choice of maintenance actions. For example, Condition Based Maintenance (CBM) is a maintenance approach based on monitoring the health condition of systems and parts, aiming at scheduling appropriate actions at the right time [1]. CBM effectiveness strongly depends on how early an alteration in the system's behavior is detected, because an early discovery allows maintenance activities to be scheduled so as to limit the effect on the production outcome. Such a maintenance strategy calls for a deep system analysis that recognizes potential failures and selects (and places) an appropriate set of sensors for system monitoring, leading to significant expenses. Nevertheless, the cost reduction resulting from the minimization of downtime and failures, together with improved maintenance management, makes CBM suitable for a growing number of sectors and production areas. The workflow for effectively using CBM is shown in Fig. 1 and involves, in the design stage, a modeling of system failures (to guide sensor placement) and, in the operating stage, the collection and processing of sensor data in order to select the best maintenance policy.

Fig. 1. Workflow of Condition Based Maintenance.

The design phase, shown in Fig. 1, represents the stage where the plant designer decides how to monitor the system (by exploiting a set of sensors distributed within the system itself). This sensor set might be improved and/or revised after the plant set-up (redesign phase). In the proposed context, sensor placement is based on the analysis of the plant through its functional model (model creation). Taking advantage of a functional model, the designer can produce the failure diagram, i.e., a diagram for each component that summarizes all the main failures which may affect it. For this purpose, the software MADe (Maintenance Aware Design environment) by PHM Technology is particularly appropriate [2]. The software offers several modules that allow users to build functional models, perform risk assessments, compare various maintenance plans, and carry out a sensor analysis to ensure adequate handling of breakdowns. In the design phase, as the software employs functional model-based block diagrams, it allows critical parts to be identified by means of different criticality analyses and transferred to a CAD environment in the preliminary stage of the plant design. Each item performing a function is liable to fail, so it is essential to assign a failure diagram to every part and sub-system. A failure diagram, a basic example of which is shown in Fig. 2.a, describes any fault that can lead to a functional failure, sketching out the possible origins and mechanisms of failure. Afterwards, MADe runs a model-driven sensor position analysis and unambiguously highlights the critical parts in the CAD environment, as shown in Fig. 2.b. MADe also has a customizable archive containing various types of sensors, characterized by technical, dimensional, and economical features, as


Fig. 2. (a) Example of failure diagram for contact fatigue mechanism (b) Critical parts clearly highlighted in CAD tool, after MADe analysis.

well. The technical description of a sensor is associated with the PoD (Probability of Detection) of the true positives related to the anomalies, specifically the ratio of detected true positives to all the warnings. A sensor may be placed on a flow or on a malfunction symptom, e.g. the vibration of a rotating part. The software generates the logical diagnostic rules able to pinpoint a failure by means of the available sensor set and the propagation table. Lastly, MADe offers a tool allowing users to compare different maintenance policies, such as scheduled maintenance or Condition Based Maintenance, via a Maintenance Cost Estimation. As an add-on to the current MADe environment, a detailed description of which can be found in [3], the present work focuses on the recognition of incipient failures in parts subject to fatigue cycles or aging via the identification of the so-called Equivalent Damage Index.

2 Basic Background Fatigue is a damage process which occurs when a material is exposed to periodic load conditions, usually modeled as the superposition of different repeated stress conditions. Fatigue phenomena, which cause around 90% of all mechanical failures [4], are commonly divided into: i) High Cycle Fatigue (HCF), when the deformation is small enough for the phenomenon to be essentially described in elastic terms, typically above 1000 cycles; ii) Low Cycle Fatigue (LCF), when the inelastic portion of the strain in specific regions of the material is a significant fraction of the total one. In HCF, the failure is progressive and develops across three phases: i) Crack Nucleation, typically the longest stage, whose duration depends on geometric factors (e.g. roughness or notches) and physical factors (e.g. surface hardness); ii) Crack Propagation, when the crack grows due to the notch-sharpening effect and, hence, the local stress increases; iii) Final Failure, when a brittle fracture occurs because the remaining cross-section has become too small to withstand the applied load. In the second phase, the crack alternately advances and arrests, depending on the stress state, creating the distinctive beach marks on the fractured surfaces. It is quite easy to


recognize the signs of this failure, as the cross-section is significantly reduced. The fracture surface is very different from that of a standard brittle rupture, as it displays two distinct regions: the first is a polished zone showing the beach marks left by the crack growth during the second phase, while the second appears irregular and bright because of the brittle break in the last phase. As mentioned above, this paper considers a methodology, based on Residual Life Estimation, involving an analysis of the whole load history of a part subjected to HCF so as to predict its failure.

3 Data Processing To properly build a diagnostic tool, data processing is a prerequisite; it involves the data collection step (see Fig. 1), which has to be performed on the recorded time history of the experimental measurements of a part that may fail as a consequence of HCF. Data processing algorithms operate on a collection of data to reduce the information complexity and consolidate a data set for the diagnostic tool. In particular, two data processing methods have been examined for early failure detection: the STatistical Analog Monitor (STAM) [5] and the Rainflow method [6]. The latter has been preferred as it is well suited to fatigue analysis and provides high-quality resolution. In practice, the Rainflow method is a counting algorithm used for fatigue analysis: it converts a sequence of variable stress into an equivalent set of constant-amplitude stress reversals. The Rainflow procedure was presented in 1968 by Matsuishi and Endo and was a pioneering technique for extracting closed loading cycles. The name "Rainflow", also known as the pagoda roof method, comes from comparing the counting process to rain falling on a pagoda's roof and dripping downward.

Table 1. Example of simple rainflow processed data, from Lee and Tjhung [7].

Amplitude   Average   Cycles
9           −0.5      1
4           −1        1
7           −0.5      1
3           0.5       1

This data processing technique involves filtering the signal into a succession of periodic waves described by the reversals, the mean value, and the amplitude. In fatigue analysis, these reversals correspond to the material's hysteresis cycles. Starting from the main random signal, that is, the portion of the signal representing the behavior of the part, the method identifies the peaks and valleys marking the limits of the half-cycles completed by the part; next, it counts all the complete cycles, or even the half-cycles, producing a record (analogous to the example illustrated in Table 1) from the raw signal data given in Fig. 3. Generally, for real signals, the mean and amplitude ranges are split into sections so as to include all the reversals. The procedure leads to the histogram in Fig. 4 and simplifies the following stage by grouping the cycles into


Fig. 3. (a) Example of loads for the Rainflow method (b) Counting technique [7].

Fig. 4. Example of histogram for Rainflow method.

certain ranges [8]. The Rainflow technique is cycle-based and delivers high-quality resolution, even for very small amplitude oscillations, but it loses information on steady load conditions. For this reason, the method is widely adopted for fatigue analysis and for vibration sensors whenever the required information concerns the reversals.
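For reference, the following minimal sketch implements a three-point rainflow counter in the spirit of the method described above (ASTM E1049-style rules, with residual reversals counted as half-cycles). It is a generic illustration, not the authors' code:

```python
def turning_points(signal):
    """Reduce a sampled signal to its sequence of local peaks and valleys (reversals)."""
    tps = [signal[0]]
    for x in signal[1:]:
        if x == tps[-1]:
            continue
        if len(tps) >= 2 and (tps[-1] - tps[-2]) * (x - tps[-1]) > 0:
            tps[-1] = x          # still moving in the same direction: extend the excursion
        else:
            tps.append(x)
    return tps

def rainflow(signal):
    """Three-point rainflow counting, returning (amplitude, mean, count) tuples
    with count = 1.0 for closed cycles and 0.5 for residual half-cycles."""
    cycles, stack = [], []
    for point in turning_points(signal):
        stack.append(point)
        while len(stack) >= 3:
            x1, x2, x3 = stack[-3], stack[-2], stack[-1]
            r1, r2 = abs(x2 - x1), abs(x3 - x2)
            if r1 > r2:
                break                                   # the inner range is not closed yet
            if len(stack) == 3:
                cycles.append((r1 / 2.0, (x1 + x2) / 2.0, 0.5))
                stack.pop(0)                            # half cycle involving the start point
            else:
                cycles.append((r1 / 2.0, (x1 + x2) / 2.0, 1.0))
                del stack[-3:-1]                        # drop the two points of the closed cycle
    for a, b in zip(stack, stack[1:]):                  # leftover reversals -> half cycles
        cycles.append((abs(b - a) / 2.0, (a + b) / 2.0, 0.5))
    return cycles

# Toy check on a short load history (values are illustrative only)
print(rainflow([-2, 1, -3, 5, -1, 3, -4, 4, -2]))
```

Its output of (amplitude, mean, count) tuples is exactly the kind of record summarized in Table 1 and can feed the damage accumulation described in the next section.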

4 Residual Life Estimation Residual Life Estimation is a method that allows a failure to be predicted in a part subjected to fatigue and aims at building a Cumulative Damage Index (CDI), i.e., a feature that describes the part's consumed life. The CDI ranges from 0 to 1, where 0 indicates a fresh component while 1 denotes the onset of failure. This technique is classically adopted when data come from force sensors, although common and simple sensors, such as vibration sensors, can be employed to construct an indirect measure that is later associated with the related load. In any case, the method is powerful, but it requires the identification of many details about the part, making it highly demanding for standard on-field monitoring.


Fig. 5. Wöhler curve.

The procedure to work out the estimation comprises several steps. Initially, it is necessary to characterize the part and sketch its Wöhler diagram, i.e., a logarithmic curve with three distinctive regions, as in Fig. 5. The first region, nearly horizontal, involves LCF, which appears with sizeable plastic strain and should preferably be prevented in the design stage; the third region, again virtually horizontal, relates to infinite fatigue life; the middle region, which is of interest for the present investigation, concerns HCF. The Wöhler diagram of a generic part can be built, and later suitably exploited, by knowing:
• Employed material. The values of the tensile strength, σr, and of the fatigue strength, σD, are essential.
• Kind of stress and configuration. It is necessary to know what kind of load is involved and the constraint configuration, in order to link the strain signal to the actual load. Moreover, knowledge of the load shape allows a correlation to be adopted between the actual stress and the test stress.
• Additional information on the part. In a real part, three parameters may reduce the fatigue strength σD of the material, i.e., the notch factor, the size factor and the surface factor. The following formula applies:

σ′D = (Ks · KD / KF) · σD    (1)

where σ′D is the corrected fatigue strength (σ′D < σD), KF is the notch factor, KD is the size factor and Ks is the surface factor. σr and σ′D can be exploited to describe the Wöhler diagram of the part, as illustrated in Fig. 5. Later, after the data processing described above, and once a (possible) strain signal has been transformed into a stress value, it is necessary to calculate an equivalent stress amplitude:

σaeq = σa / (1 − σm/σr)    (2)

where σaeq is the equivalent stress amplitude, σa is the stress amplitude, σm is the mean stress and σr is the tensile strength.


Finally, the Palmgren–Miner rule [10] is adopted to sum the contributions of all the cyclic signal batches accumulated by the Rainflow method, so as to finally compute the CDI:

CDI = Σi=1..k (ni / Ni)    (3)

where k is the number of batches, ni is the number of reversals of the i-th batch and Ni is the number of cycles to failure of the i-th batch according to the Wöhler curve. The advantage of the method is the possibility of estimating the fatigue damage accumulated by the component from the data given by the sensors; obviously, given the large amount of data required, it is quite hard to obtain a trustworthy CDI without a high consumption of resources. Thus, a complementary approach is needed, namely the Incipient Fault Identification technique described hereafter, which compares different operating behaviors of the same part while avoiding all the aforementioned hypotheses.
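As a worked illustration of Eqs. 1–3, the sketch below combines the mean-stress correction of Eq. 2 with a log-log (Basquin-type) interpolation of the HCF branch of the Wöhler curve and the Palmgren–Miner accumulation. The material values and the curve anchor points are placeholders, not data from the paper:

```python
import math

def cycles_to_failure(sigma_aeq, sigma_r=400.0, sigma_d=180.0, n_lcf=1.0e3, n_knee=1.0e6):
    """Cycles to failure from a simplified HCF branch of a Wöhler curve.

    The branch is a straight line in log-log coordinates between (n_lcf, sigma_r)
    and (n_knee, sigma_d); all numeric defaults are placeholder values.
    """
    if sigma_aeq <= sigma_d:
        return math.inf                       # below the fatigue limit: infinite life
    if sigma_aeq >= sigma_r:
        return n_lcf
    b = math.log(sigma_d / sigma_r) / math.log(n_knee / n_lcf)   # slope in log-log axes
    return n_lcf * (sigma_aeq / sigma_r) ** (1.0 / b)

def cumulative_damage(cycles, sigma_r=400.0, **sn_kwargs):
    """Palmgren-Miner accumulation (Eq. 3) over rainflow output.

    cycles: iterable of (amplitude, mean, count) tuples, e.g. from a rainflow counter.
    The mean-stress effect is handled as in Eq. 2: sigma_aeq = sigma_a / (1 - sigma_m / sigma_r).
    """
    cdi = 0.0
    for sigma_a, sigma_m, count in cycles:
        sigma_aeq = sigma_a / (1.0 - sigma_m / sigma_r)
        cdi += count / cycles_to_failure(sigma_aeq, sigma_r=sigma_r, **sn_kwargs)
    return cdi                                # failure is predicted when CDI reaches 1

# Toy usage with made-up stress cycles [MPa]
print(cumulative_damage([(200.0, 50.0, 1.0), (150.0, 0.0, 1000.0)]))
```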

5 Incipient Fault Identification This section describes a method for the detection of incipient failures in parts undergoing fatigue. A feature of the signal named Equivalent Damage Index (EDI) is employed to compare various operating behaviors in order to detect an early failure or an abnormal condition. This procedure is similar to the previously discussed one; however, since it is based on the relationships between different operating reversals, it does not require all the data and assumptions needed for the Residual Life Estimation technique. 5.1 Data Set The procedure has been applied to a test rig for investigating the fatigue behavior of ball bearings. The data set was produced by the IMS Center for Intelligent Maintenance Systems [9] and concerns bearing run-to-failure tests executed under normal load conditions on a specifically designed test rig. As illustrated in Fig. 6, the test

Fig. 6. Bearing test rig and sensor positioning [9].


rig comprises four bearings equipped with two high-sensitivity accelerometers, mounted on a shaft which is rotated at a constant 2000 RPM by an AC motor. A radial force of 6000 lbs is applied to the shaft through the two middle bearings by means of a spring. Data acquisition is performed by a 4-channel DAQCard-6062E, one channel for each bearing, with a sampling rate of 20 kHz for 20,480 points acquired at 10-min intervals until failure occurs. The data set contains 984 one-second batches, one every 10 min, allowing the whole life of the bearings, which is about a week, to be traced. 5.2 Build-Up of the Equivalent Damage Index The procedure to obtain an Equivalent Damage Index derives from the Rainflow technique; it requires the construction of a logarithmic diagram with the same shape as the Wöhler curve, in which the representative points are r rather than σr and d rather than σ′D. Values for r and d have been chosen so that the analysis can be carried out with adequate results. Nevertheless, the dependence of the final results on these parameters is discussed in Sect. 5.3; the following values are selected for r and d in terms of, respectively, amplitude and number of reversals: r = {10²; 10³}, d = {10; 10⁶}. Similarly to Eq. 2, the equivalent amplitude, aeq, for every group of data obtained via the Rainflow technique is:

aeq = a / (1 − m/ra)    (4)

where aeq is the equivalent amplitude, a is the amplitude of the cyclic wave, m is the mean value of the cyclic wave and ra is the amplitude of point r, fixed to 10. By means of a logarithmic graph (comparable to the Wöhler diagram) and the equivalent amplitude aeq, the maximum number of cycles Ni at failure can be found and used within the Palmgren–Miner rule (as in Eq. 3). Lastly, the calculated EDI is normalized on the initial 100 batches, so as to obtain comparable and noise-free outcomes. 5.3 Results The results are obtained by analyzing the x-axis of channel 1, i.e., the channel associated with a faulty bearing. The result is compared with the results available from a straightforward collection of the maximum value of every batch. The computation time for performing the EDI analysis, although 33% higher, has to be deemed acceptable, since 984 batches, equivalent to 7 days, have been processed in around two minutes. The result charts omit the initial 400 batches so as to zoom in on the emerging failure. The first comparison concerns bearing #1, i.e. the damaged one. The data examined by the maximum value routine are displayed in Fig. 7.a, in which the emerging flaw starting from batch 700 and the breakdown beginning approximately at batch 900 are evident. With regard to the EDI procedure, the outcomes are shown in Fig. 7.b, in which three behaviors are clearly highlighted:


• Normal behavior: characterized by an EDI approximately equal to 1;
• Incipient failure: around batch 533 the EDI rises by 50% within two batches and reaches about 20 starting from batch 700;
• Breakdown: in this phase the EDI grows up to 1900 from batch 900 onward.

An additional significant aspect is portrayed in Fig. 9, in which channel #4, related to a healthy bearing, confirms the influence of the early flaw detected on channel #1. This is helpful, since the anomaly identified on channel #4 can validate the measurement made by the sensor assigned to channel #1, thus excluding false positives and considerably enhancing the sensors' reliability. Because this method is comparative, some parameters have been chosen in an arbitrary way; consequently, the sensitivity of the method is assessed with respect to these parameters:
• the value ra in Eq. 4;
• the amplitude of r and d in the EDI-related Wöhler-like diagram.

Regarding the sensitivity analysis on ra, the values 10, 50 and 100 have been tried. Table 2 displays the outcomes in terms of the percentage difference of the EDI values: totals and subtotals in both the Incipient fault and the Failure segments. The outcomes reveal virtually no sensitivity in the incipient-failure region and a greater, yet still satisfactory, sensitivity in the breakdown region. The following sensitivity analysis involves the amplitude of r. This analysis brings to light a strong impact of r on the EDI estimate, with a change close to 50%. Nonetheless, as presented in Fig. 10, using only one of the parameters does not affect the effectiveness of the method, which still identifies the incipient failure around batch 533. It is worth noticing that decreasing r increases the EDI, but also implies a larger noise. A similar conclusion applies to the third sensitivity analysis, concerning the amplitude of d: analogously to the previous analysis, the EDI strongly depends on d (i.e., increasing d too much leads to an extreme noise surge), while not affecting the ability to identify the failure at batch 533 (Fig. 8).

Table 2. Sensitivity of the method to ra: average difference (%)

ra value   Total   Incipient   Failure
50         0.16    1.26        0.17
100        0.19    1.41        0.20
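To make the procedure of Sect. 5.2 concrete, the sketch below computes a per-batch damage value against an arbitrary log-log reference line and normalizes it on the first 100 batches, as described above. The reference-point values are only guesses at the paper's settings and are therefore exposed as parameters:

```python
import math

def edi_series(batches, ra=10.0, ref_r=(1.0e2, 10.0), ref_d=(1.0e6, 1.0), n_baseline=100):
    """Sketch of the EDI computation of Sect. 5.2 (all numeric defaults are placeholders).

    batches: one rainflow result per acquisition batch, each a list of
             (amplitude, mean, count) tuples.
    ra:      amplitude of the reference point r used in Eq. 4 (assumes |mean| < ra).
    ref_r, ref_d: (reversals, amplitude) points defining the arbitrary log-log
             reference line that plays the role of the Wöhler curve.
    """
    (n_r, a_r), (n_d, a_d) = ref_r, ref_d
    slope = math.log(a_d / a_r) / math.log(n_d / n_r)      # slope of the reference line

    def reversals_to_failure(aeq):
        return math.inf if aeq <= 0.0 else n_r * (aeq / a_r) ** (1.0 / slope)

    damage = []
    for cycles in batches:
        total = 0.0
        for a, m, count in cycles:
            aeq = a / (1.0 - m / ra)                       # Eq. 4
            total += count / reversals_to_failure(aeq)
        damage.append(total)

    # Normalize on the first n_baseline batches so that a healthy signal gives EDI ~ 1
    baseline = sum(damage[:n_baseline]) / max(1, min(n_baseline, len(damage)))
    baseline = baseline if baseline > 0.0 else 1.0
    return [v / baseline for v in damage]
```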


Fig. 7. (a) Plot of maximum value for first channel (b) Plot of EDI of first channel.

Fig. 8. Detail of the impact of incipient failure on a healthy channel.

Fig. 9. Sensitivity of the method to the parameter r.

6 Conclusion Early failure detection has been verified by comparing the outcomes of the Equivalent Damage Index (EDI) with those given by a different statistical method, i.e., the maximum value technique, proving that an EDI-based analysis allows an incipient failure to be found earlier than with the cited statistical feature. Furthermore, the method has demonstrated a marginal sensitivity, at least in terms of functionality, to the a priori set parameters. Even though its performance is preserved, the EDI and the related noise do depend on the factors defining the logarithmic reference curve. By using the EDI and comparing signals collected from sensors installed in parallel in systems having redundant parts, it is feasible to confirm the failure signal, examine its consequences and accordingly improve the sensor reliability. As future developments, the following activities will be carried out: i) comparative evaluation of the EDI against other state-of-the-art indexes beyond the Rainflow method [11]; ii) trials on other data series, already available within the MADe tool, in order to confirm its effectiveness.

References 1. Wee, D., et al.: Industry 4.0 - how to navigate digitization of the manufacturing sector. McKinsey Co. 58, 7–11 (2015) 2. Lindsey, N.J., Mahdi, A., Gallo, D.: Reliability analysis of complex NASA systems with model-based engineering. In: IEEE Annual Reliability and Maintainability Symposium (RAMS) (2020) 3. Hess, A., Stecki, J., Rudov-Clark, D.: The maintenance aware design environment: development of an aerospace PHM software tool. In: Proceedings of the PHM 2008 (2008) 4. Campbell, F.C.: Elements of Metallurgy and Engineering Alloys. ASM International, Novelty (2008) 5. Decker, R. L.: The mark V STAM-a microcomputer for acquiring field data. SAE Trans. 88, 1915–1926 (1979) 6. Matsuishi, M., Endo, T.: Fatigue of metals subjected to varying stress. Jpn. Soc. Mech. Eng. Fukuoka Jpn 68(2), 37–40 (1968) 7. Lee, Y., and Tjhung, T.: Rainflow cycle counting techniques. In: Metal Fatigue Analysis Handbook: Practical Problem-Solving Techniques for Computer-Aided Engineering, p. 89 (2011) 8. Nieslony, A.: Rainflow counting algorithm. MATLAB Central File Exchange (2010). Accessed July 2020 9. Qiu, H., et al.: Wavelet filter-based weak signature detection method and its application on rolling element bearing prognostics. J. Sound Vib. 289(4–5), 1066–1090 (2006) 10. Juvinall, R. C., Marshek, K.: Fundamentals of Machine Component Design. Wiley, Hoboken (2020) 11. Cosenza, E., Manfredi, G.: Damage indices and damage measures. Prog. Struct. Mat. Eng. 1(2), 50–59 (2000)

Geometric Modelling and Analysis

On the Adoption of the STEP Neutral Format for Procedural Mechanical CAD Models

Pedro Company1, Jorge D. Camba2(B), Stanislao Patalano3, and Antonio Lanzotti3
1 Universitat Jaume I, Castellón de la Plana, Castellon, Spain
[email protected]
2 Purdue University, West Lafayette, IN 47907, USA
[email protected]
3 Università degli Studi di Napoli Federico II, Naples, Italy
{patalano,antlanzo}@unina.it

Abstract. In the Model-Based Enterprise (MBE) paradigm, digital product models are the primary source of information to support the design, analysis, and manufacturing of products. The master geometry of these product models is generally built procedurally in a CAD system, typically as history-based parametric geometry. However, some model conversion and exchange processes imply losing procedural information (i.e. the internal structure that represents how the model was built). In this position paper, we review the state of the art in procedural CAD model exchange and discuss an approach for conveying procedural information during model exchange. Although not available in practice, the approach is currently theoretically feasible, as the exchange of procedural information is already supported in modern neutral formats. Finally, we develop our position by advocating for defining Conformance Classes to facilitate the practical development of the ISO 10303 AP 242 standard, thus enabling official implementations that can provide full exchange capabilities for procedural CAD models. Keywords: Procedural CAD models · Neutral formats · STEP

1 Introduction In a Model-Based Enterprise (MBE), annotated digital product models are used as the authoritative source of information for all the activities throughout the product's lifecycle. The paradigm is commonly used in relationship to products with a strong mechanical basis, for which mechanical CAD applications (MCAD), usually parametric and history-based, are used [1]. Parametric geometry enables model reusability, facilitates geometric changes and variability, and provides a mechanism to embed design intent information within the geometric model [2]. Despite these advantages, explicit B-Rep models are the common output in many processes such as reverse engineering geometric shapes [3] and 3D data exchange [4]. Reconstructing procedural information from explicit CAD models is still an open problem [5]. The persistent naming problem is a well-known example of some of the


challenges involved in the process [6, 7]. Although some solutions have been proposed, both academically and commercially [8–10], and current technology can already generate basic procedural models from explicit geometry [11], the reconstruction is in many cases imprecise. Reconstruction technologies such as the so-called "smart translators" provide analysis tools to generate a modeling procedure that is compatible with the original "dumb" geometry. The resulting model tree is viable, but usually different from the original. Most quality aspects [2, 12] of the original model get lost, as the output history-based model does not guarantee consistency, conciseness, or clarity. Additionally, most of the original design intent of the part will also be missed. In this position paper, we contend that the use of neutral formats for procedural CAD models, such as STEP, seems to be the obvious solution to this type of interoperability problem. We address the question of why, several years after its publication, there are still no commercial implementations of AP 242 of the ISO 10303 standard (STEP). We argue our position that the excessive complexity of the standard discourages full implementations, while partial implementations are not allowed. We advocate for defining Conformance Classes that allow the practical development of AP 242, aimed at enabling the full exchange of procedural CAD models. The paper is organized as follows: first, we review representation formats and the approaches used to exchange information between them and revisit the main features of the ISO 10303 standard (STEP). Next, we discuss a simple experiment that demonstrates that reconstructed CAD models lack the required quality to enable reusability in an MBE context. We conclude by summarizing our position in favor of defining Conformance Classes that ease the practical development of the ISO 10303 AP 242.

2 Representation Formats Representation formats are data structures used to store and process the data related to building, manipulating, and sharing CAD models. From a geometric modeling standpoint, CAD model representations can have low semantic level (if they do not support editing or information that conveys design intent) or high semantic level (if they allow editing and contain information that conveys design intent). In procedural models, design intent is conveyed through features and the modeling sequence (represented as a tree structure). Procedural models are built by defining spatial and topological relationships between geometric elements called parent–child relationships. Alternatively, models without history are called explicit (or "dumb") because they convey geometric information but have lost the design intent and the ability to be edited through the manipulation of the original features and modeling tree [4]. Exchanging procedural models is challenging, as the entire modeling sequence, high-level feature information, and the model's design intent must be converted from the source model to the target representation. Converting an explicit model to a procedural model is particularly difficult, as this information is not available in the source model. In this regard, both direct and neutral translators have limitations that produce incomplete and inaccurate translations. An alternative strategy involves the use of "smart" translators which first guarantee the accurate conversion of the explicit geometry, and subsequently supplement it with


the recognition of high semantic level information. These types of translators work in three stages:

• Translate the “dumb” geometry of the original model.
• Fix possible geometric errors in the target format.
• Parse the target model to recognize the modeling operations (features) and create a procedure (model tree).

These translators, however, are inherently limited, as the history of the final model is generally not the same as the original, and thus the design intent is lost. Nevertheless, a model tree is generated, which enables the editing of model features. In Sect. 4, we discuss an experiment that shows how smart translators fail to replicate the original procedure of the model.

3 The STEP Format

In our view, neutral formats such as STEP provide an effective alternative for exchanging procedural CAD models. However, despite three consecutive Application Protocols for 3D CAD models, there are no complete implementations of AP 242 of the ISO 10303 standard (STEP) [13]. Instead, commercial implementations have focused on the perspective of model-based definition, including geometric dimensioning and tolerancing [14]. We speculate that until the advent of Model-Based Enterprises there has not been a strong interest in these developments, because procedural models often contain valuable know-how design information that organizations are reluctant to share, as some studies have confirmed [15, 16]. The problem could arguably be solved by providing control mechanisms to include or exclude the procedural information on demand during the translation process. An additional implementation challenge of the STEP protocol is related to the absence of partial conformance classes, which forces the implementation of the full application protocol. STEP files are complex to process, and developers must adhere to an implementation method that is extensive, strict, and includes many variants.

3.1 STEP Protocols for 3D CAD

The protocols for 3D models commonly implemented as neutral formats in commercial 3D CAD applications are AP 203, AP 214, and AP 242. The AP 242 protocol is the most modern. It encompasses the previous two and adds support for the following:

• 2D parametric sections (explicit geometry supporting different types of geometric, numerical, and algebraic constraints).
• History-based modeling information (sequence of modeling operations used to build a model, and operators to create, query, and edit geometric entities).
• 3D parametric assemblies (including assembly parametric constraints between assembly components).


• Annotated models.

According to the Computer-Aided-“x” (Design, Manufacturing, Inspection) Implementor Forum (CAx-IF) [17], there are currently no operational implementations of the entire AP 242 protocol. There are, however, practical implementations that focus mainly on annotated models, such as the Quality Information Framework (QIF), a neutral format aimed at promoting the exchange of product manufacturing information (PMI) [18]. As an additional example, since 2017 SolidWorks' MBD module has implemented a command to write STEP “242” files that produces B-Rep models with PMI. Arguably, if conformance classes allowed official partial implementations, a 242 implementation limited to providing neutral storage for procedural CAD models could likewise be developed.

3.2 STEP Complexity

STEP is a set of international standards built around an architecture of separately published parts, developed at different times. Only high-level parts completely cover a domain or area. In this paper, we are interested in the section of AP 242 that encompasses the entire three-dimensional design of parts and assemblies. Mid-level parts link lower-level parts to each other and specialize them. For example, part AP 203 is built from part AM 403 and specializes in CAD models for the aeronautical industry. The low-level parts are wrappers for basic concepts that are used repeatedly. They are defined in Application Interpreted Constructs (AIC) or Integrated Generic Resources (IGR). An incomplete but representative illustration of this hierarchical structure is shown in Fig. 1. The relationships between the different elements are not strictly hierarchical, as there are many mutual dependencies, usually complex.

Fig. 1. Example of the hierarchical structure of STEP elements

It is important to note that there are different levels of implementation, or conformance classes (CC), associated with each AP. Conformance classes are subsets of an AP that can be implemented “meaningfully” within that application domain without having to implement all aspects of the AP. From a practical standpoint, CAD users must know which AP conformance classes are implemented in each particular CAD application (see Fig. 2).


Fig. 2. SolidWorks 2022 help page indicating the supported STEP conformance classes

It is not accurate to claim that a provider has a STEP or AP 2xx translator, because most vendors only implement a subset. More precisely, Conformance Classes (CCs) specify selected groups of entities (different subsets of the total AP content) that the software must fully implement. Conforming to a specific CC means that the implementation must support all the entities grouped within that CC.
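To make the discussion more concrete, the short sketch below (our own illustration, not part of the standard or of any vendor tool) scans an exported ISO 10303-21 file and tallies the entity types it instantiates; comparing such a tally against the entity groups of a given conformance class is one simple way to see which subset of an AP an exported file actually exercises. The file name is hypothetical, and complex multi-type instances are ignored by the crude pattern used here.

import re
from collections import Counter

# Hypothetical ISO 10303-21 (STEP) file exported from a CAD system
with open("part.step", encoding="utf-8", errors="ignore") as f:
    text = f.read()

# Simple instances in the DATA section look like "#123 = ADVANCED_FACE(...);"
entity_names = re.findall(r"#\d+\s*=\s*([A-Z0-9_]+)\s*\(", text)

for name, count in Counter(entity_names).most_common(10):
    print(f"{name:40s} {count}")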

4 Experimental Evaluation of the Quality of Reconstructed CAD Models

As part of this paper, we conducted a study to demonstrate that current CAD systems can produce procedural models from explicit CAD models but fail to recover the original design intent, as well as other quality criteria. We used three widely used professional 3D CAD software systems (SolidWorks 2021, CATIA v5, and Autodesk Inventor Professional 2021) to model the simple part shown in Fig. 3. Other CAD systems, most notably Siemens NX, were excluded because we were unable to identify feature recognition capabilities for B-Rep models. We selected a simple example in an attempt to demonstrate that the original design intent that is lost when a procedural model becomes a dumb model is nearly impossible to restore, even in the simplest cases. The part in our experiment was modeled according to the following strategy: 1) extrude the base from a rectangle sketched on the horizontal reference plane, 2) extrude a cylinder from a circle sketched on a datum plane defined on top of the base, and 3) extrude a hole from a circle sketched on a datum plane defined on top of the cylinder. The modeling sequence is illustrated in Fig. 3.

Fig. 3. Hand drawn axonometric sketch of the part used in our study (left) and modeling sequence of the procedural CAD model (right): extrusion of initial sketch (1), extrusion of cylindrical feature from circle (2), and extrusion of hole (3).
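For illustration, the three-step modeling sequence above can also be scripted. The following sketch uses the open-source CadQuery library (our choice of tool, not the CAD systems used in the study); the three heights match those mentioned later in the experiment, while all other dimensions are assumptions.

import cadquery as cq

base_h, cyl_h, hole_len = 25.0, 35.0, 60.0              # heights used in the example (mm)
base_w, base_d, cyl_d, hole_d = 80.0, 60.0, 40.0, 12.0  # assumed sizes, not given in the paper

part = (
    cq.Workplane("XY")
    .rect(base_w, base_d).extrude(base_h)       # 1) extrude the base from a rectangle
    .faces(">Z").workplane()
    .circle(cyl_d / 2.0).extrude(cyl_h)         # 2) extrude the cylinder on top of the base
    .faces(">Z").workplane()
    .circle(hole_d / 2.0).cutBlind(-hole_len)   # 3) cut the hole downward from the cylinder top
)

# Exporting to STEP keeps only the explicit B-Rep: the three-feature history is lost
cq.exporters.export(part, "test_part.step")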

The explicit geometry of the model was then saved as a STEP file. Next, we reopened the model and used the available tools in each system to reconstruct the corresponding


procedural CAD model. The reconstructed models were evaluated based on the six dimensions of CAD quality described by Company et al. [19]. Our results are summarized in Table 1.

Table 1. Comparing the quality of reconstructed CAD models (CAD systems: SolidWorks FeatureWorks, Inventor Feature Recognition, and CATIA PartDesign feature recognition).

Validity: In some cases the imported model was valid, although information about internal repairs was not provided; in others, the imported model contained errors, which were resolved using the Import Diagnostic tool, after which the final model was valid.

Completeness: Arguably not complete. The size of the imported model was not controlled by explicit restrictions or dimensions.

Consistency: Not consistent. In some cases the imported model did not keep alignment with the global reference system and contained under-constrained sketches; in others, it kept alignment with the global reference system but still contained under-constrained sketches.

Conciseness: In some cases the imported model was less concise than the original because it added intermediate datums; in others, the imported model was concise.

Clarity: Not clear. The original model's feature labels were lost in the imported model.

Design intent: The original design intent of the model was altered. The design tree of the imported model was different from the design tree of the original model.

Based on the results of our tests, the loss of CAD quality is clear in all cases. First, the imported model often contains small geometric errors in the imported B-Rep. Many of these errors are a consequence of the different strategies used by each CAD system to handle the geometric incoherencies that arise from imprecisions in the numerical calculations. Reconstruction software often runs a “repair” or “diagnose” process in which the B-Rep model is recalculated to fit the calculation procedures of the destination CAD system.


Next, the user must specify the reconstruction settings to search for or ignore different types of features (form features like extrusions, manufacturing features like drilled holes, etc.). The catalog is limited, and the final result is highly influenced by this tuning. We observe that the output model is complete in all cases, as it matches the shape and size of the original model (although some small errors may persist [12]). However, the models are not consistent, since the profiles are no longer fully constrained. They are sometimes not concise either, since more than three modeling features are occasionally generated. Regarding the clarity of the output model, all the labels for the items in the original model tree are missing, as shown in Fig. 4.

Fig. 4. Feature tree of the original CATIA model (left) and tree reconstructed from the corresponding STEP file (right).

Finally, the design intent is generally not maintained, as the original features (i.e., three extrusions) are commonly replaced by similar features (e.g., revolutions). Feature recognition tools implement a hierarchy to organize features based on their semantic level. For example, CATIA automatically identifies the third feature of the model as a “hole” instead of an extruded cut. This functionality is generally desirable, as features with a higher semantic level are usually preferred over features with a low semantic level, but such replacements certainly affect how design intent is conveyed. Similarly, the hole is represented from the bottom to the top of the part, instead of from top to bottom as in the original model. In addition, implicit datums are sometimes replaced by explicit ones, and vice versa. The original dimensions are often replaced by others, thus producing an undesired transfer of dimensions, which would dramatically affect any subsequent GD&T strategy. We also note that the editing capabilities of the model are compromised after low-level translations and reconstructions. It is obvious that modifying the height of the base in the original model (e.g., from 25 to 40 mm) would result in a new model with a taller base, the same cylinder on top of the base, and a circular hole that becomes blind, since its length (60 mm) no longer covers the total height of the shape (40 + 35 mm). However, most, if not all, of the reconstructed models incorrectly assumed that the hole was a through hole. Therefore, redesigning the base does not produce a shape with a blind hole.
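As a hypothetical illustration of the reusability that is lost, re-running the CadQuery sketch from the beginning of Sect. 4 with a taller base keeps the 60 mm hole depth fixed, so the hole correctly becomes blind; an explicit model reconstructed with a through-hole feature would not behave this way. All values other than the three heights remain assumptions.

import cadquery as cq

base_w, base_d, base_h = 80.0, 60.0, 40.0   # base height raised from 25 mm to 40 mm
cyl_d, cyl_h = 40.0, 35.0
hole_d, hole_len = 12.0, 60.0               # hole depth left unchanged

part = (
    cq.Workplane("XY")
    .rect(base_w, base_d).extrude(base_h)
    .faces(">Z").workplane()
    .circle(cyl_d / 2.0).extrude(cyl_h)
    .faces(">Z").workplane()
    .circle(hole_d / 2.0).cutBlind(-hole_len)  # a 60 mm hole in a 75 mm tall part: now blind
)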


5 Conclusions

In our view, intelligent translators do not yet properly convey the quality of the original CAD models, which limits their usefulness in MBE scenarios. The STEP standard is considered the most capable neutral file format in terms of exchange and storage of product data. However, writing a STEP file is complex, because developers must adhere to an implementation method that is extensive, strict, and includes many variants. Despite no apparent design flaws in the specification, the implementation of the protocol may be difficult, primarily because of the lack of partial conformance classes, which enforces the implementation of the full application protocol. We also posed the hypothesis that there might not be significant industrial interest in developing the protocol, possibly because organizations prefer not to share their know-how and their procedural models. We have proposed a solution based on providing control to include or exclude the procedural information on demand. A third consideration was the emergence of intelligent translators that infer the modeling sequence from explicit models. These translators are particularly useful for producing procedural models from inputs with low semantic levels, such as 3D scanned data acquired through reverse engineering. However, our study revealed that these tools fail to replicate the original procedures that are lost after a basic format translation. We advocate for developing practical implementations of AP 242 with at least basic support for procedural CAD models, which would enable the exchange of high semantic level CAD models. This strategy would be reinforced if specific conformance classes were defined for the application protocol, thus enabling official partial implementations.

References

1. Frechette, S.P.: Model based enterprise for manufacturing. In: Proceedings of the 44th CIRP International Conference on Manufacturing Systems, Madison, WI (2011)
2. Otey, J.M., Company, P., Contero, M., Camba, J.D.: A review of the design intent concept in the context of CAD model quality metrics. In: 2014 ASEE Annual Conference & Exposition, Indianapolis, Indiana (2014)
3. Várady, T., Martin, R.R., Cox, J.: Reverse engineering of geometric models—an introduction. Comput.-Aided Des. 29(4), 255–268 (1997)
4. Kim, J., Pratt, M.J., Iyer, R.G., Sriram, R.D.: Standardized data exchange of CAD models with design intent. Comput.-Aided Des. 40(7), 760–777 (2008)
5. Bianconi, F., Conti, P., Di Angelo, L.: Interoperability among CAD/CAM/CAE systems: a review of current research trends. In: Geometric Modeling and Imaging New Trends, pp. 82–89 (2006)
6. Bidarra, R., Bronsvoort, W.F.: Persistent naming through persistent entities. In: Proceedings - Geometric Modeling and Processing: Theory and Applications, GMP 2002, pp. 233–240 (2002)
7. Marcheix, D., Pierra, G.: A survey of the persistent naming problem. In: Proceedings of the Symposium on Solid Modeling and Applications, pp. 13–22 (2002)
8. Hoffmann, C.M., Juan, R.: EREP, an editable, high-level representation for geometric design and analysis. In: Wilson, P., Wozny, M., Pratt, M. (eds.) Geometric and Product Modeling, pp. 129–164. North Holland (1993)


9. Shih, C.H., Anderson, B.: A design/constraint model to capture design intent. In: Proceedings of the 4th ACM Symposium on Solid Modeling and Applications, pp. 255–264 (1997)
10. Choi, G.H., Mun, D.H., Han, S.H.: Exchange of CAD part models based on the macro-parametric approach. Int. J. CAD/CAM 2(1), 13–21 (2002)
11. Shi, Y., Zhang, Y., Xia, K., Harik, R.: A critical review of feature recognition techniques. Comput.-Aided Des. Appl. 17(5), 861–899 (2020)
12. González-Lluch, C., Company, P., Contero, M., Camba, J.D., Plumed, R.: A survey on 3D CAD model quality assurance and testing tools. Comput.-Aided Des. 83, 64–79 (2017)
13. Safdar, M., et al.: Feature-based translation of CAD models with macro-parametric approach: issues of feature mapping, persistent naming, and constraint translation. J. Comput. Des. Eng. 7(5), 603–614 (2020)
14. Feeney, A.B., Frechette, S.P., Srinivasan, V.: A portrait of an ISO STEP tolerancing standard as an enabler of smart manufacturing systems. J. Comput. Inf. Sci. Eng. 15(2), 021001 (2015)
15. Cai, X., Li, W., He, F., Li, X.: Customized encryption of computer aided design models for collaboration in cloud manufacturing environment. ASME J. Manuf. Sci. Eng. 137(4), 040905 (2015). https://doi.org/10.1115/1.4030592
16. Cai, X.T., Wang, S., Lu, X., Li, W.D., Liang, Y.W.: Parametric and adaptive encryption of feature-based computer-aided design models for cloud-based collaboration. Integr. Comput.-Aided Eng. 24(2), 129–142 (2017)
17. CAx Interoperability Forum: AP242 Implementation Coverage (2021). https://www.cax-if.org/cax/vendor_info.php. Accessed 15 Dec 2021
18. Quality Information Framework (QIF). https://qifstandards.org. Accessed 15 Dec 2021
19. Company, P., Contero, M., Otey, J., Plumed, R.: Approach for developing coordinated rubrics to convey quality criteria in MCAD training. Comput.-Aided Des. 63, 101–117 (2015)

Generative Design in Bicycle Frames: A Review

Valentin Gomez-Jauregui1(B), Carlos Vigil-Fernández2, Cristina Manchado1, and César Otero1

1 R&D EgiCAD, School of Civil Engineering, Universidad de Cantabria, Avda. de los Castros 44, 39005 Santander, Cantabria, Spain
[email protected]
2 School of Industrial Engineering, Universidad de Cantabria, Avda. de los Castros 46, 39005 Santander, Cantabria, Spain

Abstract. This paper presents an analysis of projects related to generative design (GD) applied to bicycle frames. GD constitutes an innovative working methodology, providing designers with computer algorithms capable of generating potentially efficient designs that respond to established design objectives and constraints. Its application to the design of different types of mechanical components is growing, and bicycle frames are a clear example of this. The main reason why GD stands out among other work procedures is its ability to resolve and generate complex digital geometric models. Furthermore, the case studies analyzed reflect a clear convergence towards two common goals achieved through the use of GD: the efficient use of materials and, consequently, the reduction of costs associated with the production process. However, there are not many scientific communications about GD applied to bicycle frames, and only a few projects deal with this subject. In short, GD, supported by additive manufacturing (AM), can become a turning point in the design and construction of components with high performance and reduced costs compared to traditional methods.

Keywords: Generative design · Bicycle frame · Additive manufacturing · Topological optimization · Lattice structures

1 Introduction

1.1 Main Concepts

Generative design (GD) is an iterative design process in which the designer adopts as a starting point the definition of a series of geometric and mechanical constraints or limitations to which the final solution must respond [1]. These restrictions can be, for example, the conservation of a certain geometry, guaranteeing resistance to certain stresses, or the use of the least possible material, among others. The resolution of these technical conditions is based on the use of design and analysis software with the capacity to contemplate a wide range of possible scenarios, while keeping all results within the range of the initially established restrictions by means of algorithms.


Topology optimization (TO) is a process that gives the material an optimal distribution in a given volume, minimizing a chosen objective and following a series of constraints. It creates designs with minimal mass and maximal stiffness (Fig. 1.a). The main difference between GD and TO is that, while the latter focuses on removing unnecessary material from an original shape in order to lighten it, the former does not use an initial shape, but searches for the optimal way of growing the structure intelligently, as organic systems do in nature (bones, plants, etc.). Another method to lighten parts is to convert them to lattice structures (LS): three-dimensional frameworks composed of consecutively and repeatedly arranged interconnected cells (Fig. 1.b), which can also be understood as a porous material structure composed of interconnected struts and nodes in three-dimensional space [2]. However, these techniques are usually considered so similar that many projects based only on TO or LS are regarded as optimized by means of GD. Therefore, in this communication, no special mention will be made of TO or LS unless it is explicitly specified. This process is on the rise thanks to technological advances and the application of artificial intelligence (AI) algorithms that facilitate and optimize the process, through the training of models and the exploration of a wider range of permutations around the same solution. Ultimately, GD focuses on solving complex component design challenges, with the aim of minimizing material usage and consequently weight and manufacturing costs, which translates into performance optimization. In other words, GD is synonymous with optimum results with minimum effort, in engineering terms.

Fig. 1. The same mechanical part optimized by a) TO using Autodesk Generative Design [3] (left) and b) lattice structure using Rhino and Grasshopper.

It is now possible to translate the results of GD into reality thanks to additive manufacturing (AM). This processing method creates a 3D object by stacking materials (mainly polymers, ceramics and metals) layer by layer from a 3D model. This method is expected to improve the performance of structural parts because it can create more complicated shapes than conventional manufacturing methods, such as machining, milling and casting [4]. This innovative manufacturing method serves as a post-design tool for GD, enabling the fabrication of complex models which previously were not feasible to manufacture. Due to the complex possibilities and the vast state of the art related to AM, this communication will focus on GD and not on AM, although some proposals have been brought to reality thanks to this fabrication process.


1.2 Literature Review

GD is an innovative methodology, so at present the literature associated with its application to real projects related to bicycle frame design is not very widespread. In fact, databases like Web of Science, Scopus, Engineering Village or IEEE Xplore do not host any scientific paper focused on GD for bike frames. Therefore, for the development of the present document, a review of a series of real projects focused on GD has been carried out. As the GD methodology is still in a development phase and benefits from AM to fabricate elaborate designs, a review on design for additive manufacturing (DFAM) of general application will also be carried out in future communications. In addition, the authors have carried out a search for other types of communications and projects related to the analysis and optimization of bicycle frames using the aforementioned technologies. The results of this research are developed in Sect. 3.

2 Different Types of GD

Within GD, five different types can be distinguished, based on the internal logic or language of the algorithms used to solve the problem: shape grammars, L-systems, cellular automata, genetic algorithms, and swarm intelligence. A brief explanation of each one is given below.

2.1 Shape Grammars (SG)

The SG type is based on the definition of a series of shape rules whose application results in a set of geometric shapes, generally two- or three-dimensional. The SG rules seek to define or establish how a previously created geometry can undergo modifications. Based on this, the shape grammar is defined considering a minimum set of three shape rules: an initiation rule, a transformation rule, and an ending rule [5]. Shape grammars stand out for their power both as an analysis tool for existing models and as a design tool for the generation of new model design languages [6].

2.2 L-Systems (LS)

The LS type, also known as the Lindenmayer system, encompasses a series of mathematical algorithms that are characterized by their application in biology [6]. Specifically, their application is focused on the representation of the biological growth of plants; however, this type of design has been used for modeling the morphology of a diversity of organisms. This type of GD is based on a series of production rules that are applied recursively through string rewriting; that is, the application of different string rewriting mechanisms makes it possible to generate a representation of the design or model, as sketched below. Another of its applications is the generation of fractal forms with self-similarity, since the nature of these systems is recursive, like iterative function systems.
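As a minimal illustration of the string-rewriting mechanism just described (our own sketch, not taken from the cited references), the following Python fragment applies the production rules of a simple two-symbol L-system for a few generations.

# Minimal L-system: start from an axiom and apply the production rules in parallel
rules = {"A": "AB", "B": "A"}   # Lindenmayer's classic algae example
axiom = "A"

def rewrite(word: str, generations: int) -> str:
    for _ in range(generations):
        word = "".join(rules.get(symbol, symbol) for symbol in word)
    return word

for g in range(6):
    print(g, rewrite(axiom, g))   # A, AB, ABA, ABAAB, ...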


2.3 Cellular Automata (CA)

Another type of GD is the so-called CA, which consists of a collection of cells organized on a grid of a given shape and with a living character, i.e., the shape of the cells evolves over time according to the spatial guidelines set by the neighboring cells, which makes the process context-sensitive [6]. Since the cells are enclosed by a grid (e.g., Cartesian), this type of algorithm is limited to the grid, so the operations are limited to the type of grid used in the design. In short, CA is characterized as a dynamic system that evolves in discrete steps. Its scope of application is oriented to the modeling of natural systems that constitute a massive collection of simple objects interacting with each other locally.

2.4 Genetic Algorithms (GA)

GAs are another type of algorithm used for GD and are characterized by offering high quality solutions to optimization problems. This type of algorithm is based on biological operators such as mutation, crossover, or selection [7]; it is an evolutionary process applied to the candidate solutions themselves. That is to say, each solution is characterized by a set of properties that vary, generally represented in binary form as strings of 0s and 1s. These algorithms use evolutionary operators on a population of states within a search space in order to find the states that optimize the objective function [6]. This search space is characterized by being formed by a series of character strings of a given alphabet, with a fixed or variable length.

2.5 Swarm Intelligence (SI)

The last type of GD that can be used is an agent-based model, defined as a software system with the ability to act autonomously based on its own principles, which is called SI [6]. SI constitutes a basis for the exploration of collective or distributed solutions to different problems that lack centralized control or a global model. Therefore, its application is focused on the study of social or collective behaviors. In other words, these algorithms constitute a population of simple agents or bodies that interact with each other and with their environment. These agents are governed by simple behavioral rules that lead to an intelligent group behavior that the individual agents are unaware of.
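Before moving on, the sketch below (our own, purely illustrative) shows the evolutionary loop that the GA description in Sect. 2.4 refers to, evolving binary strings with selection, crossover and mutation towards a toy fitness function; a real GD application would replace the fitness with a structural or aerodynamic evaluation of a candidate frame.

import random

BITS = 20                 # toy objective: maximize the number of 1s in a 20-bit string
POP, GENS, MUT = 30, 60, 0.02

def fitness(ind):
    return sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    new_pop = []
    for _ in range(POP):
        p1, p2 = tournament(pop), tournament(pop)
        cut = random.randrange(1, BITS)                                  # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [1 - g if random.random() < MUT else g for g in child]   # mutation
        new_pop.append(child)
    pop = new_pop

best = max(pop, key=fitness)
print(fitness(best), "".join(map(str, best)))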

3 Design of a Bicycle Frame

As mentioned at the beginning of the paper, the purpose of this document is to analyze the application of GD to bicycle frames as opposed to traditional design procedures and methodologies. The frame is the central and biggest component of a bicycle, and its dimensions (length and bar angles) have a direct impact on the comfort of the user and the efficiency of the assembly.


The frame of a road bike is both elegant and minimalist. It is designed to use the minimum amount of material, yet be stiff and strong. Its traditional diamond shape has not been improved upon in a hundred years, even though some minor changes have been implemented recently. No better solution has been found to transmit the compression and traction loads of the ground and the rider. Its tubular components keep the bike frame flat even when a thousand watts (the maximum power of a Tour de France sprinter) tries to twist the frame [8]. It should be noted that the geometric and material characteristics of these tubular members benefited from the reconversion of companies dedicated to the manufacture of weapons at the end of World War II. The decline in the production of guns and rifles was a turning point, which gave rise to the innovative idea of finding a new functionality for the metal barrels of firearms. A clear example of this is the company BH Bikes [9]. Despite current improvements in bicycle frame design, the design itself emerged without the advantages of CAD, software simulation or GD. It evolved through a series of attempts and tests that led to the current form (Fig. 2), varying both the geometry and the materials used (steel, aluminum, carbon fiber, etc.).

Fig. 2. Current design of a bicycle frame, diamond tubular [8]

Attempts to make a more efficient bicycle frame have been numerous. Some have resulted in unique and extravagant structures, but to date no design seems to have achieved the same efficiency as the diamond tubular frame. GD now raises the question of whether the current bicycle frame design is really the optimal solution, whether all possible designs have been explored, or whether human imagination and creativity have simply reached their limits. GD has the potential to explore design possibilities without prejudice or limitation, regardless of aesthetics, symmetry of the design, or feasibility of being manufactured by conventional methods. Its sole objective is to find an optimal solution and, because it is computerized, it can subject a myriad of possibilities to evaluation.


4 Study Cases

Practical applications of GD are becoming more and more frequent, and this allows progress in the development of the design methodology. The algorithms used to determine the different solutions are evolving to be more efficient and to find the desired solution using less time and fewer resources. The application of GD can be limited to the design phase or the production phase of a component or element. Today, GD represents both a business challenge and a research opportunity. GD has made it possible to go further in the design of this type of component, focusing on the search for a balance between good ergonomics and low riding resistance; in short, a frame that is lighter and more maneuverable for the rider, while at the same time offering greater resistance to the forces exerted on it. The aesthetics of the frame would be of secondary importance. There are currently no articles on GD applied to bicycle frame design in the most widely used databases such as Scopus, Web of Science, Engineering Village or IEEE Xplore. However, there are projects related to this subject, as will be seen below.

4.1 Preliminary Projects

One of the first works was developed by Kenny Cornett and Michal Musiol in 2017 [10], in which they explored the use of GD with Autodesk tools to design a bicycle frame. The project focused on the study of the forces on the frame, leaving aside the aesthetics, which resulted in rather peculiar shapes that remained preliminary, unvalidated concepts, as shown in Fig. 3.

Fig. 3. Prototype of the bicycle frame applying GD focusing on material savings [10].

4.2 A New Bicycle Frame Through GD

The sports company Decathlon, originally from France (where one of the world's biggest cycling events, the Tour de France, takes place), has been the first to opt professionally for a new bicycle frame design using GD [11]. The company's team of designers and engineers, led by Charles Cambianic, is responsible for driving the use of this technology within the company as part of a program called “Vision Projects”, which was born as a result of the increased demand for high-performance bicycles in 2019.


Although the frame still exists only as a design rendering, the team's main objective has been to understand how GD could be used to improve aesthetics and, secondly, to explore the performance gains achievable in line with the aerodynamics of time trial bikes.

Fig. 4. Decathlon bicycle frame using GD from Autodesk [11]

For the development of this innovative project, Decathlon made use of Autodesk tools and technology, using software such as Fusion 360 with intelligent algorithms and computational geometry. This technology allowed the team to design the frame (Fig. 4) for use on a road bike focused on time trials. Wind resistance heavily influences this discipline, so one of the goals was to reduce it as much as possible. As expected, Decathlon's GD frame departs from the traditional geometry, i.e., tubes, and approaches the design with new forms of attachment such as tendrils (filiform or branched organs that serve as plant attachments), which are also used in bone growth algorithms. However, not only the shapes change, but also the connections. As can be seen in Fig. 4, on the one hand, the upper seat stay connects to the seat tube at its center, and not at its upper end as in traditional designs. On the other hand, since these are preliminary concepts, certain issues such as the connection with the seat post or handlebars have not been studied and analyzed in depth, and will be addressed in later phases of the project. Although maximum torsional loads are not developed in a time trial bicycle, bicycle manufacturers consider aerodynamics to be of vital importance. A bare GD frame can be an aerodynamic disaster, with streamlines that are unlikely to develop good airflow through its twisted geometry. It could have a larger projected area, more surface area and more surface drag. This is probably what led Decathlon to enclose the GD frame with a smooth transparent cover over the fork and the front of the frame, as shown in Fig. 5 [11].


Fig. 5. Decathlon’s GD bicycle frame covered by a transparent envelope.

The last part of the study carried out by Decathlon mentions how the use of 3D printed aluminum parts is more environmentally friendly than the hydrocarbon-based plastics used in carbon fiber bicycle frames.

4.3 The Fastest Bicycle in the World

The aim of this project, led by Mike Burrows together with Glen Thompson and Barney Townsend from London South Bank University [12], is to optimize Burrows' initial design by lightening the vehicle through GD, in an attempt to break the world bicycle speed record. The project focuses on the rider's cockpit, more specifically the frame (Fig. 6).

Fig. 6. Comparison between the initial design and the final design with GD [12]

4.4 Other Bicycle Designs Brought to Reality Through Additive Manufacturing (AM)

The tool that currently gives viability to the models created with GD, TO or LS, when transforming them into reality, is AM. Thanks to this technique, it has been possible to


carry out numerous alternative designs of bicycle frames, although these designs are not necessarily created through GD. Some examples are shown below.

The SOLID Bike
This model, carried out jointly by the design company Industry and the bicycle frame manufacturer Ti Cycles [13], has been created using titanium as the input material for the 3D printer (Fig. 7). The methodology was the following:

• Taking photographs of life-size sketches and importing them into Fusion 360
• Creating the 3D models with specialized software
• 3D printing of the plastic components and testing of the components
• 3D printing of titanium components
• Welding of the components and final finishing.

Fig. 7. The SOLID bike final model [13]

Arc Bike II
Amsterdam-based MX3D [14] has used AM and proprietary custom software that controls a robotic arm to create the frame and fork of this unique design, made of aluminum (Fig. 8). The design is produced by GD software to match the rider's body proportions. The MX3D software then translates it into the robot's instructions, allowing the frame to be 3D printed within 24 h [14].


Fig. 8. The Arc Bike II [14]

Arc Bicycle
A team of industrial design students at the Delft University of Technology has designed and produced a fully functional bicycle 3D printed in stainless steel [15]. This process was carried out together with the company MX3D and took approximately three months. This bicycle was designed as part of a research project focusing on the feasibility of 3D printing metal using welding processes, carried out at Delft University of Technology and the 3D construction field laboratory (Fig. 9).

Fig. 9. Arc Bicycle created by Delft University of Technology students [15]

There are some other examples, like Arevo’s ‘Emery One’ eBike, the first 3D printed carbon fiber unibody bike frame in 2018 (Fig. 10.a). In contrast to current composite frames that consist of many parts glued together, the Emery One’s frame is made as a


single part. Previously, the first metal 3D printed bicycle frame was manufactured by Renishaw for Empire Cycles in 2014 (Fig. 10.b), using TO to obtain a frame 33% lighter than the original. Another project is the Fix3D bike frame, by James Novak (2015), using customizable pieces that may be more lightweight and stronger than traditional frames by means of the complex lattice structures typically seen in nature (Fig. 10.c).

Fig. 10. a) Arevo's Emery One bike frame (left). b) Metal AM bicycle frame by Empire Cycles (center). c) Fix3D bike (right) by James Novak.

5 Conclusions

The appearance of a bicycle is no longer limited to traditional styling. Innovative research and development have led experts toward ergonomic designs with low riding resistance, designing lightweight frames with increased strength and improved handling to meet user needs. GD is an emerging philosophy of DFAM that, as captured throughout the article, enables commercial AM results in scenarios that would otherwise be technically or economically intractable. This paper shows some study cases of bike frames designed by means of GD, TO, LS and AM. However, there is still a lack of information about the algorithms, weights, efficiency, resistance, aerodynamics or any other performance figures for them. This fact points out the importance of developing a rigorous analysis and comparison of GD applied to bike frames, which is the main goal of our current line of research. The shift towards this design philosophy allows GD software to develop parametric models that are automatically reconstructed within given parameters, using shapes and structural configurations observed in nature, which are stronger and lighter than traditional human designs. Therefore, thanks to GD, the design phase of a prototype can become more efficient and faster through the use of tools with AI algorithms, computational geometry, etc. compared to traditional mass-customized design procedures. In other words, GD breaks down barriers and solves major technical challenges linked to the automation and digitization of the component design process.


6 Future Directions

Driven by consumers' or end-users' desire for customized products that meet their unique needs, GD along with AM have a promising future ahead of them. It is possible to customize a design using a person's biometric data; however, today's high-end CAD tools require advanced skills to even begin to apply basic control over a design, which is certainly beyond the capabilities of an end user [16]. To keep up with the demands of designers, the tools needed to design and test the new shapes made possible by AM technology need to be integrated into CAD software, growing along with 3D printing. Ultimately, it follows that GD, hand in hand with AM, will significantly improve the ability to customize a bicycle frame to suit the morphology and needs of each rider.

References

1. Leary, M.: Design for Additive Manufacturing, p. 358. Elsevier, Melbourne (2020)
2. Pan, C., Han, Y., Lu, J.: Design and optimization of lattice structures. Appl. Sci. 10, 6374 (2020)
3. Fransoy, P.: Bloquetech, Ingeniería y diseño, 3 June 2020. https://www.bloquetech.com/eldiseno-generativo/. Accessed 27 January 2022
4. Miki, T., Nishiwaki, S.: Topology optimization of the support structure for heat dissipation in additive manufacturing. Finite Elem. Anal. Des. 203, 103708 (2022)
5. Giao, P.N.Q., Han, S.-H.: Reconstructive analyses on architectural design performance for the spatial sustainability by interactive shape grammars. Int. J. Adv. Mech. Civil Eng. 4(5) (2017). ISSN: 2394-2827
6. Singh, V., Gu, N.: Towards an integrated generative design framework. Des. Stud. 33, 185–207 (2011). https://doi.org/10.1016/j.destud.2011.06.001
7. Yang, X.-S.: Genetic algorithms. In: Nature-Inspired Optimization Algorithms, pp. 77–87. Elsevier (2014)
8. Tara, R.: Engineering.com, 31 December 2018. https://www.engineering.com/story/the-generative-design-challenge-build-a-better-bicycle-frame. Accessed 27 January 2022
9. BH, Beistegui Hermanos, BH Bikes (2022). https://www.bhbikes.com/es_ES/acerca-de/quienes-somos/mas-de-100-anos-de-historia. Accessed 27 January 2022
10. Musial, M., Cornett, K.: Autodesk University (2017). https://www.autodesk.com/autodeskuniversity/es/forge-content/au_class-urn%3Aadsk.content%3Acontent%3A1e0bc95b-67c74702-bff0-a90f29612814. Accessed 27 January 2022
11. Tara, R.: Engineering.com (2021). https://www.engineering.com/story/can-a-generativelydesigned-bike-please-get-better. Accessed 27 January 2022
12. Burrows, M., Thompson, G., Townsend, B.: Aim93: Designing the Fastest Bicycle in the World. Autodesk University (2018). https://www.autodesk.com/autodesk-university/es/forge-content/au_class-urn%3Aadsk.content%3Acontent%3Af81145cf-38e4-44c2-a91857fa3a47db45#presentation
13. Hurley, S.: Autodesk Blogs (2019). https://autodesk.blogs.com/between_the_lines/2019/06/3d-printed-titanium-solid-bike-in-portland.html
14. Arthur, D.: Tech news (2019). https://road.cc/content/tech-news/266380-meet-arc-bike-ii-radical-3d-printed-aluminium-bike-amsterdam
15. Boruslawski, P.: Designboom (2016). https://www.designboom.com/technology/tu-delft-arcbicycle-mx3d-02-04-2016/
16. Novak, J.I.: A Study of Bicycle Frame Customization Through the Use of Additive Manufacturing Technology. Queensland College of Art, Griffith University, Australia (2015)

Geometric Analysis of Product CAD Models to Support Design for Assembly

Brigida Bonino1, Franca Giannini2, Marina Monti2, Roberto Raffaeli3(B), and Giovanni Berselli1

1 DIME, Università degli Studi di Genova, Genova, Italy
2 IMATI, Consiglio Nazionale delle Ricerche, Genova, Italy
3 INTERMECH-DISMI, Università degli Studi di Modena e Reggio Emilia, Reggio Emilia, Italy
[email protected]

Abstract. Design For Assembly (DFA) aims at improving product design, facilitating assembly phases via the application of evaluation metrics and design guidelines. However, DFA analyses are usually performed manually and the adoption of supporting tools is limited. This paper investigates the application of algorithms that extract from CAD assembly models the data required to perform automated DFA analyses, thus providing a tool to support designers' everyday work. In particular, attributes from geometric feature recognition algorithms, solid properties and assembly parts' semantics are leveraged and mapped to the parameters required to accomplish DFA evaluations. The proposed approach is illustrated on a 3D printer for home use. At first, a manual DFA analysis has been performed on the product, identifying the product BOM, component properties, assembly cycle and times according to models in the literature. Then, the CAD model of the printer has been processed with some geometric algorithms to verify the possibility of extracting the required data to be used as input to the DFA analysis. The test case has demonstrated the feasibility of the approach, even if some design considerations and improvement directions still need the critical evaluation of the designer.

Keywords: Design For Assembly · Assembly semantics · Part recognition · Feature recognition

1 Introduction

Market pressures force companies to design products characterized by higher performance and reduced costs. Design For Manufacturing and Assembly is a set of approaches that aim at optimizing a product's shapes and structures to facilitate the manufacturing process, shorten production cycles and reach better quality [1]. In particular, Design For Assembly (DFA) aims at facilitating product assembly phases. Research approaches, such as those proposed by Boothroyd and Dewhurst (B&D) [2], by Lucas and Hull [3], and by Hitachi [4], are well established in industrial practice and have shown great capability of reducing overall costs. One major goal is to limit the


number of parts composing a product, so as to reduce the number of operations in the assembly phases, and to adopt solutions that shorten the time spent locating and fixing parts relative to each other. However, the application of such methods for the evaluation of novel, possibly optimal, design solutions is based on manual procedures supported by spreadsheets or software systems. The software support is mainly limited to data collection on the product structure and on the type, geometric shape, attributes and symmetry information of the constituent components. Then, the software helps to compute indices summarizing the assembly performance of the solution. Such an approach is time consuming, thus preventing its extensive application in design departments. As a matter of fact, DFA approaches are applied as an optimization tool only for mass production products, where the quantities being produced justify the investment in time and consultancy services for elaborating optimized design solutions for established products on the market. In this context, the research presented here aims at speeding up assembly evaluations by identifying, in an automated manner, many of the geometric characteristics of a product that are useful to calculate assembly performance indices, detect weak points in the assembly process and possibly suggest improvements to the solution. Such an analysis exploits 3D CAD representations, geometric feature recognition and part classification approaches, in order to provide descriptions of assemblies from a semantic point of view [5, 6]. The proposed approach includes the recognition of standard parts and fasteners, the analysis of mounting conditions, the detection of the allowed relative displacements between parts, and of the accessibility of components for mounting or disassembly operations. The combination of the extracted semantic data allows the automatic calculation of DFA indices and the identification of possible design improvements, according to the DFA guidelines. The rest of the paper is organized as follows. Section 2 provides a short recall of DFA approaches and the required relevant data, providing the motivation for the work; then, in Sect. 3, a framework is presented to derive these data from CAD models. The approach is exemplified on a test case in Sect. 4 and, finally, some conclusions and future directions are discussed in the conclusion section.

2 Background and Motivations

DFA is a systematic procedure aiming at improving design quality and reducing assembly cost. The goal is reached by minimizing assembly time, and the efficiency in the application of the approach is captured by the renowned DFA Index [2], expressing the ratio between an ideal assembly time (based on a minimum number of components Nmin and a minimum theoretical assembly time per component tmin) and the actual assembly time TA:

DFA Index = (tmin * Nmin) / TA    (1)

A higher DFA Index reflects an improved design solution and can clearly be achieved both by reducing the number of components and by improving the execution time of each single assembly task.
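For instance, Eq. (1) can be evaluated directly; the numbers below are purely illustrative and are not taken from the paper's test case.

def dfa_index(n_min: int, t_min: float, actual_time: float) -> float:
    """DFA Index of Eq. (1): ideal assembly time over actual assembly time."""
    return (t_min * n_min) / actual_time

# Illustrative values: 12 theoretically necessary parts, 3 s ideal time per part,
# 150 s measured assembly time -> DFA Index = 0.24
print(dfa_index(n_min=12, t_min=3.0, actual_time=150.0))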


Regarding the first point, a limited number of parts is beneficial since it significantly reduces the numerous company activities which potentially need to be performed for each single part code: design, drafting, CAE simulation, coding, acquisition, manufacturing planning, logistic chain organization, maintenance organization, etc. In particular, the B&D methodology [2] strongly pushes for the reduction of the number of components. The goal is pursued in a twofold manner. At first, parts whose function can be realized in an alternative, more efficient manner are eliminated. Typically, this refers to fasteners such as screws, nuts, washers, studs, etc., which are substituted by solutions requiring much lower assembly times, such as snap fits. Secondly, separate parts are consolidated into single components whenever possible, subject to three specific conditions:

– Parts are realized with the same material;
– Parts are not subjected to relative displacement;
– Parts are not required to be separated for assembly or disassembly operations.

As a second strategy, assembly time is reduced by design strategies which leverage improved part geometries and arrangement. This refers to part symmetries, unambiguous contact conditions, and wide visibility on the parts to be assembled. In a similar way, in the Lucas and Hull approach [3] the effort required for assembly tasks has been investigated, dividing the required time between handling and fitting operations. Handling refers to tasks required for feeding and manipulating parts. It depends on the size and weight of the part, manipulation difficulties and orientation ambiguities. Fitting includes the time to insert and secure each part. Here, again, the main drivers to consider are the number of components and the method used to fix them. From this brief overview of DFA strategies, it is evident that the geometry of the parts and their relative position in the assembly are the most important aspects considered by the majority of the analyses and of the consequent actions to improve the product design. It is important to observe that the reported approaches have been mainly developed for low volume production batches. This is the case where manual assembly is normally adopted and the most significant results in terms of gained efficiency can be pursued by an efficient application of DFA. However, the reported analysis is basically manually operated. The scientific literature lacks methods for the automation of the required product analysis. Also, from an industrial point of view, few commercial tools exist on the market to support DFA analysis. Most of the software, such as DFMA® from Boothroyd Dewhurst Inc. [7], are basically supporting systems requiring manual input of the data regarding the characteristics of the parts of the product. Some other tools leverage the shape analysis of parts and geometric feature recognition algorithms, such as DFMPro® [8]. However, in the latter case the scope of the analysis is mostly limited to a single part in a Design for Manufacturing perspective rather than an entire assembly investigation. In this context, it is evident that significant benefits can be obtained if DFA supporting tools are fed with data automatically extracted from the product CAD models [5]. This would allow a wider adoption of DFA strategies, leading to improved product quality and cost, and thus better addressing the current compressed time constraints in the product development process.


3 The Proposed Approach

The goal of this paper is to present a framework that leverages the information in CAD models of product assemblies to drive automated DFA analysis. The input is represented by analytic solid models of parts, i.e. boundary representations of parts, organized in multilevel assembly structures. To guarantee wider applicability, the input is given by the standard representation of products in mechanical CAD systems, also reflected in standard formats for file exchange, such as STEP files. Considering this standard product representation, three levels of information can be identified, as described in the following paragraphs.

3.1 Assembly Analysis

At this level, the relative positioning of parts in the assembly is analyzed to evaluate useful data (Fig. 1), including contact relations among parts [9] and assembly sequences [10, 11], and to detect accessibility issues [12]. A further analysis within the hierarchical structure of the product, i.e. the BOM, regards the identification of standard parts from their geometry and their use in the assembly [13]. In fact, parts whose shape is dictated by standard norms, such as screws, nuts, pins, washers, circlips, etc., are relevant to DFA analysis, since they can typically be related to fasteners, thus requiring particular attention by designers, who should be stimulated to find alternative, faster fixing solutions.

Fig. 1. Information relevant to a DFA analysis that can be extracted from a CAD model at assembly level.
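As a rough sketch of how candidate contact relations might be detected from tessellated parts (our own illustration, not the specific algorithms cited above; it assumes the trimesh library with its optional python-fcl collision backend and hypothetical file names), pairs of parts that interpenetrate or lie at near-zero distance can be flagged for further analysis.

import trimesh
from trimesh.collision import CollisionManager

# Hypothetical tessellated parts exported from the assembly, one file per part
parts = {name: trimesh.load(name + ".stl") for name in ("base", "bracket", "shaft")}

manager = CollisionManager()
for name, mesh in parts.items():
    manager.add_object(name, mesh)

# Pairs of parts that interpenetrate or touch (within the mesh tolerance)
colliding, pairs = manager.in_collision_internal(return_names=True)
print("candidate contact pairs:", pairs if colliding else set())

# Closest pair overall, useful to flag near-contacts below a small threshold
distance, closest_pair = manager.min_distance_internal(return_names=True)
print(closest_pair, "minimum distance:", distance)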

3.2 Component Analysis

A second level of analysis is to be conducted on the single component (Fig. 2). At first, custom designed parts, i.e. parts which are not recognizable as standard parts, can be


classified into major categories, which then helps to further describe the geometry in a more detailed manner. Such categories include:

– Axisymmetric solids, whose geometry is mainly characterized as a profile rotated around an axis;
– Blocks, whose geometry resembles or is drawn from a milled parallelepiped, as for plates;
– Sheet metal parts, which are obtained by folding and stamping metal foils whose thickness is constant and much lower than the other two sizes;
– Beams, which are obtained by cutting, bending and carving standardized profiles such as rods, tubes, plates, and other shapes such as IPE, HEA, HEB, L, etc.;
– Other shapes, which include all the shapes which do not fall into the previous categories and which are usually obtained by foundry, stamping, injection molding, advanced milling or additive manufacturing.

Fig. 2. Component level analysis to extract information relevant to a DFA analysis.

Once the category of the component has been identified, further geometric attributes can be extracted, such as weight, overall sizes, and symmetry along the principal inertial axes. Other non-geometric attributes, such as material or surface treatments, could be retrieved from the CAD model and effectively employed for the assembly process estimation. Indeed, the thickness distribution as well as the slenderness of a part, defined as the ratio between the mean thickness of the solid and its overall dimension, are important to describe the propensity of the component to deform. This aspect strongly influences the time required for handling and locating the part in the assembly phases, as clearly emerges from the DFA approaches recalled in the previous section.

3.3 Component Features Analysis

Finally, the analysis can be extended to detect characteristics related to single portions of the geometry, usually referred to as geometric features (Fig. 3). The literature presents


several methods to identify features from the representation of a single part [14], which includes the geometry and topology of the solid, given the boundary representation in terms of faces, edges and vertices. Here, feature recognition algorithms can be applied to identify features relevant for specific mounting solutions or conditions. In the context of DFA analysis, the most relevant examples are:

– holes, usually used as mating entities for screws, studs and pins to locate and secure parts to each other;
– circular grooves, which are specially created to host O-rings or circlips;
– keyways, added to the shape to accommodate keys that transmit torques between shafts and other mechanical organs.

Fig. 3. Geometric feature recognition to leverage information related to mating condition with adjacent components.

3.4 Geometrical Attributes Mapping

As emerges from the previous paragraphs, many geometric and non-geometric attributes can be extracted from the analysis of CAD models. The majority of the required algorithms are available in the literature or already implemented as functionalities of commercial CAD systems. Some other attributes require specific formulations, and the required computational effort may be considerable. To evaluate the possibility of performing an automated DFA analysis, the parameters necessary for estimating assembly times and applying the design guidelines were mapped to the attributes extractable from the geometry. Table 1 summarizes this mapping, identifying which attributes can be exploited to evaluate a certain aspect and what level of analysis is required for this purpose.
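As a rough illustration (our own, under the assumption that tessellated part models and the trimesh library are available), a few of the component-level attributes listed in Table 1, such as the largest bounding box size, the weight and the slenderness, could be approximated as follows; the file name, the density value and the thin-wall thickness estimate are assumptions.

import trimesh

mesh = trimesh.load("bracket.stl")       # hypothetical part model
density = 2.70e-6                        # assumed aluminium, kg per mm^3

extents = mesh.bounding_box_oriented.primitive.extents   # oriented bounding box sizes (mm)
largest_size = float(max(extents))
weight = mesh.volume * density                            # kg, if the mesh is watertight

# Crude mean-thickness estimate for thin-walled parts: volume over half the surface area
mean_thickness = 2.0 * mesh.volume / mesh.area
slenderness = mean_thickness / largest_size               # ratio defined in Sect. 3.2

print(largest_size, weight, slenderness)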


Table 1. Mapping of geometric and non-geometric attributes extracted from CAD models to DFA parameters.

Phase | Aspect relevant to DFA | Level of the geometrical analysis | Related attributes
General | Identification of components to be eliminated | Component | Standard part type
General | Foster the reduction of standard part types | Component | Standard part type and dimensions
General | Foster use of standard parts instead of custom designed ones | Component | Similar parts retrieval from repositories
General | Identification of components to be integrated with others (same material, no relative displacement, no need of separation for assembly/disassembly) | Component, Assembly | Material, Type of liaison, Assembly sequence
Handling | Easiness to grasp, manipulate, handle | Assembly, Component | Type of liaison, Product structure, Accessibility, Largest bounding box size, Material, Min and mean thickness
Handling | Thickness | Component | Min and mean thickness
Handling | Size | Component | Largest bounding box size
Handling and Insertion | Symmetry angle (alpha + beta) [2] | Component | Symmetry and principal inertial axes, Angle to repeat orientation along main axes [2]
Handling | Need for tweezers or special tools for grasping and manipulating | Component, Assembly | Largest bounding box size, Material, Component accessibility
Handling | Need for optical magnification | Component | Largest bounding box size
Handling | Additional grasping difficulties: sticky, delicate, slippery | Component | Material, Surface roughness, Surface treatments
Handling | Parts severely nest or tangle | Component | Material, Min thickness, Slenderness
Handling | Heaviness of the part | Component | Weight
Handling | Two persons, two hands or mechanical assistance | Component | Weight, Largest bounding box size
Insertion | Holding down required to maintain orientation and location after assembly | Assembly | Product structure, Stable subassemblies, Type of liaison
Insertion | Easiness to align and position | Component, Feature | Angles to repeat orientation along principal inertial axes, Strongly asymmetric features, Locating references, Presence of chamfers
Insertion | Resistance to insertion | Feature, Assembly | Mating faces GD&T, Mating faces roughness, Mating faces surface treatments, Type of liaison
Insertion | Part, tool and hands can reach the desired location | Assembly | Product structure, Accessibility
Insertion | Operator has a restricted view on the desired location | Assembly | Product structure, Visibility map
Insertion | Fastening process type | Assembly, Component, Feature | Type of liaison, Material, Standard part type, Custom designed part type, Presence of features to accommodate fasteners, Typical pattern of features
Insertion | Screw tightening required after insertion | Assembly, Feature | Type of liaison, Aligned threaded entities, Threaded holes
Insertion | Plastic deformation required after insertion | Component, Assembly | Material, Type of liaison
Insertion | Riveting required after insertion | Assembly | Type of liaison

4 Test Case

The approach is demonstrated through a test case represented by a 3D printer for home use, the Original Prusa i3 MK3S+ model [15]. Such a product is developed in an open-source context and is optionally assembled by the final end user. Therefore, assembly is
not industrialized for large batch production and constitutes an ideal test case to identify possible directions of design improvement following the DFA suggestions.

Table 2. Portion of DFA analysis of the YZ Frame subassembly according to the B&D method (Rp,i: part multiplicity; MH Code: manual handling code; TH,i: handling time; MI Code: manual insertion code; TI,i: insertion time; TA,i: operation time; TMP: estimation for theoretical minimum parts [0 - removable, 1 - non removable])

Part | Rp,i | MH Code | TH,i [s] | MI Code | TI,i [s] | TA,i [s] | Ci [€] | TMP | Design notes
Frame | 1 | 95 | 4 | 06 | 5.5 | 9.5 | 0.12 | 1 | Need for equipment to keep the frame vertical during the assembly operations of the aluminum extrusions
Longer extrusions | 2 | 00 | 1.13 | 08 | 6.5 | 15.26 | 0.19 | 1 |
Shorter extrusions | 2 | 00 | 1.13 | 08 | 6.5 | 15.26 | 0.19 | 0 | Consider to use 2 aluminum extrusions instead of 4; the extrusion design should be modified accordingly, possibly increasing the section from 30x30 to 30x60 millimeters
Screw M5 × 16 | 16 | 10 | 1.5 | 06–38–92 | 16.5 | 288 | 3.60 | 0 | To be eliminated (see previous note)
Front plate | 1 | 30 | 1.95 | 06 | 5.5 | 7.45 | 0.09 | 1 |
Nut M3nE (PSU) | 2 | 30 | 1.95 | 03 | 3.5 | 10.9 | 0.14 | 1 | To be eliminated if the PSU holding system to the frame is redesigned. Is it really necessary for the power supply to be connected to the frame?
Back plate | 1 | 30 | 1.95 | 06 | 5.5 | 7.45 | 0.09 | 1 |
Anti-vibration feet | 4 | 20 | 1.8 | 11 | 5 | 27.2 | 0.34 | 1 |
Nut M3n | 2 | 07 | 2.65 | 03 | 3.5 | 12.3 | 0.15 | 0 | To be eliminated if threaded holes are provided in the front plate
Nut nylon M3nN | 1 | 17 | 3.06 | 03 | 3.5 | 6.56 | 0.08 | 1 |
Bearing housing 623h | 1 | 11 | 1.8 | 06 | 5.5 | 7.3 | 0.09 | 1 |
Screw M3 × 18 | 1 | 10 | 1.5 | 38 | 6 | 7.3 | 0.09 | 1 |
Y-belt-idler | 1 | 30 | 1.95 | 08 | 6.5 | 8.45 | 0.11 | 0 | To be integrated in a redesigned front plate

On the Prusa website [15], the CAD models of the single parts of the machine are accessible as tessellated geometry for autonomous printing. Thus, solid B-Rep models have been modelled and assembled together in order to obtain a CAD model of the whole printer, as would be available for industrial products (Fig. 4). Assembly instructions can also be accessed by users who want to build their 3D printer autonomously.


Fig. 4. Original Prusa i3 MK3S + 3D printer. CAD model of the printer assembly rebuilt in Autodesk Inventor 2022 from the material available in the website [15].

In the meantime, a manual DFA analysis of the product assembly process has been carried out. Table 2 shows an example of the tables built in MS Excel by applying the B&D method to the sub-assemblies of the printer and then to the final assembly. The table also reports the procedures for computing DFA indices. Moreover, suggestions emerge for modifications of the design solution to improve the assembly process and the related time. In the case of large batches, industrialized assembly processes can be applied, requiring appropriate assembly times retrieved from a purpose-built database.

The algorithms required by the proposed approach have been implemented in a prototypal software module developed in collaboration with Hyperlean Srl, a software house active in the field of product cost management. The software leverages existing capabilities of the company's geometric kernel to process solid geometric representations by applying feature recognition algorithms, part type identification approaches and contact analysis [13]. The module has been tested on the 3D CAD model of the Prusa printer to automatically recognize the categories of the parts in the assembly model and the connections among them (Fig. 5). The software allows the geometric information to be processed, the assembly sequence to be analyzed, and the required assembly time to be computed according to DFA approaches (Fig. 6). The process proved to be fast and usable for the designer. The simulated assembly time has then been compared to the real mounting times experimentally evaluated during the construction of the same physical printer. For the analyzed test case, the measured times show a good correspondence with the calculated times, confirming the feasibility of the approach and the validity of the DFA method.
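As a minimal illustration of the bookkeeping behind Table 2 and the Pareto ranking of Fig. 6, the sketch below recomputes the operation times TA,i = Rp,i·(TH,i + TI,i) for a few rows and a Boothroyd–Dewhurst-style design efficiency (3 s of ideal time per theoretically necessary part over the total estimated time); the rows are copied from Table 2, while the efficiency computation is an assumption of the sketch and not necessarily what the prototypal software implements.

# Rows copied from Table 2: (part, Rp_i, TH_i [s], TI_i [s], TMP)
rows = [
    ("Frame",              1, 4.00,  5.5, 1),
    ("Longer extrusions",  2, 1.13,  6.5, 1),
    ("Shorter extrusions", 2, 1.13,  6.5, 0),
    ("Screw M5 x 16",     16, 1.50, 16.5, 0),
]

operations = []
total_time = 0.0
n_min = 0
for part, rp, th, ti, tmp in rows:
    ta = rp * (th + ti)          # operation time TA,i
    operations.append((part, ta))
    total_time += ta
    n_min += rp * tmp            # theoretically necessary occurrences of the part

# Pareto ranking of the assembly times (cf. Fig. 6)
for part, ta in sorted(operations, key=lambda x: x[1], reverse=True):
    print(f"{part:<20s} {ta:7.2f} s")

# Design efficiency in the Boothroyd & Dewhurst sense: 3 s of ideal time per necessary part
print(f"total {total_time:.1f} s, efficiency {3.0 * n_min / total_time:.2f}")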


Fig. 5. Prototypal software to extract geometrical data for DFA from CAD models, such as standards parts and relative dimensions.

Fig. 6. Pareto analysis of the assembly times required for the parts of the “YZ Frame” subassembly of the 3D printer. Data reported in the table refers to the whole 3D Printer.

5 Conclusions

The paper has explored the possibility of automatically deriving data from 3D CAD models of products to conduct DFA analyses. DFA requires heterogeneous input data that are partly easy to derive and partly require more sophisticated and original geometry processing, obtained by combining existing algorithms available in the literature. Therefore, a mapping of the required model attributes against the necessary DFA parameters has been conducted to drive further research towards the implementation of tools capable of an efficient application of DFA guidelines in product design. Future steps include the implementation of prototypal tools to extract the identified geometric attributes and possibly support the designer's work with alerts and suggestions.


References
1. Mandolini, M., Campi, F., Favi, C., Germani, M., Raffaeli, R.: A framework for analytical cost estimation of mechanical components based on manufacturing knowledge representation. Int. J. Adv. Manuf. Technol. 107(3–4), 1131–1151 (2020). https://doi.org/10.1007/s00170-020-05068-5
2. Boothroyd, G., Dewhurst, P., Knight, W.: Product Design for Manufacture and Assembly, 3rd edn. CRC Press, Boca Raton (2010)
3. Lucas Engineering Systems Ltd: Design for Manufacture and Assembly Practitioners Manual, Version 10. University of Hull (1993)
4. Miyakawa, S., Ohashi, T., Iwata, M.: The Hitachi new assemblability evaluation method. In: Transactions of the North American Manufacturing Research Institution (NAMRI) of the SME, the NAMR Conference XVIII, 23–25 May 1990, Pennsylvania State University, Dearborn, USA (1990)
5. Campi, F., Favi, C., Germani, M., Mandolini, M.: CAD-integrated design for manufacturing and assembly in mechanical design. Int. J. Comput. Integr. Manuf. 100, 235–240 (2021)
6. Raffaeli, R., Cicconi, P., Mengoni, M., Germani, M.: Modular product configuration: an automatic tool for eliciting design knowledge from parametric CAD models. In: Proceedings of the ASME Design Engineering Technical Conference, 1 (Parts A and B), pp. 207–218 (2010)
7. Boothroyd & Dewhurst software tools webpage. https://www.dfma.com/software/dfma.asp. Accessed 02 Feb 2022
8. DFMPro Homepage. https://dfmpro.com/. Accessed 01 Sep 2022
9. Ou, L.M., Xu, X.: Relationship matrix based automatic assembly sequence generation from a CAD model. Comput. Aided Des. 45(7), 1053–1067 (2013)
10. De Fazio, T.L., Rhee, S.J., Whitney, D.E.: Design-specific approach to design for assembly (DFA) for complex mechanical assemblies. IEEE Trans. Robot. Autom. 15(5), 869–881 (1999)
11. Bahubalendruni, M.R., Biswal, B.B.: A review on assembly sequence generation and its automation. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 230(5), 824–838 (2016)
12. Spitz, N.S., Requicha, A.G.A.: Accessibility analysis using computer graphics hardware. IEEE Trans. Visual Comput. Graphics 6(3), 208–219 (2000)
13. Bonino, B., Giannini, F., Monti, M., Raffaeli, R.: Enhancing product semantics understanding through automatic part type recognition in CAD assembly models. Comput. Aided Des. Appl. 19(5), 896–912 (2022)
14. Han, J.H., Pratt, M., Regli, W.C.: Manufacturing feature recognition from solid models: a status report. IEEE Trans. Robot. Autom. 16(6), 782–796 (2000)
15. Original Prusa i3 MK3S+ web page. https://www.prusa3d.com/product/original-prusa-i3mk3s-3d-printer-3/. Accessed 02 Feb 2022

Image-Driven Manufacturing of Graded Lattices by Fused Deposition Modeling

Luca Grigolato1,2(B), Stefano Rosso2, Enrico Bernardo1, Gianmaria Concheri2, and Gianpaolo Savio2

1 DII Department, University of Padova, Via Marzolo 9, 35131 Padova, Italy
2 DICEA Department, University of Padova, Via Venezia 1, 35131 Padova, Italy
[email protected]

Abstract. Lattice structures are receiving a renewed interest in many areas such as biomedical and industrial fields, due to the capabilities of additive manufacturing technologies which allow for the fabrication of very complex shapes. Currently, several methods and tools are described in the scientific literature and some commercial software are introducing dedicated packages to reduce designer efforts for lattice structure design and optimization. However, by using commercial CAD/CAM tools in the fabrication of components filled by lattice structure, several critical issues remain and need to be taken into consideration. This work aims at manufacturing variable-density lattice structures via fused deposition modeling deriving the density map from a grayscale or color image. In the proposed approach, the shell-based lattice model is not achieved by CAD tools, but only during the CAM process, while the lattice relative density is computed by editing the G-code, modifying the extrusion flow according to the local grayscale of a volumetric CAD model, defined from an image. The main advantages are related to the absence of a graded lattice geometric model and the consistency of the toolpath. The method is tested on various images and patterns, and can find applications in artworks, embedded information on components, and functional 3D printed parts, such as the replication of the density map of a bone derived from a DICOM grayscale image. Keywords: Additive manufacturing · Graded lattices · Heterogeneous objects · Fused deposition modeling

1 Introduction

Bridging the gap between geometric model complexity and manufacturing constraints is one of the main challenges in fully exploiting the capabilities of additive manufacturing technologies. Even if virtual geometric models suitable for powder bed fusion additive manufacturing can be obtained with a good compromise among design speed, model complexity, and technology, other technologies such as material extrusion (MEX) show difficulties, due to their manufacturing constraints, in managing the model complexity related to lattice structures, and even more so when dealing with shell-based lattice structures with variable thickness and/or variable density infill. Computer-aided
design (CAD) software tools are now increasing the capability of realizing these types of geometric models, but designer expertise is typically required [1, 2]. In addition, the resulting polygonized surface file can be extremely large, compromising the data exchange between designer and manufacturer in the 3D printing workflow. Computer-aided manufacturing (CAM) software first divides the object into layers along the printing direction and then generates the printing toolpath for each layer, producing the machine CNC programming language, i.e., the G-code file. Usually, this process is performed automatically by slicing software, but for complex shapes the CAM of lattice structures, especially thin-walled ones, can generate inconsistent toolpaths that are unable to reproduce the CAD model.

This work aims at manufacturing variable-density lattice structures via fused deposition modeling (FDM), deriving the density map from a grayscale image. As mentioned, using commercial CAD/CAM tools this result is difficult or even unobtainable. To overcome this limit, the shell-based lattice structure is not achieved by CAD tools but only during the CAM process, adopting different infill strategies, while the graded relative density is obtained by editing the G-code, modifying the extrusion flow according to the local grayscale defined in a volumetric geometric model. The main advantages are the absence of the lattice geometric model and the consistency of the toolpath, overcoming the previously highlighted limitations. The two main aims are to avoid geometric model manipulation and toolpath inconsistency; the objective is thus to make the CAD and CAM processes interact, taking advantage of both.

2 Background Standard workflow in AM starts from a CAD model and finishes with a 3D printed physical object of the CAD model. In between there are various phases involving CAD modeling, polygonization of the CAD model, slicing operations, and toolpath generation, resulting in a CNC file usually called G-code file [3, 4]. Several approaches for the geometric modeling of lattices, based on different representation schemes, are available in the literature [5–7]. Among these, some approaches present advantages and have been employed more frequently than others. This is the case of direct polygonal mesh modeling and implicit (or function-based) modeling approaches. An example of the first one is given by Savio et al. [8]; as described it is possible to model beam and shell-like structures using low-poly meshes and if needed use a subdivision algorithm to smooth the whole mesh. In this manner minimal surfaces lattices can be also generated by successive approximations and the approach can be extended to graded lattices [9]. On the other hand, implicit modeling represents a flexible solution for 3D modeling, especially for lattice structures [10]. Several studies have shown the potentiality of implicit modeling for generating relative density graded, cell size graded and multi-morphology lattices [11–13]. From an AM point of view, the principal drawback of this approach consists in the conversion, that is needed at some point of the AM workflow, from the function representation into an auxiliary B-rep, such as a mesh (e.g. isosurface polygonization), for subsequent slicing operations. To avoid the auxiliary polygonized representation, the main solutions are the conversion into a stack of raster images, representing each layer in the Z direction, at the machine resolution (which is an acceptable input for some


AM technologies) or directly calculating the machine toolpath from the implicit model [10, 14, 15]. For this reason, some authors investigated the possibility of modeling the machine toolpath directly by using geometric entities different from meshes, such as points and curves, especially in the case of MEX technologies [5]. To create lattice-based functional objects, it is possible to operate on several levels of complexity, and properties related for instance to geometry, material, or process characteristics. The geometric characteristic can be tuned by operating on the unit cell shape, the unit cell size, or the unit cell relative density. The material characteristic can be tuned by modifying the material microstructure or by using more than one material, thus changing the composition over the volume. At last, the process can influence the final object by tuning the manufacturing parameters, for instance, temperature or speed, which influence several characteristics such as actual flow rate, the accuracy of the part, adhesion among layers, and anisotropy. Above all other geometric characteristics, the relative density can be considered the primary design variable for AM lattices [13]. Since one of the main objectives is the determination of structure-property relationships to control the functionality of a component, the physical properties of lattices are influenced mainly by their relative density, which is reflected in the well-known scaling laws of Gibson and Ashby [16]. Relative density is also equivalent to solid volume fraction, which represents the solid volume of the lattice with respect to the overall volume of the space it occupies. Several software tools have been developed to allow precise control of the lattice modeling and the AM process for specific applications from bioprinting to ceramics, or to translate vector graphics into toolpath and to post-process G-code [4, 8, 9, 13]. Among all the software tools, a similar idea to the one proposed in this work was described by Moetazedian et al. [17]. The work is based on the concept of avoiding CAD modeling, subsequent slicing, and using continuous deposition paths with varying extrusion width as opposed to fragmented ones with constant extrusion width, to improve the quality of manufactured parts by eradicating voids and defects. To demonstrate the approach in [17], the same research team developed and published a free open-source software called FullControl GCode Designer [4]. In this tool, the user defines every segment of the toolpath along with all printing parameters, which may be related to geometric and nongeometric factors, at all points along the toolpath. Similarly, Silkworm [18] is an open project released as a plugin in Grasshopper/Rhinoceros (McNeel & Associates, USA) capable to translate non-solid geometry, i.e., surfaces, curves, and points and further, into G-code for 3d printing purposes. A tool acting directly on the G-code file by postprocessing operations is Velocity Painting [19]. It’s a free open-source script written in perl language that, when given an image file and a G-code file, post-processes the G-code to change print speeds according to the intensity of the image pixels, projecting the image onto the skin of the physical model. This tool was used in [17] to generate graded patterns. 
Suggested by the weaknesses and advantages of the approaches and software for lattice modeling and manufacturing found in the literature, the proposed method tries to catch the best of both worlds with an 'in-between' solution: CAD modeling operations are used for the B-rep macro-shape design, CAM software is used for the infill modeling, allowing an easy lattice design directly on the G-code, and CAD volumetric approaches
are used for the G-code editing. In this way, highly complex graded lattice geometric modeling is bypassed, avoiding intricate implicit or B-rep representations.

3 Modeling Method

The proposed method consists of several steps, as Fig. 1 depicts. Firstly, a bulk object is geometrically modeled with a commercial CAD tool. After that, the 3D model is used for two operations: the CNC file (G-code) computation and the discrete volumetric model definition, i.e., a virtual model in which each voxel has a scalar value.

On one side, the bulk object is sliced using an open-source CAM software, such as Slic3r, in which, together with the process parameters, the infill pattern and density are also defined. Among the infill patterns, gyroid infill or other 3D shell-based shapes are chosen. Infill density, on the other hand, defines the relative density of the pattern and directly influences the unit cell size of the chosen pattern. By controlling these two parameters, the shell-based lattice can be computed by the CAM software directly in the toolpath. As a result, the file containing the CNC programming language for additive manufacturing (the G-code file) is generated.

On the other side, a volumetric map of scalar values is modeled, which is used to drive the 'thicknesses' of the shell lattice. In this work, the map comes from the colors of a 2D image, which are firstly extracted from the grayscale version of the image and then 'projected' into the volumetric array. This process is accomplished in Grasshopper, a Rhinoceros (McNeel & Associates) plugin.

The third step consists in modifying the extrusion parameter 'E' of the infill path in the G-code file. The 'E' values are computed as a function of the volumetric map. A script has been developed in the Python language to realize this set of operations: it parses the G-code file, finds the 'E' values of the infill, which represent the amount of extruded material along the toolpath, calculates the new 'E' values based on the image color data, replaces them, and finally reassembles the modified G-code file. Finally, the part can be manufactured using a desktop FDM 3D printer with a direct-drive configuration to reduce the delay [17]. The calculation of the new 'E' values is based on a trilinear interpolation among the color values stored in the volumetric model, considering the centroid coordinates of the voxels and the actual 'E' values 'stored' at each starting point of the segmented toolpath.

To summarize, the following steps are implemented to obtain an image embedded in the part infill:
1. CAD modeling: model the bulk object.
2. CAM computation: generate the toolpath and the G-code file.
3. Volumetric modeling: convert a grayscale image into a voxel-based volumetric model (each voxel has a relative density value in the range 0–1 derived from the projected image).
4. G-code editing: modify the extruded material based on the volumetric model density map.
5. Manufacturing.



Fig. 1. Flowchart with the macro phases of the proposed method.

3.1 G-code Computation

The G-code is computed by the Slic3r 1.30 software, using the process parameters summarized in Table 1. As infill pattern, the gyroid (sheet-based) was selected due to its quasi-isotropic mechanical behavior [20]. The constant infill density was selected to allow a unitary relative density after the G-code editing process and was established by empirical experiments based on the other process parameters (layer thickness, infill speed, nozzle diameter, extrusion temperature).

Table 1. Constant process parameters

Layer thickness [mm] | Infill pattern | Infill speed [mm/s] | Nozzle diameter [mm] | Filament diameter [mm] | Bed temperature [°C] | Extrusion temperature [°C]
0.2 | Gyroid | 25 | 0.4 | 1.75 | 55 | 215

3.2 Volumetric Modeling The volumetric model is constructed starting from a single (2D) image. The RGB image is converted into a vector of normalized grayscale values in the range 0–1 that can be interpreted as a relative density map. This vector is obtained by sampling the image with a regular 2D grid of points (sampling array). The number of sampling points nX and nY in the 2D grid can correspond to the image resolution, so in the x direction, nX is the image resolution width and in the y direction, nY is the image resolution height. nX and nY are also the number of voxels in x and y directions. The voxel size (vS) is then set, for instance, based on the bounding box of the B-rep CAD model (Fig. 2). The number of voxels in the z direction, nZ, is calculated according to the volumetric model height. The volumetric representation is obtained by repeating the normalized grayscale vector nZ times (Fig. 3). Finally, the volumetric model is aligned with the B-rep CAD model.
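A minimal sketch of this volumetric construction, assuming the image is read with Pillow and the voxel array handled with NumPy (the published implementation runs inside Grasshopper); the file name and box sizes mirror the case study of Sect. 4.1.

import numpy as np
from PIL import Image

img = Image.open("drop.png").convert("L")          # hypothetical grayscale source image
nX, nY = img.size                                  # sampling grid = image resolution
density_2d = np.asarray(img, dtype=float) / 255.0  # normalized relative densities in 0-1

box_x, box_z = 80.0, 2.4                           # bulk object sizes [mm], as in Sect. 4.1
vS = box_x / nX                                    # voxel size from the bounding box width
nZ = max(1, round(box_z / vS))                     # number of voxel layers along z

# repeat the 2D map nZ times to obtain the (nZ, nY, nX) volumetric model
volume = np.repeat(density_2d[np.newaxis, :, :], nZ, axis=0)
print(f"voxel size {vS:.2f} mm, array shape {volume.shape}")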


Fig. 2. The grayscale values visualized as points sampled from the color image in the top left, and a particular of the grid with voxel parameters reported (vS = vSx = vSy).

Fig. 3. 3D array visualization with nZ = 4. Single projection a) on first layer and b) on last layer; multiple projections c) 2 mid-layers and d) completely filled array (repeated nZ times).

3.3 Editing G-code

A G-code file contains the instructions as a list of lines that are read sequentially by a CNC or an FDM machine. Most of the lines in a G-code are motion instructions and are expressed by the following syntax: G1 X# Y# Z# E# F#. "X, Y, and Z" identify the Cartesian coordinates of the next point to reach. "E" is the extrusion parameter; it defines the amount of filament to be fed into the extruder while moving between two consecutive points of the toolpath. By simple considerations, it is possible to demonstrate that, for each i-th position of the toolpath in the infill portion:

ΔE_e,i = ΔE_s,i · (ρ_e,i / ρ_s)    (1)

where ΔE_e,i is the amount of material needed (i.e., the value of E to substitute in the G-code during the editing phase at the i-th point of the toolpath), ΔE_s,i is the amount of material calculated during the slicing process, ρ_s is the infill density adopted in the slicing process, and ρ_e,i is the required relative density at the i-th point, obtained by a trilinear interpolation of the volumetric model. Furthermore, in the computation of ΔE_e,i it is necessary to consider the minimum relative density ρ_e,min allowing for a consistent part, taking into account the layer thickness. Again, this value was empirically estimated by experiments and, as a first attempt, calculated to obtain an extrusion width Ew equal to the layer thickness. To simplify the G-code editing and avoid the calculation of ΔE_s,i, relative E coordinates are used in the G-code instead of absolute ones. To edit the G-code, a Python script was implemented. Here is the pseudocode:

read the G-code lines
for each line:
    if available, extract X, Y, Z, E
    if line is infill:
        calculate ρ_e,i from the volumetric model by a trilinear interpolation on the X, Y, Z coordinates
        if ρ_e,i < ρ_e,min: ρ_e,i = ρ_e,min
        compute new_E by Eq. 1
        substitute new_E in the G-code line
write the G-code
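The following runnable sketch expands the pseudocode under a few assumptions: relative E values in the G-code, an infill toolpath marked by a ';TYPE:' comment (the exact tag depends on the slicer), and SciPy for the trilinear interpolation; the volumetric model of Sect. 3.2 is replaced here by a placeholder array.

import re
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder volumetric model (nZ, nY, nX) and its voxel-centroid axes [mm]
volume = np.random.rand(4, 142, 142)
vS = 0.56
z_ax = (np.arange(volume.shape[0]) + 0.5) * vS
y_ax = (np.arange(volume.shape[1]) + 0.5) * vS
x_ax = (np.arange(volume.shape[2]) + 0.5) * vS

rho_s = 0.35      # infill density used during slicing
rho_min = 0.10    # minimum relative density for a consistent track (assumed value)
rho = RegularGridInterpolator((z_ax, y_ax, x_ax), volume,
                              bounds_error=False, fill_value=rho_min)

coord = re.compile(r"([XYZE])(-?\d*\.?\d+)")
pos = {"X": 0.0, "Y": 0.0, "Z": 0.0}
in_infill = False
out = []

with open("part.gcode") as f:                      # hypothetical sliced file, relative E mode
    for line in f:
        if line.startswith(";TYPE:"):              # slicer-dependent section comment
            in_infill = "infill" in line.lower()
        if line.startswith("G1"):
            vals = dict(coord.findall(line))
            pos.update({k: float(v) for k, v in vals.items() if k in pos})
            if in_infill and "E" in vals:
                rho_e = max(float(rho((pos["Z"], pos["Y"], pos["X"]))), rho_min)
                new_e = float(vals["E"]) * rho_e / rho_s          # Eq. 1
                line = coord.sub(lambda m: f"E{new_e:.5f}"
                                 if m.group(1) == "E" else m.group(0), line)
        out.append(line)

with open("part_graded.gcode", "w") as f:
    f.writelines(out)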

4 Case Studies and Discussion

In the following section, two case studies are presented. Starting from a 2D image, it is projected into a volumetric array; thus, a 2.5D volumetric 'pattern' can be realized by repeating the same image along the Z-axis or by using two or more images. In this sense, the extension to 3D is relatively simple: with volumetric modeling approaches it is possible, for instance, to blend different images along the z direction. Alternatively, the volumetric model can be derived from a stack of medical images obtained from a CT scan.

4.1 Single Image

The image used has a resolution of 142 × 142 pixels. The bulk object is a box of 80 × 80 × 2.4 mm. The voxel size is 0.56 mm (isotropic), with nX = 142, nY = 142 and nZ = 4. The grayscale values for a single layer are 20164, so 80656 values are needed to completely fill the array. The values are projected into the 4 layers of the array (Fig. 3d). The infill pattern (i.e., the lattice structure) is a gyroid with a cell size of 3.25 mm (ρs = 0.35) for an Ew range of 0.3–1.6 mm (Fig. 4a, Ew,min = black color, Ew,max = white color). The other process parameters are: printing temperature 215 °C, infill printing speed 25 mm/s, 3 external perimeters, no bottom and top layers. The printed object, made of white PLA, is shown in Fig. 4b.


Fig. 4. Main process phases visualization: a) particular of the infill pattern lines with extrusion width values visualized (Ew,min = black color, Ew,max = white color); b) printed component.

4.2 Multiple Images

The two images (Fig. 5a) used in this case study have different resolutions. The "drop" image of the previous example is 142 × 142 pixels, whereas the symbol image (known as "Triskelion") is 232 × 232. The sampling array is obtained using nX = 142, nY = 142. The bulk object is a box of 80 × 80 × 2.4 mm. Using the resolution of the first image, the voxel size is 0.56 mm (isotropic), and nZ is consequently equal to 4. The grayscale values of the drop image are projected into the 4th layer of the array, the Triskelion values are projected onto the 2nd layer, and the remaining layers are filled with 0 values. The grayscale values are inverted with respect to the previous example. The other parameters remain unchanged from the previous case study. The printed object is shown in Fig. 5b, c, in which it is possible to appreciate both images, one on the top side and the other on the bottom.

Fig. 5. a) Images used in this case study; b) printed component top side and c) bottom side.


4.3 Discussion The proposed approach is effective in the introduction of images and embedded information in graded lattices and can be easily extended to other functional requirements, such as structural or thermal optimized infill. The delay between the signal and the actual extrusion represents the main issue at this moment, as visible in Fig. 4b and Fig. 5b, c for adjacent paths in which the toolhead travels in opposite directions; the variation in extrusion width occurs after some length/time, causing the pattern to be out of phase. This is due to the intricate relations between process parameters and material flow [21, 22]. Even if the direct-drive configuration of the printer permits relatively low delay if compared to a bowden configuration, the problem is visible and can cause a discrepancy between the designed model and the final object. As stated in the introductory section, avoiding the generation of a CAD model can be useful since modeling operations can be difficult even for an experienced designer. For example, boolean operations are not always performed successfully. Once the model is obtained, the slicing operations are carried out. It is worth noting that for a lattice CAD model, and especially in the case of graded lattice structures, this step is not trivial and can lead to the absence of portions of the toolpath, depending on the specific slicer software used. Based on the slicer software implementation, several attempts may be required to achieve acceptable results by carefully modifying the process parameters. Slicer software also show other limits such as the constant extrusion width, so only discrete density variations in MEX are allowed. For these reasons, in the proposed method, slicing is performed as the initial operation, therefore the generated toolpath which models the lattice structure is always consistent because it is not related to a lattice mesh model but to the process parameters. As a disadvantage, conformal lattice structures and lattices of variable size and topology cannot be obtained without other complex implementations. Moreover, since there is no CAD model, no analyses and simulations can be performed.

5 Conclusions

In this work, a new method for modeling and fabricating variable-density lattices is proposed, trying to go beyond the main issues highlighted in the literature related to design for AM and traditional CAD/CAM software. In this approach, only a B-rep model of the object is needed. The main tasks take place directly on the G-code file as post-processing operations. By operating on the infill parameters, the lattice structure is generated and controlled. By applying a volumetric model generated from one or more images, the thickness of the infill lattice is locally tuned. The proposed method presents several advantages. First, the lattice is modeled directly as an infill pattern during slicing operations, thus the geometric model of the lattice is not needed. In this way, the G-code is obtained more reliably, and inconsistent toolpaths are avoided. Second, the number of files and phases required from design to fabrication of a functional lattice, and the modeling effort, are lower than in standard workflows, avoiding some of the file exchange issues.


The proposed method can find several applications from a modeling standpoint: in artworks, embedded information on components, and functional 3D printed parts, especially in the biomedical field, e.g., the replication of the density map of a bone derived from a DICOM grayscale image to study bone-related diseases. Future works will be aimed at testing the method on other patterns, identifying strategies allowing for conformal infill. Acknowledgments. This work was partially funded by grant ‘BIRD 190850’ by the Department of Civil, Environmental, and Architectural Engineering, University of Padova.

References
1. Tang, Y., Zhao, Y.F.: A survey of the design methods for additive manufacturing to improve functional performance (2016). https://doi.org/10.1108/RPJ-01-2015-0011
2. Nazir, A., Abate, K.M., Kumar, A., Jeng, J.-Y.: A state-of-the-art review on types, design, optimization, and additive manufacturing of cellular structures. Int. J. Adv. Manuf. Technol. 104(9–12), 3489–3510 (2019). https://doi.org/10.1007/s00170-019-04085-3
3. Gibson, I., Rosen, D., Stucker, B., Khorasani, M.: Additive Manufacturing Technologies. Springer International Publishing, Cham (2021). https://doi.org/10.1007/978-3-030-56127-7
4. Gleadall, A.: FullControl GCode Designer: open-source software for unconstrained design in additive manufacturing. Addit. Manuf. 46, 102109 (2021). https://doi.org/10.1016/j.addma.2021.102109
5. Giannitelli, S.M., Accoto, D., Trombetta, M., Rainer, A.: Current trends in the design of scaffolds for computer-aided tissue engineering. Acta Biomater. 10, 580–594 (2014). https://doi.org/10.1016/j.actbio.2013.10.024
6. Savio, G., Rosso, S., Meneghello, R., Concheri, G.: Geometric modeling of cellular materials for additive manufacturing in biomedical field: a review. Appl. Bionics Biomech. 1–14 (2018). https://doi.org/10.1155/2018/1654782
7. Feng, J., Fu, J., Lin, Z., Shang, C., Li, B.: A review of the design methods of complex topology structures for 3D printing. Vis. Comput. Ind. Biomed. Art 1(1), 1–16 (2018). https://doi.org/10.1186/s42492-018-0004-3
8. Savio, G., Meneghello, R., Concheri, G.: Geometric modeling of lattice structures for additive manufacturing. Rapid Prototyp. J. 24, 351–360 (2018). https://doi.org/10.1108/RPJ-07-2016-0122
9. Savio, G., Meneghello, R., Concheri, G.: Design of variable thickness triply periodic surfaces for additive manufacturing. Prog. Addit. Manuf. 4(3), 281–290 (2019). https://doi.org/10.1007/s40964-019-00073-x
10. Pasko, A., Fryazinov, O., Vilbrandt, T., Fayolle, P.-A., Adzhiev, V.: Procedural function-based modelling of volumetric microstructures. Graph. Models 73, 165–181 (2011). https://doi.org/10.1016/j.gmod.2011.03.001
11. Al-Ketan, O., Lee, D.-W., Rowshan, R., Abu Al-Rub, R.K.: Functionally graded and multi-morphology sheet TPMS lattices: design, manufacturing, and mechanical properties. J. Mech. Behav. Biomed. Mater. 102, 103520 (2020). https://doi.org/10.1016/j.jmbbm.2019.103520
12. Al-Ketan, O., Abu Al-Rub, R.K.: MSLattice: a free software for generating uniform and graded lattices based on triply periodic minimal surfaces. Mater. Des. Process. Commun. 3, e205 (2021). https://doi.org/10.1002/mdp2.205
13. Maskery, I., Parry, L.A., Padrão, D., Hague, R.J.M., Ashcroft, I.A.: FLatt Pack: a research-focussed lattice design program. Addit. Manuf. 49, 102510 (2022). https://doi.org/10.1016/j.addma.2021.102510
14. Maltsev, E., Popov, D., Chugunov, S., Pasko, A., Akhatov, I.: An accelerated slicing algorithm for FRep models. Appl. Sci. 11, 6767 (2021). https://doi.org/10.3390/app11156767
15. Popov, D., Maltsev, E., Fryazinov, O., Pasko, A., Akhatov, I.: Efficient contouring of functionally represented objects for additive manufacturing. Comput. Des. 129, 102917 (2020). https://doi.org/10.1016/j.cad.2020.102917
16. Gibson, L.J., Ashby, M.F.: Cellular Solids: Structure and Properties. Cambridge University Press, Cambridge (1997). https://doi.org/10.1017/CBO9781139878326
17. Moetazedian, A., Budisuharto, A.S., Silberschmidt, V.V., Gleadall, A.: CONVEX (CONtinuously Varied EXtrusion): a new scale of design for additive manufacturing. Addit. Manuf. 37, 101576 (2021). https://doi.org/10.1016/j.addma.2020.101576
18. Holloway, A., Mamou-Mani, A., Kjelstrup-Johnson, K.: Project Silkworm. https://projectsilkworm.com/. Accessed 23 Feb 2021
19. Wheadon, M.: Velocity Painting. https://github.com/MarkWheadon/velocity-painting. Accessed 15 Dec 2021
20. Li, D., Liao, W., Dai, N., Xie, Y.M.: Comparison of mechanical properties and energy absorption of sheet-based and strut-based gyroid cellular structures with graded densities. Materials 12, 2183 (2019). https://doi.org/10.3390/ma12132183
21. Turner, B.N., Strong, R., Gold, S.A.: A review of melt extrusion additive manufacturing processes: I. Process design and modeling. Rapid Prototyp. J. 20, 192–204 (2014). https://doi.org/10.1108/RPJ-01-2013-0012
22. Turner, B.N., Gold, S.A.: A review of melt extrusion additive manufacturing processes: II. Materials, dimensional accuracy, and surface roughness. Rapid Prototyp. J. 21, 250–261 (2015). https://doi.org/10.1108/RPJ-02-2013-0017

Geometric Modeling of Lattice Structures with Selectively Activated Walls for Hydraulic Manifolds

Stefano Rosso1(B), Luca Grigolato1,2, Gianmaria Concheri1, and Gianpaolo Savio1

1 Department of Civil, Environmental and Architectural Engineering, University of Padova, Padova, Italy
[email protected]
2 Department of Industrial Engineering, University of Padova, Padova, Italy

Abstract. This work aims to propose a novel geometric modeling method to obtain lattice structures with internal walls and external skins that can be selectively activated. Internal walls can separate two adjacent cells, locally increase the stiffness of the component, and generate internal ducts; external walls are used to strengthen the entire structure and create a division from the outside. The proposed approach models a beam-based cellular structure with the introduction of internal walls according to an activation pattern that indicates whether a cell is communicating with the adjacent one through their connecting faces or not. The data structure describes the topology of the subdivision surface control polygon. The proposed method is then applied to a case study based on the hydraulic manifold applications. The possibility of building custom internal channels is exploited, with the advantage of obtaining smooth surfaces at the direction changes, with lower pressure drops, and a lightweight component due to the lattice structure that surrounds the channels. The resulting structure has a complex geometry that perfectly suits the manufacturing capabilities of additive manufacturing technologies. Keywords: Additive manufacturing · Geometric modeling · Lattice structures · Closed cells · Hydraulic manifold

1 Introduction

Additive manufacturing (AM) technologies have increased the freedom in creating parts with complex geometries, previously impossible to obtain with traditional manufacturing techniques. This is due to the AM approach of building the part layer by layer, adding material instead of removing it. Besides the unprecedented level of shape complexity, AM has several advantages compared to conventional technologies, among them the production of highly customized objects, the creation of multi-material parts, the consolidation of assemblies through the reduction of the number of components, simpler supply chains, and shorter lead times [1, 2].


Among the complex shapes, cellular solids, also known as lattice structures, received attention from both the academic and the industrial fields. They can be designed to have high specific stiffness or to absorb and dissipate energy by undergoing large deformations; depending on the case, the structures are respectively defined as stretchingdominated or bending-dominated [3]; more, lattice structures have good heat exchanging properties due to the large surface area with respect to the occupied volume, and a large number of internal pores guarantee acoustic insulation too [4]. Lattice structures can be made up of strut-like elements, shell-based elements, or hybrid configurations characterized by both beam and surface elements. The surfaces can act as skins and walls, as shown in Fig. 1; adding an external skin, i.e., a surface that connects the beams at the extremities of the structure, increases the stiffness of the part and isolates the part from the outside preventing the entrance of undesired material and/or the leakage of internal liquids [5]. Internal walls are defined as surfaces between two internal adjacent cells that hinder the communication between these cells and locally play the same role as the skin; for instance, internal walls can be arranged to create internal ducts in which one or more fluids can flow acting as heat exchangers or materials transporters; furthermore, introducing or removing walls along the component allows changing its stiffness properties.

Fig. 1. Example of external skins and internal wall in a simplified two-cell lattice structure.

The studies in the literature that deal with lattice structures containing both beams and skins are limited, and many of them only address the external skin integration at the boundary. Aremu et al. [5] implemented a geometric modeling method for conformal lattice structures based on voxels; the approach involves the tessellation and trimming of the unit cells and, as a drawback, produces hanging struts at the boundary; as a solution, a so-called “net-skin”, i.e., a 2D surface lattice structure connecting all the hanging elements is modeled. Similarly, Woodward and Fromen [6] proposed a design strategy for uniform lattice structures with an open and conformal skin at the boundary; the method simultaneously addresses the voxelization of the entire volume and the boundary, generates the internal structure and the skin, and finally combines them for the generation of the final mesh through a marching cube algorithm. Tang et al. [7, 8] proposed a method for modeling optimized lattice structures where functional volumes (FVs) and functional surfaces (FSs) are identified; FSs are surfaces that fulfill a functional requirement and FVs are volumes that combine FSs and assist them in accomplishing their objectives. FVs can remain solid or can be filled with lattice structures obtaining, as a final model,


a lattice with external skins. Liang et. al [9] developed a design algorithm based on the volumetric distance field (VDF) method able to construct conformal lattice models with external skins; the VDF consists of implicit modeling, is based on the computation of level sets of field functions, and avoid the complexity and issues associated with explicit solid modeling, such as Boolean operations [10]. As previously stated, the capability of adding internal walls could be implemented in all the applications in which internal channels are needed. Among them, hydraulic manifold blocks are found. Hydraulic blocks are manifolds used to distribute and regulate the flow between inlet and outlet valves in hydraulic systems; they simplify the design and assembly of the systems, make the systems more integrated, and improve their stability and precision [11]. They are mainly produced by machining from solid through traditional manufacturing technologies such as milling and drilling, so the shape of the internal ducts is characterized by sharp bends that lead to pressure drops and energy losses [12]. Several works identified the AM and its capabilities as a suitable manufacturing technique for enhancing the design of hydraulic manifolds. Liu et al. [12] performed computational fluid dynamics analyses on different typical passages in manifold showing that lofted connections with smooth transitions (only manufacturable by AM) can reduce pressure losses and energy consumption up to 50%. Olaf et al. [13] redesigned a hydraulic block exploiting the Design for AM guidelines obtaining a weight reduction of the 91% with respect to the original component. Alshare et al. [14] optimized an existing hydraulic manifold reaching an 84% mass reduction and a 21% reduction of pressure drop. In this context, the present work proposes a novel geometric modeling method based on subdivision surfaces that allows obtaining beam-based lattice structures with the possibility of selectively introducing internal walls in desired zones of the lattice, overcoming the lack of methods in the literature able to perform such tasks, and expanding the geometric design possibilities for cellular structures. Moreover, the proposed approach is applied to a case study inspired by the hydraulic manifold applications, where it is possible to obtain blocks with internal ducts characterized by smooth transitions at direction changes and surrounded by cellular structures offering lightweight parts.

2 Geometric Modeling Method for Lattice Structures with Internal Walls Figure 2 shows the workflow that describes the geometric modeling method. A design space is filled with a lattice structure based on the simple cubic unit cell; as inputs, the number of cells along the three directions, nx , ny , and nz , or, alternatively, the size of the single cell, and an activation pattern for the internal walls are given; the activation pattern depends on the specific application and since the present study deals with internal channels, lines describing the path are provided. The lines correspond to the axes of the channels if the component already exists or can be manually sketched inside the design space according to the geometry of the ducts in the case of a new component. A data structure is then created in which the lattice is organized as a 3D matrix (see Fig. 3) with each position containing the unit cells and their main information, such as the coordinates of the eight vertexes and six Boolean variables indicating whether the faces will be activated as internal walls. It is possible to move through the cells and


Fig. 2. Workflow of the proposed method for geometric modeling lattice structures with selectively activated internal walls.

“visit” them by retrieving their position inside the 3D matrix; each cell is identified by three indexes, i, j, and k, that indicate the relative position along the three directions. The indexes are defined as in Eq. 1:

0 ≤ i ≤ nx, 0 ≤ j ≤ ny, 0 ≤ k ≤ nz    (1)

Fig. 3. Visualization of the data structure adopted by the proposed modeling method. The numbers represent the index/position of a cell.
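A minimal sketch of this data structure, using a Python dataclass in place of the IronPython implementation inside Grasshopper; names and sizes are illustrative.

from dataclasses import dataclass, field

FACES = ("front", "back", "left", "right", "top", "bottom")

@dataclass
class Cell:
    i: int                                  # position in the 3D matrix (Eq. 1)
    j: int
    k: int
    vertices: list                          # eight (x, y, z) corner coordinates
    # face_open[f] = True: a path line crosses face f, so no internal wall is modeled there
    face_open: dict = field(default_factory=lambda: dict.fromkeys(FACES, False))

nx, ny, nz, cs = 5, 6, 3, 20.0              # grid of the case study, 20 mm cell size
lattice = [[[Cell(i, j, k,
                  [((i + a) * cs, (j + b) * cs, (k + c) * cs)
                   for c in (0, 1) for b in (0, 1) for a in (0, 1)])
             for k in range(nz)]
            for j in range(ny)]
           for i in range(nx)]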

“True” or “False” values describing the activation status are obtained by checking the intersection events between the path lines and the faces of the cells of the lattice. Figure 4 shows different cases in which path lines intersect the faces of a cell; for the sake of clarity, the surrounding cells are not represented. A “True” value indicates an


intersection thus no internal wall is going to be modeled on that particular face because the channel requires an open path. A “False” value, instead, indicates no intersection and a wall is going to be modeled. The cells in Fig. 4a, b, and c have respectively six, three, and two “True” values indicating intersections and no walls. Depending on the shape of the channels, it is possible to obtain a closed cell with activated walls in all the six faces, as in Fig. 4d. The resulting cell has a closed pore that can be maintained or removed depending on the AM technology adopted for the production; for instance, if using a fused filament fabrication method, the closed cell can be produced with no issues, but if a vat photopolymerization or a powder bed fusion technology is adopted, the material resin or powder would remain trapped inside. The effect of captured powder was proved to be beneficial allowing for increased structural damping [15], so a geometry presenting closed cells could be desired and used on purpose.

Fig. 4. Walls activation depending on the path-lines: a) no skins, i.e., all paths are open, b) and c) selectively activated skins, and d) closed cell with internal void.

The geometric modeling of the lattice structure is based on a boundary representation and proceeds starting from the first unit cell and moving on to the next cells by iterating the i, j, and k indexes along the x-, y-, and z-directions, respectively. On every cell, a "look forward, look sideward, and look upward" (LFLSLU) approach is adopted, checking only three of the six adjacent cells sharing a face, as in Fig. 5. In this way, the double-checking of the same situation on a pair of cells is avoided and computational time is reduced; for example, considering the forward movement, i.e., along the x-direction, the intersection with a path-line is only evaluated on the front face of the red cell, and not on the back face of the blue cell on the next iteration step, since the result would be the same because the connecting faces/wall are shared. The implementation of the method simultaneously performs the modeling of the beams and the skins. Three beams described by four quad-faces are modeled for each internal cell, one aligned to the x-axis, one aligned to the y-axis, and one aligned to the z-axis, exploiting the coordinates of the nodes of the current cell and the adjacent ones according to the LFLSLU approach. Similarly, the internal walls are modeled by checking the Boolean value indicating the activation status of three of the six faces, and in the case of a "False" value for a cell face, two quad faces are modeled, closing the connecting faces of two adjacent cells. When a wall is present, four faces belonging to the beams are not modeled, to avoid a non-manifold structure.
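Continuing the sketch above, the LFLSLU traversal could look as follows; intersects() is a stub standing for the actual line-face intersection test on the path lines.

def intersects(cell_a, cell_b, path_lines):
    # Stub: a real implementation tests the path lines against the face shared by the two cells
    return False

NEIGHBORS = {"front": (1, 0, 0), "right": (0, 1, 0), "top": (0, 0, 1)}
OPPOSITE = {"front": "back", "right": "left", "top": "bottom"}
path_lines = []                                    # polylines describing the channel axes

# Look forward, sideward and upward: every shared face is evaluated exactly once
for i in range(nx):
    for j in range(ny):
        for k in range(nz):
            cell = lattice[i][j][k]
            for face, (di, dj, dk) in NEIGHBORS.items():
                ni, nj, nk = i + di, j + dj, k + dk
                if ni == nx or nj == ny or nk == nz:
                    continue                       # boundary faces: external skin, handled apart
                nb = lattice[ni][nj][nk]
                crossed = intersects(cell, nb, path_lines)
                cell.face_open[face] = crossed
                nb.face_open[OPPOSITE[face]] = crossed   # mirror the result, no double check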


Fig. 5. Look forward, look sideward, and look upward (LFLSLU) approach: a) the current single cell (in red) and the three considered adjacent cells (in blue) inside a simplified lattice structure, b) Interaction between the cells.

The method is implemented in Rhinoceros 7 CAD 3D software (Robert McNeel & Associates) inside the visual programming environment Grasshopper in which an IronPython code is written. The vertexes, edges, and faces, i.e., the topology of the structure, are used as data for the definition of SubD surfaces, defined as Catmull-Clark subdivision surfaces [16] with the standard extensions to support creases and corners described by Biermann et al. [17]. Creases can be added to the edges to maintain sharp edges and avoid smoothing.

3 Case Study: Hydraulic Manifold The proposed geometric modeling method is applied to a case study inspired by the hydraulic manifold components. The block can be considered a technological demonstrator and includes internal ducts with different types of forks and splits. Figures 6a, b, and c show the geometry of the channels and the relative polylines used as the input to the workflow, whereas Fig. 6d shows the arrangement of the channels inside the manifold block. It is important to highlight that some of the geometries of the channels, i.e., the internal direction changes that do not flow towards the outside, cannot be directly realized with traditional manufacturing technologies but require drilling operations from the outside and, at a later time, are plugged. The design space of the manifold is a box with overall dimensions of 100 mm × 120 mm × 60 mm. The size of the internal channels is 15 mm. A traditional manifold block (Fig. 7a) is obtained by manual modeling through Boolean operations and maintaining the sharp edges of the channels visible in the section view of Fig. 7b, reflecting the angled junctions resulting from a traditional manufacturing production through machining from solid.


Fig. 6. Design of the channels adopted in the case study: a), b), and c) channels with polylines; d) channels arrangement inside the manifold block.

The manifold in Fig. 8a, instead, is modeled applying the proposed modeling method described in Sect. 2. The method automatically models the hydraulic manifold and its internal ducts, without the need for manual interventions of a CAD operator. As inputs of the workflow in Fig. 2, a 100 mm × 120 mm × 60 mm box is given as design space and a 20 mm cell size was selected for filling the space, thus obtaining 5 repetitions along the x-direction, 6 repetitions along the y-direction, and 3 repetitions along the z-direction; moreover the polylines describing the axes of the channels in Figs. 6a, 6b, and 6c are used for defining the activation pattern of the walls.

Fig. 7. Geometric modeling of the hydraulic manifold adopting standard Boolean operations: a) result of the manual modeling and b) section view.

At the inlets and outlets of the channels, creases at the edges are imposed to maintain sharp edges in order to be able to perform further machining operations such as tapping. Moreover, portions of solid material were preserved at the four corners on the right side of the block to tap screws for fixing the component.


Fig. 8. Geometric modeling of the hydraulic manifold adopting the proposed method: a) result of the automatic modeling and b) section view.

The design in Fig. 8a and the section view in Fig. 8b show the advantages of the proposed method. The presence of a lattice structure that surrounds the channels allows for a lightweight component; in the current case study, the volume of the traditional block is 708.8·10³ mm³, whereas the volume of the cellular one is 380.3·10³ mm³, with a reduction of 46%. Moreover, the subdivision surface modeling approach is characterized by smooth surfaces and avoids the typical sharp edges in the internal channels of the hydraulic manifolds, ensuring lower pressure drops at elbows, where direction changes occur. The continuous pressure losses of the two designs are assumed to be equivalent because the length of the channels is the same. The local pressure losses differ due to the presence of rounded corners instead of sharp ones. According to Idelchik [18], the local pressure drop h, in meters, can be estimated as in Eq. 2:

h = ξ · v² / (2·g)    (2)

where ξ is a non-dimensional resistance coefficient, v is the average velocity, in m/s, and g is the acceleration due to gravity, in m/s². For sharp elbows, ξ = 1.19, whereas for elbows with rounded corners the coefficient depends on the ratio r/D between the filleting radius r and the diameter of the channel D; in the present case study D = 15 mm and the rounded shape resulting from the subdivision surface can be approximated by a filleting radius r ≈ 2.2 mm. Thus, r/D = 0.15 and consequently ξ = 0.55. As a final step, 3/4-10 UNC thread features are applied at the inlets and outlets of the channels for further connections of the block with valves, pumps, and actuators, and four M16 thread features are applied at the four corners of the side face for the connection of the hydraulic manifold. Figure 9 shows the result. Assuming the component is produced by AM technologies, metal powder bed fusion techniques such as the selective laser melting process are suitable for the task, thanks to the possibility of manufacturing parts both in stainless steel and in aluminum alloy, for example 316 stainless steel and AlSi10Mg. The geometry does not present massive areas and, assuming an orientation of the part inspired by the orthogonal isometric axonometry, where two of the three dimensions of the block form an angle of 54° 44' with respect to the building plate and the third dimension forms an angle of 35° 16', no internal supports should be needed. Indeed, the literature showed the possibility


Fig. 9. Light-weighted hydraulic manifold with thread features.

of producing metal components that present overhanging surfaces down to 30° [19, 20] and self-supporting internal channels [21]. External support structures from the platform of the AM machine to the hydraulic manifold are necessary for anchoring the component, dissipating the heat, and preventing thermal warping during the process. These functions are internally accomplished by the cellular structure of the model.
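As a rough sanity check of the benefit quantified above, the following minimal Python sketch evaluates Eq. 2 for the sharp (ξ = 1.19) and rounded (ξ = 0.55) elbow coefficients reported in the text; the flow velocity is an illustrative assumption, not a value taken from the case study.

```python
# Local pressure drop at an elbow, Eq. 2: h = xi * v**2 / (2 * g)
# Resistance coefficients from Idelchik [18] as quoted in the text;
# the flow velocity v is an illustrative assumption.

G = 9.81  # gravitational acceleration [m/s^2]

def local_head_loss(xi, v):
    """Local head loss h [m] for resistance coefficient xi and velocity v [m/s]."""
    return xi * v**2 / (2 * G)

v = 3.0                                  # assumed average flow velocity [m/s]
h_sharp = local_head_loss(1.19, v)       # sharp elbow
h_round = local_head_loss(0.55, v)       # rounded elbow, r/D = 0.15
print(f"sharp: {h_sharp*1000:.1f} mm, rounded: {h_round*1000:.1f} mm, "
      f"reduction: {(1 - h_round/h_sharp)*100:.0f}%")
```

With these coefficients the local loss at each elbow drops by roughly 54%, regardless of the assumed velocity, which is consistent with the halving mentioned in the conclusions.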

4 Conclusions Additive manufacturing technologies enable the production of complex shapes. Among them, lattice structures have been thoroughly investigated due to their properties such as light-weighting, energy absorption, heat exchange, and acoustic insulation. In this work, a novel geometric modeling method for lattice structures was presented. The lattice is based on a beam-like structure, and the main novelty consists of the possibility of automatically introducing internal walls able to separate two adjacent faces. Through Boolean values defined by activation patterns, such as path lines or random values, all the connections between the faces can be controlled and enabled or disabled depending on the application. Moreover, the method exploits subdivision surface algorithms that allow obtaining smooth surfaces without sharp edges at nodal points, where several beams meet, and at the connections of two orthogonal walls. Compared with the works available in the literature, the proposed method expands the modeling possibilities of lattices by offering the possibility of introducing both an external skin and internal walls. This approach was then adopted in a case study inspired by hydraulic manifolds. The comparison between a standard manifold block machined from solid and the same block, i.e., with the same internal channels, modeled by applying the proposed method showed the possibility of obtaining lighter components and ducts with smoother surfaces and transitions at the direction changes. A 46% volume reduction was achieved and a


preliminary calculation showed a halving of the local pressure losses when switching from sharp elbows to elbows with rounded corners. Moreover, AM-based production enables the design of channels that could not be obtained by traditional technologies; indeed, with machining, at every change in direction the channel needs to be reached and machined from the outside, so no turns are allowed in the middle of the block, limiting the design possibilities. As future work, the geometric modeling method will be improved by introducing the possibility of modeling conformal lattice structures and filling the space with other types of unit cells. The modeling of fully dense cells will be implemented to obtain a higher local stiffness and structural resistance, and to control the position of the center of mass; this could be beneficial in applications where the dynamic behavior of the structure needs to be taken into consideration. Moreover, it is planned to include the method in a wider workflow in which several types of activation patterns are investigated and used as input; for instance, the results of structural analysis or topology optimization could help to identify the areas in which to activate the walls. Finally, process planning and simulation will be performed to assess the manufacturability of the final part, and computational fluid dynamics analyses will be carried out to evaluate the performance of the new channel geometry with smoothed corners and compare it with that of the traditional geometry with sharp elbows.

References
1. Thompson, M.K., et al.: Design for additive manufacturing: Trends, opportunities, considerations, and constraints. CIRP Ann. 65, 737–760 (2016). https://doi.org/10.1016/j.cirp.2016.05.004
2. Holmström, J., Partanen, J., Tuomi, J., Walter, M.: Rapid manufacturing in the spare parts supply chain. J. Manuf. Technol. Manag. 21, 687–697 (2010). https://doi.org/10.1108/17410381011063996
3. Ashby, M.F.: The properties of foams and lattices. Philos. Trans. R. Soc. A. Math. Phys. Eng. Sci. 364, 15–30 (2006). https://doi.org/10.1098/rsta.2005.1678
4. Gibson, L.J., Ashby, M.F.: Cellular Solids: Structure and Properties. Cambridge University Press, Cambridge (1997). https://doi.org/10.1017/CBO9781139878326
5. Aremu, A.O., et al.: A voxel-based method of constructing and skinning conformal and functionally graded lattice structures suitable for additive manufacturing. Addit. Manuf. 13, 1–13 (2017). https://doi.org/10.1016/j.addma.2016.10.006
6. Woodward, I.R., Fromen, C.A.: Scalable, process-oriented beam lattices: Generation, characterization, and compensation for open cellular structures. Addit. Manuf. 48, 102386 (2021). https://doi.org/10.1016/j.addma.2021.102386
7. Tang, Y., Kurtz, A., Zhao, Y.F.: Bidirectional Evolutionary Structural Optimization (BESO) based design method for lattice structure to be fabricated by additive manufacturing. CAD Comput. Aided Des. 69, 91–101 (2015). https://doi.org/10.1016/j.cad.2015.06.001
8. Tang, Y., Yang, S., Zhao, Y.F.: Design method for conformal lattice-skin structure fabricated by AM technologies. In: Proceedings of the ASME Design Engineering Technical Conference. American Society of Mechanical Engineers Digital Collection (2016). https://doi.org/10.1115/DETC2016-59738
9. Liang, Y., Zhao, F., Yoo, D.J., Zheng, B.: Design of conformal lattice structures using the volumetric distance field based on parametric solid models. Rapid Prototyp. J. 26, 1005–1017 (2020). https://doi.org/10.1108/RPJ-04-2019-0114


10. Pasko, A., Fryazinov, O., Vilbrandt, T., Fayolle, P., Adzhiev, V.: Procedural function-based modelling of volumetric microstructures. Graph. Models 73, 165–181 (2011). https://doi.org/10.1016/j.gmod.2011.03.001
11. Xiang, W., Chuen, C.W., Wong, C.M., Yam, L.H.: Feature representation and database schema of an object-oriented feature-based product model for hydraulic manifold blocks. Int. J. Adv. Manuf. Technol. 16, 182–188 (2000). https://doi.org/10.1007/s001700050025
12. Liu, G., Zhang, J., Xu, B.: Structure optimization for passages in hydraulic manifolds using metal additive manufacturing technology. In: Proceedings of the 2019 IEEE 8th International Conference on Fluid Power and Mechatronics (FPM), pp. 485–492. IEEE (2019). https://doi.org/10.1109/FPM45753.2019.9035877
13. Diegel, O., Schutte, J., Ferreira, A., Chan, Y.L.: Design for additive manufacturing process for a lightweight hydraulic manifold. Addit. Manuf. 36, 101446 (2020). https://doi.org/10.1016/j.addma.2020.101446
14. Alshare, A.A., Calzone, F., Muzzupappa, M.: Hydraulic manifold design via additive manufacturing optimized with CFD and fluid-structure interaction simulations. Rapid Prototyp. J. 25, 1516–1524 (2019). https://doi.org/10.1108/RPJ-03-2018-0064
15. Schmitz, T., et al.: Damping and mode shape modification for additively manufactured walls with captured powder. Precis. Eng. 66, 110–124 (2020). https://doi.org/10.1016/j.precisioneng.2020.07.002
16. Catmull, E., Clark, J.: Recursively generated B-spline surfaces on arbitrary topological meshes. Comput. Des. 10, 350–355 (1978). https://doi.org/10.1016/0010-4485(78)90110-0
17. Biermann, H., Levin, A., Zorin, D.: Piecewise smooth subdivision surfaces with normal control. In: Proceedings of the ACM SIGGRAPH Conference on Computer Graphics, pp. 113–120. Association for Computing Machinery (ACM) (2000). https://doi.org/10.1145/344779.344841
18. Idelchik, I.E.: Handbook of Hydraulic Resistance. The U.S. Atomic Energy Commission and The National Science Foundation, Washington, D.C. (1986)
19. Wang, D., Yang, Y., Yi, Z., Su, X.: Research on the fabricating quality optimization of the overhanging surface in SLM process. Int. J. Adv. Manuf. Technol. 65, 1471–1484 (2013). https://doi.org/10.1007/s00170-012-4271-4
20. Cloots, M., Zumofen, L., Spierings, A.B., Kirchheim, A., Wegener, K.: Approaches to minimize overhang angles of SLM parts. Rapid Prototyp. J. 23, 362–369 (2017). https://doi.org/10.1108/RPJ-05-2015-0061
21. Han, Q., Gu, H., Soe, S., Setchi, R., Lacan, F., Hill, J.: Manufacturability of AlSi10Mg overhang structures fabricated by laser powder bed fusion. Mater. Des. 160, 1080–1095 (2018). https://doi.org/10.1016/j.matdes.2018.10.043

Semi-automatic Point Clouds Registration for Upper Limb Anatomy

Paolo Neri, Beatrice Aruanno, Sandro Barone, Alessandro Paoli, and Armando Viviano Razionale

University of Pisa, Largo L. Lazzarino 1, 56122 Pisa, Italy
[email protected]

Abstract. In this paper, a semi-automatic procedure to perform point clouds registration is presented. The method was developed for upper limb 3D scanning. During the acquisition, several frames are acquired from different points of view, to obtain a full 360° acquisition of the arm. Each frame stores both the point clouds coordinates and the corresponding RGB image. During post-processing, the RGB image is elaborated through a neural network, to detect relevant key points of the hand, which are then projected to the point clouds. The corresponding key points detected from different acquisitions are then used to automatically obtain a rough 3D rotation that aligns the point clouds corresponding to different perspectives in a common reference frame. Finally, the registration is refined through an iterative closest point algorithm. The method was tested on actual arm acquisitions, and the registration results are compared with the conventional fully manual 3-2-1 registration procedure, showing promising results of the proposed method. Keywords: Semi-automatic registration · Upper limb 3D scan · Neural network

1 Introduction In recent years, great interest arose in the field of upper limb rehabilitation around the possibility of producing custom medical devices [1–3]. The use of bespoke devices can drastically enhance the patient's comfort and treatment effectiveness. This is a two-fold peculiarity of custom devices: the optimized design guarantees higher performance and, at the same time, the increased comfort enhances the patients' engagement in the therapy. The first step in creating a bespoke device is usually the 3D scanning of the patient's anatomy to obtain a CAD model that will guide the design process of the device. Nowadays, many 3D scanning techniques exist based on passive (such as photogrammetry) or active (such as structured light) methods [4]. Regardless of the adopted technique, acquiring complex 3D shapes (such as human anatomy) always requires acquiring several frames from different viewpoints to achieve a full 360° acquisition of the geometry. Since the raw acquired data (i.e., point clouds) for each frame are measured in the scanner reference frame, the acquisitions from different perspectives need to be aligned in a common reference frame to reproduce the whole geometry [2]. This process is generally


referred to as point clouds registration. In many applications, the scanning procedure is performed on a plaster reproduction of the studied geometry [5]. In this scenario, several robust approaches can be adopted to speed up and automatize the registration process, such as the use of rotary tables, which guarantee a precise and known placement of the object with respect to the scanning device. On the other hand, many practical situations do not allow for the use of plaster reproductions, which would be time-consuming and would negatively impact the patient's comfort during the process. In these cases, real-time scanning devices are adopted to directly measure the patient's anatomy, since they can minimize the presence of artifacts due to involuntary movements. In this context, the introduction of consumer-grade depth cameras (RGB-D cameras) [6] has given a significant impulse to the definition of affordable hand-held 3D scanners for biomedical applications [7–9]. These devices are moved around the target anatomy to collect 3D data from different viewpoints, which are not generally known a priori [10]. The alignment between the different acquisitions is then generally achieved through the so-called 3-2-1 registration process, which is based on a first manual registration phase: at least three corresponding points are manually selected on two partially overlapping point clouds, to obtain a rough registration. The registration is then refined through automatic algorithms, such as the Iterative Closest Point (ICP) algorithm, which are based on processing the overall point cloud data set [11]. The manual point selection process requires skilled users, is time-consuming and can become tedious when many patients must be treated. Thus, this step is generally automated by gluing some markers on the target object, which can then be automatically detected by a software tool to determine corresponding points between overlapping areas of adjacent scans. The main drawback of this approach is the difficulty of automatically recognizing the different markers, which are manually glued on unknown locations of the target surface. Additionally, in the case of direct human anatomy measurements, the use of markers can be invasive for some fragile patients, thus any contact with the patient's body should be avoided. This paper aims at automating the registration process in the specific case of upper limb anatomy 3D scanning. The research, which was developed in the framework of the PRIME-VR2 European Project, exploits the information retrieved from different sensors to obtain a first rough registration, thus replacing the manual selection of corresponding points on different point clouds. This is achieved by using a neural network, which automatically detects the position of relevant key points of the hand shown in the acquired images. These key points are then reprojected on the corresponding point clouds, to obtain the reference points which are required for the initial rough alignment. This rough alignment is then further refined through conventional ICP algorithms to obtain a global registration. The developed method was tested on actual patients scanned at a Living Lab and compared with the conventional fully manual registration process. The activity has shown promising results in terms of semi-automatic cloud registration.


2 Scanning Equipment In this research, a hand-held 3D scanner is considered. The scanner was developed by exploiting D415 Intel RealSense sensors [12]. The overall cost of the adopted hardware was about 600 €, which is much lower with respect to other commercial devices for upper limb acquisition, as detailed in [4]. Each sensor integrates one RGB camera to acquire the texture, two calibrated IR cameras to measure the depth, and an IR laser light source to project a pattern on the target surface. Three sensors, placed at 90°, are mounted on a three-layered polymethylmethacrylate (PMMA) circular frame, in order to obtain a larger field of view with a single shot. The sensors are placed with both a radial and an axial orientation, to allow for a frontal acquisition with respect to the patient. During the scanning, the two diametral sensors acquire a wide angle of the arm, while the third sensor is oriented behind the thumb to avoid undercuts. A schematic view of the placement of the sensors is reported in Fig. 1: Fig. 1(a) shows the front view, Fig. 1(b) shows the lateral view, and Fig. 1(c) shows a picture of the device.

Fig. 1. Schematic layout of the scanning device: a) front view and b) lateral view. (c) Picture of the scanning device.


As can be seen, the sensors are protected by 3D printed covers, while all the cables are managed in the middle layer of the PMMA structure. Two ergonomic handles were added to the structure to facilitate the grip of the device during acquisition. Finally, a pedal switch is connected to the USB hub of the device, acting as a trigger to save the frames during acquisition. The three sensors are preliminarily calibrated, using the calibration procedure described in [12], thus allowing the scans provided by each sensor to be aligned in a common reference frame. The control software of the device was developed in Matlab® language, featuring a graphical user interface (GUI) that allows setting the main parameters, such as the total number of frames to be acquired. Additionally, a large area of the GUI shows in real time the point clouds as acquired by the three sensors. An automatic algorithm computes the distance between the point cloud centroid and the device, highlighting with different colors whether the distance is suitable for the acquisition, too far or too close. In particular, three colors are used for the point clouds depending on their distance with respect to the device: blue if the distance is lower than 450 mm (too close), green if the distance is between 450 and 550 mm (optimal), and red if the distance is larger than 550 mm (too far). This expedient enhances the usability of the device even for unqualified users, such as medical operators who are not experts in 3D scanning. Practically, the protocol to obtain a full 360° scan of the patient's arm requires:

1. asking the patient to stretch the arm, laterally or frontally depending on his comfort;
2. placing the scanning device in correspondence of the first location at the optimal distance;
3. acquiring the first frame;
4. moving the device around the arm to the next location, guaranteeing a field-of-view overlap with respect to the previous acquisition, and acquiring the corresponding frame;
5. repeating the previous step until all the required frames have been collected.

The preliminary testing campaign proved that a number between six and ten different placements of the acquisition device is generally enough to obtain a full 360° scan of the patient's arm. The time required to complete the described procedure is in the range of 50–80 s, depending on the overall number of frames to be acquired. It is worth noting that each saved frame stores three distinct point clouds (in terms of 3D coordinates) and the corresponding RGB images. Additionally, since the RGB camera is calibrated with respect to the IR cameras, it is possible to correlate each 3D point with the corresponding pixel of the RGB image.
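Since the RGB camera of each sensor is calibrated with respect to the depth cameras, pairing 3D points with RGB pixels reduces to a standard pinhole projection. The sketch below illustrates this mapping; the intrinsic matrix and the depth-to-RGB extrinsics are placeholder values, not the actual calibration of the device.

```python
import numpy as np

# Minimal pinhole projection: map 3D points measured by the depth sensor into
# the calibrated RGB image, so that each point can be paired with a pixel.
# K (intrinsics) and R, t (depth-to-RGB extrinsics) are placeholder values.

K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # rotation from depth to RGB frame
t = np.array([0.015, 0.0, 0.0])    # translation [m]

def project_to_rgb(points_xyz):
    """Project Nx3 points [m] to Nx2 pixel coordinates in the RGB image."""
    cam = points_xyz @ R.T + t      # express points in the RGB camera frame
    pix = cam @ K.T                 # apply intrinsics
    return pix[:, :2] / pix[:, 2:3] # perspective division

points = np.array([[0.02, -0.01, 0.50]])   # one example 3D point at ~0.5 m
print(project_to_rgb(points))              # pixel (u, v)
```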

3 Conventional Manual Registration Procedure The described procedure allows the acquisition of a certain number of point clouds, each corresponding to a portion of the arm acquired from a different viewpoint. These point clouds must then be registered into a common reference frame, to obtain the final 3D model of the arm. This is generally performed through the manual 3-2-1 registration process. Given a pair of point clouds, the operator must select at least three corresponding points on each cloud. Since three points allow for the definition of a rigid rotation


between two different reference systems, this preliminary procedure allows for the rough registration of the two clouds. It is worth noting that the presented approach was developed for rehabilitation patients, and all the clinical centres cooperating in the research agreed that the use of markers could reduce patients' comfort and engagement in the process, and would increase the overall scanning time due to the preparation stage. Thus, since no markers are used on the patient to avoid discomfort, the manual selection of the corresponding points is based on geometrical and anatomical considerations of the operator, or on natural skin features. This subjective procedure, based on the experience of the operator, is cumbersome, time-consuming, unreliable, and not repeatable. For this reason, the manual alignment is usually followed by an automatic registration refinement, which is generally performed through an iterative closest point (ICP) algorithm. This algorithm automatically looks for corresponding points on overlapping regions of the clouds to be registered, iteratively adjusting the relative positioning to reduce the surface distance. While the described 3D scanning procedure can be completed in the range of one minute, this registration procedure can be time-consuming, taking from 5 to 10 min depending on the number of point clouds and on the extension of the overlapping regions. An example of the above-described process is shown in Fig. 2 in the case of two distinct viewpoints. Figure 2(a) shows the corresponding points as selected on the two clouds, Fig. 2(b) shows the resulting rough registration while Fig. 2(c) shows the final registration obtained through the ICP algorithm.

Fig. 2. Manual 3-2-1 registration process: (a) selection of three corresponding points, (b) rough initial alignment and (c) fine registration through ICP algorithm.

4 Semi-automatic Registration Strategy The bottleneck of the described conventional 3-2-1 procedure is represented by the manual selection of corresponding points. Indeed, this selection is to be repeated for any couple of clouds, which must be selected, rotated, visually compared to identify corresponding points. Since no markers are present on the patient’s arm, the choice generally falls on easily recognizable locations, such as fingertips, phalanges, wrist bones, and


elbow. Following this principle, the application of a neural network (NN) to detect key points on RGB images has received great emphasis in the literature. Indeed, a properly trained NN can automatically detect the aforementioned locations. Since the specific task (i.e. arm/hand acquisition) can be of interest in several different fields, several NNs are available which detect specific locations of the hand. In particular, the "MediaPipe Hands" NN was used [13]. The main advantages of this specific NN are that it is open-source and also implemented in Python. This allows for easy integration of the automatic key-point detection in the overall workflow. Additionally, the possibility to customize the output simplifies the subsequent data processing, ensuring that the same notation and key-point order is always adopted. The key points detected by the NN are then used to obtain a rough alignment of the different viewpoints, which is fine-registered through an ICP algorithm. 4.1 Key-Points Detection Through Neural Network The described NN requires as input an RGB image and returns as output a list of the detected key points and a detection score. The results obtained on a sample hand image are reported in Fig. 3. As can be noted, in ideal conditions, the NN infers 21 landmarks on the hand, four for each finger plus one on the wrist. These key points are always stored in the same order. In the case that some points are not detected on a specific image, a NaN value is returned, so that the specific missing key point can be skipped, avoiding the risk of erroneously assigning the numbering. This aspect is crucial since the correspondence between key points detected from different viewpoints is achieved by comparing the indexing value.

Fig. 3. Results of the NN on a sample hand image.

4.2 Key-Points Selection and Rough Cloud Alignment As stated, each frame corresponds to one point of view and is composed of the acquisitions of the three D415 sensors, i.e. three point clouds (already registered exploiting the calibration stage) and three RGB images. Theoretically, each of these three images could


be equivalently used to detect the key points, since the clouds are already registered. Nevertheless, this is not feasible in practice, since the success of the NN in detecting the 21 key points strongly depends on the hand orientation with respect to the camera and on the visibility and pose of the fingers. For these reasons, the results obtained on the three images of the same frame can be largely different, since in some images a few key points could be missing or the whole identification could fail. It is worth noting that three points are sufficient to obtain a rigid roto-translation, thus not all 21 key points need to be detected for all the poses. Nevertheless, a higher number of key points guarantees data redundancy and better registration performance; thus, the best image is selected by multiplying the identification score provided by the NN by the number of detected key points to compute a total score. The image of the RGB triplet which has the maximum total score is selected for the registration, and the 2D pixel coordinates of the key points are projected on the corresponding 3D locations on the point clouds and stored for further processing. An overview of the procedure is schematized in Fig. 4.
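A minimal sketch of the image-selection rule just described: for each of the three RGB images of a frame, the total score is the NN detection score multiplied by the number of valid key points, and the image with the maximum total score is kept. The detection data below are placeholders standing in for the hand-landmark network output.

```python
import math

def total_score(detection):
    """Total score = NN detection score x number of valid (non-NaN) key points."""
    valid = [p for p in detection["keypoints"] if not math.isnan(p[0])]
    return detection["score"] * len(valid)

def select_best_image(detections):
    """Pick the RGB image of the triplet whose detection has the highest total score."""
    return max(range(len(detections)), key=lambda i: total_score(detections[i]))

# Illustrative detections for the three sensors of one frame (placeholder data):
# each entry holds the NN confidence and the 21 (u, v) key points, NaN if missing.
detections = [
    {"score": 0.91, "keypoints": [(float("nan"), float("nan"))] * 10 + [(100.0, 200.0)] * 11},
    {"score": 0.84, "keypoints": [(120.0 + i, 180.0 + i) for i in range(21)]},
    {"score": 0.67, "keypoints": [(90.0 + i, 210.0 + i) for i in range(21)]},
]
best = select_best_image(detections)
print(best)  # -> 1: lower confidence than image 0, but all 21 key points detected
```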

Fig. 4. Schematic workflow of the algorithm.

In the unlikely event of the NN failing to detect the key points in all the three images of a specific pose, that pose is skipped and processed through the conventional manual 3-2-1 procedure. In any case, in the performed trials, the NN was always able to detect at least three key points in at least one of the RGB images for all the acquired poses. Once each pose has been independently processed through the NN algorithm, the rough rigid transformations can be computed. To this end, each pose is registered with respect to the previous one. However, the key points detected by the NN for two different poses may not have the same indexing. For example, for pose n.1 all 21 key points may be detected, while for pose n.2 key points 5 and 6 may be missing, thus impairing the possibility to register the clouds. To solve this issue, only the intersection between the key points detected in the two poses is considered for the alignment. After the common key points are detected, the rough alignment is computed by finding the roto-translation


which minimizes the mean square distance between corresponding points. After this procedure is completed for all the acquired poses, all the clouds are roughly aligned into a common reference frame. 4.3 Registration Refinement The final step of the alignment procedure is the registration refinement. The described algorithm allows obtaining a rough initial alignment, which is affected by errors due to the NN precision in detecting the key points. Moreover, only a maximum of 21 key points belonging to the hand are considered, which are a drastically reduced subset with respect to the thousands of points of the whole point cloud. The overall point clouds data set is then processed through an ICP algorithm to minimize the alignment error. This algorithm can automatically detect the overlapping regions between adjacent point clouds, and iteratively compute a roto-translation that minimizes the distance. The final result is the registered 3D full acquisition of the upper limb anatomy, which can be further processed through conventional point cloud elaboration pipelines (e.g. noise filtering, point sampling).
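A compact sketch summarizing the alignment pipeline of Sects. 4.2 and 4.3: the common key points of two poses (matched by landmark index) yield a least-squares roto-translation via the standard SVD-based (Kabsch) solution, after which the full clouds would be refined with ICP. The synthetic data at the bottom only verify the function; this is an illustration, not the authors' implementation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t so that R @ src_i + t ≈ dst_i.

    src, dst: Nx3 arrays of corresponding key points (matched by landmark index).
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Illustrative check with synthetic data (placeholder, not measured key points):
rng = np.random.default_rng(0)
src = rng.random((6, 3))
angle = np.pi / 5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.05, -0.02, 0.10])
R, t = rigid_transform(src, dst)
print(np.allclose(src @ R.T + t, dst))   # True: rough alignment recovered
# The full point clouds would then be refined with an ICP implementation
# (e.g. open3d.pipelines.registration.registration_icp).
```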

5 Results The described procedure was tested on actual measurements taken by a clinician on a patient's arm, which are representative of realistic data in an operational environment. The acquired data were processed both through the conventional 3-2-1 procedure and through the presented algorithm, in order to compare the results. Figure 5(a) shows an example of the surface resulting from the conventional manual procedure. As can be noted, a complete 360° view of the arm was obtained, without holes. Noisier data were obtained in the finger region, because of the undercuts and the occlusions during the scanning process. On the other hand, Fig. 5(b) shows the surface resulting from the described automatic procedure. As can be seen, differences between the two geometries are not easily detectable at a first glance. Finally, Fig. 5(c) shows the deviations between the surfaces obtained with the manual procedure and with the proposed procedure. All the values lie in the range ±2 mm, and the main deviations are found in correspondence with the hand region, where noisier data are expected. Nevertheless, the deviation histogram on the right of Fig. 5(c) evidences that most of the points deviate less than 0.1 mm between the two surfaces, thus demonstrating that the two procedures achieve equivalent results. It is worth noting that, in the described application, deviations of the order of millimetres are considered acceptable.

Fig. 5. Comparison between scanning elaboration: (a) manual procedure, (b) proposed procedure and (c) deviation map between manual and automatic procedure surfaces (color scale from −2 to +2 mm).

6 Conclusions This work focuses on the registration of point clouds acquired through 3D scanning of upper limb anatomy. A Neural Network (NN) was applied to RGB images to define a semi-automatic registration procedure, thus overcoming the conventional and manual 3-2-1 registration procedure. The NN automatically detects up to 21 key points on the hand, which are then reprojected on the 3D point cloud and used to obtain a first rough registration. This step practically replaces the tedious and unreliable manual selection of corresponding points on different point clouds, substantially reducing the registration time. Data post-processing is then refined through a conventional pipeline, i.e. fine registration (Iterative Closest Point algorithm), denoising, surface extraction. The proposed procedure was then validated through experimental analysis of human arm acquisition. The same raw data were processed both using the conventional manual procedure and the proposed automatic procedure. The surfaces resulting from the two procedures are not distinguishable by the naked eye. Furthermore, a deviation analysis was performed, demonstrating that most of the deviations are in the range ±0.1 mm, while the worst cases are in the range ±2 mm, which is considered acceptable for the biomedical application. Finally, a further enhancement of the proposed approach can


be achieved by extending the NN to the detection of additional key points on the arm, instead of only considering the hand, to better constrain the registration process. Acknowledgement. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 856998.

References
1. Buonamici, F., et al.: A practical methodology for computer-aided design of custom 3D printable casts for wrist fractures. Vis. Comput. 36(2), 375–390 (2019). https://doi.org/10.1007/s00371-018-01624-z
2. Volonghi, P., Baronio, G., Signoroni, A.: 3D scanning and geometry processing techniques for customised hand orthotics: An experimental assessment. Virtual Phys. Prototy. 13, 105–116 (2018)
3. Asanovic, I., Millward, H., Lewis, A.: Development of a 3D scan posture-correction procedure to facilitate the direct-digital splinting approach. Virtual Phys. Prototy. 14, 92–103 (2019)
4. Paoli, A., Neri, P., Razionale, A.V., Tamburrino, F., Barone, S.: Sensor architectures and technologies for upper limb 3D surface reconstruction: A review. Sensors-Basel 20, 6584 (2020)
5. Barrios-Muriel, J., Romero-Sánchez, F., Alonso-Sánchez, F.J., Salgado, D.R.: Advances in orthotic and prosthetic manufacturing: A technology review. Materials 13(2), 295 (2020). https://doi.org/10.3390/ma13020295
6. Zollhofer, M., et al.: State of the art on 3D with RGB-D Cameras. Comput. Graph. Forum 37, 625–652 (2018)
7. Redaelli, D.F., Barsanti, S.G., Fraschini, P., Biffi, E., Colombo, G.: Low-cost 3D devices and laser scanners comparison for the application in orthopedic centres. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci. XLII–2, 953–960 (2018)
8. Oranges, C.M., et al.: Three-dimensional assessment of the breast: Validation of a novel, simple and inexpensive scanning process. In Vivo 33, 839–842 (2019)
9. Dessery, Y., Pallari, J.: Measurements agreement between low-cost and high-level handheld 3D scanners to scan the knee for designing a 3D printed knee brace. PLoS ONE 13, e0190585 (2018)
10. Li, J., Tanaka, H.: Feasibility study applying a parametric model as the design generator for 3D-printed orthosis for fracture immobilization. 3D Printing Med. 4(1), 1–15 (2018). https://doi.org/10.1186/s41205-017-0024-1
11. Zhu, H., et al.: A review of point set registration: From pairwise registration to groupwise registration. Sensors 19(5), 1191 (2019). https://doi.org/10.3390/s19051191
12. Neri, P., Barone, S., Paoli, A., Razionale, A.V., Tamburrino, F.: A depth-camera based system for the real-time scanning of upper limb anatomy. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 245–255. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_25
13. MediaPipe Hands: Google LLC (2020). https://google.github.io/mediapipe/solutions/hands

Optimizing Galvanic Process: Wet Surface Estimation for Small Metal Parts to Avoid Material Waste

Luca Puggelli, Monica Carfagni, Rocco Furferi, and Lapo Governi

Department of Industrial Engineering, University of Florence, Florence, Italy
{luca.puggelli,monica.carfagni,rocco.furferi,lapo.governi}@unifi.it

Abstract. The optimization of a galvanic process is a crucial task for many manufacturers in the field of the electro-deposition industry. This is particularly true for companies operating in the high fashion field, in which expensive materials are used and reducing material waste is crucial. In this paper, the estimation of the wet surface of small metal parts is treated. In fact, considering a single piece, the amount of material required to guarantee a desired plate thickness is directly proportional to its outer surface. Starting with a rapid overview of other methods to accomplish this task, the attention is principally oriented to surface estimation by means of optical scanning. A preliminary test session has been carried out and two main issues arose. The first one is related to resolution and accuracy: due to the small dimensions and details of the pieces, high performance is required to achieve valid results. The second and principal issue is related to the high reflectivity of the pieces, even before electroplating. With the aim of avoiding the use of matting paint (which is difficult to remove), the attention has been focused on commercial solutions dedicated to the jewellery and dentistry fields. Three devices (based on white and blue LED structured light) have been tested on highly reflective and specular pieces. From the analyses, only the one based on blue LED technology was able to retrieve highly reflective surfaces without matting. Minor issues arose in the case of specular surfaces. The device has been considered suitable for the task. Keywords: Reverse engineering · 3D reconstruction · Galvanic · Blue LED structured light · 3D optical scanners

1 Introduction The optimization of a galvanic process is a crucial task for many manufacturers in the field of the electro-deposition industry, leading to the deposition of the correct amount of material on a given item. This is particularly true for companies operating in the high fashion field, which are required to electroplate small metal parts like studs, clips and buckles using expensive materials, such as gold or platinum. To remain competitive on the market, it is strategic to deposit merely the exact quantity of material necessary to achieve the quality requirements imposed by clients, which are generally expressed in


terms of plating thickness. In turn, the coating material's mass can be expressed in two different ways. Faraday's laws [1] on electrolysis set a dependency between the mass deposited (m) on an electrode and current intensity (I) and time (t):

m = (M · I · t) / (Z · F)    (1)

where:
• m = mass deposited on the electrode [g]
• M = molar mass of the material to be deposited [g/mol]
• I = current intensity [A]
• t = time [s]
• Z = valence of the material's ions
• F = Faraday's constant (96485.33 C mol−1)

In this case, it is possible to set electroplating parameters to obtain a specific mass deposit, independently from electrode geometry. Alternatively, considering the coating as a shell, it is possible to relate mass deposit and coating thickness as follows, without a significant loss of accuracy:

m = tws · T · ρ    (2)

where:
• tws = total wet surface [mm²]
• T = coating thickness [mm]
• ρ = material mass density [g/mm³]

Consequently, combining Eq. 1 and Eq. 2 it is possible to relate T to current intensity and time and, therefore, it is possible to set electroplating parameters to obtain a specific coating thickness. Unfortunately, tws remains an unknown variable and should be determined to address the problem of minimizing the amount of material to be deposited. On an industrial scale, several identical pieces to be treated by the same galvanic process are arranged on a frame by means of hooks or, more frequently, wires. This means that the total wet surface corresponds to n times the wet surface of the single item (Eq. 3):

tws = n · ws    (3)
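To make the combination of Eqs. 1–3 explicit, eliminating m gives I · t = Z · F · n · ws · T · ρ / M, i.e. the electric charge needed for a target thickness once the total wet surface is known. The sketch below evaluates this relation; the gold material data and the geometric values are illustrative assumptions, not data from the paper.

```python
# Combine Eq. 1 (Faraday) with Eqs. 2-3: I * t = Z * F * n * ws * T * rho / M.
# Material data for gold and the geometric values below are illustrative
# assumptions used only to exercise the relation.

F = 96485.33          # Faraday constant [C/mol]
M = 196.97            # molar mass of gold [g/mol]
Z = 1                 # valence of the gold ion in the bath (assumed)
RHO = 0.01932         # density of gold [g/mm^3]

def required_charge(n, ws_mm2, thickness_mm):
    """Charge I*t [C] needed to plate n parts of wet surface ws with the given thickness."""
    tws = n * ws_mm2                       # Eq. 3: total wet surface [mm^2]
    mass = tws * thickness_mm * RHO        # Eq. 2: coating mass [g]
    return mass * Z * F / M                # Eq. 1 rearranged: I*t [C]

# Example: 200 buckle-sized parts, 1 um target thickness (assumed values).
q = required_charge(n=200, ws_mm2=1941.0, thickness_mm=0.001)
print(f"charge: {q:.0f} C -> {q / 3600:.2f} Ah")   # current x time to be set on the rectifier
```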

Therefore, the evaluation of the wet surface requires not only the single item wet surface (ws) but also the knowledge of the number of the pieces arranged on the frame (n). Both tasks are quite challenging, especially for complex geometries and for poorly user-controlled arrangement on the frame. The number of pieces arranged on a galvanic frame can be highly variable and, even if the frame and the item’s typology are unchanged, a ±10% deviation is normally expected. This is because only some item typologies can be placed on hooks (e.g., buttons), whose


number (and position) in the frame are fixed and (generally) all positions are occupied. Conversely, almost all item typologies can be manually knotted to wires. In this case, the arrangement of the pieces on the frame is far from constant and so is their number. In a previous work [2], the authors proposed a counting machine capable of automatically estimating the number of parts already mounted on a frame, by means of an automatic machine-vision based procedure. The machine is based on rear projection: a light projector is used to project the frame shadow onto a canvas. A fish-eye lens is used to capture the image of the entire canvas, whose dimensions approximately coincide with the frame ones (approximately 700 × 500 mm). The obtained image is then processed by means of a specifically developed computer vision procedure, based on binary image editing and analysis. Starting from a unique shadow, in which frame, wires and parts are connected, it separates the silhouette of each part and estimates their number with a more than adequate accuracy and precision. During a first testing campaign, a maximum error of around 2% has been reported. As far as ws is concerned, it could easily be measured on the CAD model of the item, if the variations from the nominal geometry introduced by the manufacturing process are neglected. Unfortunately, only a limited number of items are digitally modeled in the high-fashion field. Mostly, buckles and studs are made by casting, where the molds are obtained from a handmade prototype. In these cases, ws must be directly measured on the item. In this paper, the attention is focused on the analysis of the most promising devices that can be adopted to provide a reliable estimation of ws.
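The counting step of [2] ultimately relies on separating the part silhouettes in a binary back-lit image and counting them. The following sketch illustrates this connected-component idea with OpenCV; it is not the actual procedure of [2], and the erosion kernel size and area threshold are assumptions.

```python
import cv2
import numpy as np

# Illustration of the silhouette-counting idea behind [2]: threshold the back-lit
# image (dark shadows on a bright canvas), erode to break the thin wires connecting
# the parts to the frame, then count the remaining blobs.
# Kernel size and area threshold are assumptions.

def count_parts(gray_image, kernel_size=9, min_area=150):
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    eroded = cv2.erode(binary, kernel)               # remove wires, keep part blobs
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(eroded)
    areas = stats[1:, cv2.CC_STAT_AREA]              # skip label 0 (background)
    return int(np.sum(areas >= min_area))            # blobs large enough to be parts

# Example usage (placeholder file name):
# n = count_parts(cv2.imread("frame_shadow.png", cv2.IMREAD_GRAYSCALE))
```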

2 Wet Surface Estimation Focusing the attention on wet surface estimation of small metal parts, one possible approach is based on electrochemical analysis, according to scientific literature. In particular, Jean-Claude Puippe [3] patented a method to estimate the surface area of copper plates by applying constant voltage to two electrodes immersed in a specific solution. By measuring current intensity, it is possible to derive the measurement of the surface of the electrodes. Despite being promising, this approach revealed some critical issues and further investigation is required before obtaining stable and satisfactory results. For this reason, authors propose a direct ws measurement by means of 3D optical scanning. 2.1 3D Optical Scanning Approach: Feasibility Analysis Optical scanning approach is based on the measurements performed on a virtually reconstructed 3D model of the item, obtained by means of 3D scanners. To assess the feasibility of this approach, a preliminary investigation has been carried out in which a professional grade 3D scanner (Romer Absolute ARM 7525 with RS1 laser scanner) has been compared with a consumer one (NextEngine HD) for the 3D surface acquisition of a belt buckle, used as a first case study (see Fig. 1). The choice of the case study is motivated by the availability of the item’s 3D CAD model, from which it has been possible to obtain a surface area measurement to be considered as ground-truth (equal to 1941 mm2 ). The choice of neglecting possible errors introduced by manufacturing processes is justified by the high threshold of acceptability (±5%), which has been suggested by production managers of a company operating in the electroplating industry.


Fig. 1. Belt buckle used for preliminary tests.

Three main aspects have been considered to evaluate performances: surface measurement accuracy, scanning time and post-processing time. Both the devices, whose principal characteristics are reported in Table 1, are active optical scanners based on laser-camera triangulation technology [4]. As widely known, this scanning technology works well for the acquisition of diffusive (opaque) surfaces, while performances are poor in case of high reflectivity [5, 6].

Table 1. Romer Absolute ARM + RS1 and NextEngine HD datasheet.

                        Romer Abs. ARM + RS1          NextEngine HD
Technology              Laser-camera triangulation    Laser-camera triangulation
Typology                Anthropomorphic arm           Desktop
Accuracy [µm]           58                            130
Resolution [DPI]        305                           508
Scan. speed [pts/s]     3 × 10⁵                       5 × 10³
Scanning software       Polyworks®                    Scan Studio™
Post-proc. software     Geomagic Design X®            RapidWorks®

Unfortunately, the pieces to be measured mostly have highly reflective or even specular surfaces, also before electroplating, and the application of a matting agent is, therefore, required. This represents the main criticality related to laser-camera triangulation scanners and, more in general, to active optical scanners. In fact, the matting agent is difficult to remove from the surface of the items, and there is a real risk of leaving traces inside narrow incisions and small or threaded holes (generally M2 or smaller). In this preliminary phase, the matting agent has been applied anyway, to achieve a useful result. As explained below, in the subsequent phases matting will be avoided. Preliminary test results are summarized in Table 2.


Fig. 2. Surfaces obtained by means of Romer Absolute ARM (left) and NextEngine HD (right) 3D scanners.

Table 2. Preliminary test results.

                              Romer Abs. ARM + RS1    NextEngine HD
Measured surface area [mm²]   1926                    2045
Accuracy [%]                  −0.7%                   +5.3%
Procedure                     Interactive/manual      Automatic
Scanning time [s]             120                     6420
Post-proc. time [s]           600                     Not required
Total time [s]                720                     6420

As easily predictable, the reconstruction obtained by means of the Romer scanner is valid and the relative surface measurement has an acceptable accuracy. Conversely, despite the higher resolution, the same cannot be said about the one obtained by means of the NextEngine HD. Even if the resulting accuracy is almost acceptable (+5.3%), the overestimation would have been significantly greater if the entire surface of the object had been successfully acquired by the scanner (several missing areas are visible in Fig. 2). A second issue about the NextEngine HD is the scanning time: a 360° acquisition requires at least 8 different piece positionings, and a single scan lasts about 10 min. Even if there are no strict requirements, since the pieces would not be measured in the production line, it is obvious that an excessive time per item can have consequences on production planning. On the other hand, it is also true that if the scanning and the post-processing procedures must be carried out manually, this requires the training of highly qualified personnel, with a further economic investment by a company willing to use the proposed system. From this point of view, the adoption of a desktop solution with automated scanning and post-processing operations is preferable. All these considerations lead to the definition of a list of requirements to optimise the search for the best commercially available solutions. Should the search not meet the requirements, the development of a specifically designed 3D scanner has been considered as an alternative option by the authors, thanks to their experience in the field of 3D scanning and surface reconstruction [7–9].
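Whatever scanner is selected, ws is finally obtained from the reconstructed triangle mesh as the sum of the triangle areas. Below is a minimal numpy sketch, assuming the mesh is available as vertex and face arrays (e.g. exported from the scanner software); the unit cube at the end only verifies the function.

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Surface area of a triangle mesh: sum of 0.5 * |(B - A) x (C - A)| over all faces.

    vertices: Nx3 float array, faces: Mx3 integer array of vertex indices.
    """
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

# Quick check on a unit cube (total area = 6), built from 12 triangles:
v = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
f = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
              [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
              [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print(mesh_surface_area(v, f))   # -> 6.0
```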

748

L. Puggelli et al.

2.2 3D Optical Scanning Approach: Tests and Selection of Commercially Available Solutions Among all the possible scanning technologies available, the search has been focused on structured-light 3D scanners, which are considered the best trade-off between scanning speed and accuracy. Based on the authors' experience, this technology is also less sensitive to reflectivity issues. According to a limited number of scientific papers, structured light technology with blue LED illumination seems to be the best solution to achieve good 3D reconstruction of highly reflective surfaces without matting [10, 11]. For this reason, also this specific technology has been considered. As previously mentioned, the search has been further limited to desktop solutions, since these are more "user-oriented", generally offering automated scanning procedures and intuitive, easy-to-use dedicated software. In fact, this type of device allows the acquisition of 3D geometry by framing the object from multiple viewpoints without the need to reposition the object and/or the scanner. Once the object is locked on a special plate, positioning is done automatically by the device according to predefined protocols. Since the objects, mainly buckles, studs, or inserts, are small, the research has been further refined towards solutions specifically designed to scan small objects. These specifications are common to those required by the jewellery and dental industries, for which scanning systems have been developed for years. Considering all the requirements expressed above, four potentially suitable scanners have been identified among the ones that were commercially available at the time of this analysis (see Table 3 [12, 13]).

Table 3. Selected 3D scanners.

Brand/Scanner model              Scan vol. [mm]   Light source    Resolution [Mp]   DEMO
Solutionix/Rexcan DS3 Silver     Ø 70             Blue LED SL     2 × 1.3           ✓
Solutionix/Rexcan DS3 Gold       Ø 70             Blue LED SL     2 × 5.0           ✓
Open Technologies/AuRum LT       Ø 80 × 110       White LED SL    2 × 1.3           ✓
Open Technologies/AuRum 3D       Ø 110            White LED SL    2 × 3.0           ✓


Table 4. 3D scanners datasheet.

                            Rexcan DS3 Silver   Rexcan DS3 Gold   AuRum LT   AuRum 3D
Accuracy [µm]               30                  10                30         NA
Point spacing xy [µm]       ≈75                 >20               86         30
Point spacing z [µm]        NA                  NA                NA         8
Fully automatic scanning    ✓                   ✓                 ✓          NO
Price range [10³ €]         15–20               25–30

100. For this variable stiffness structural component, both the development steps and the characterization methodology are explained. Finally, an outlook on the design of a soft continuum robotic manipulator using three of these variable stiff segments and two tendons for an underactuated motion control of the whole structure is given. Keywords: Soft robotics · Variable stiffness · Soft robotic manipulator · Soft robotics material and design · Continuum robotics

1 Introduction In contrast to conventional rigid-articulated robots, soft robots are characterized by compliant structures and thus a high degree of safety and flexibility. These properties make them suitable for human-robot collaboration (HRC) or minimally invasive surgery (MIS). However, they usually lack the stability and stiffness for near-human applications that require higher payloads, such as in everyday life and care [1, 2]. Industrial collaborative robots (Cobots), on the other hand, have a higher payload and better precision than soft robots. However, their collaborative capabilities are usually limited to sensor-based safety options that provide motion stop or speed reduction when a human approaches. Furthermore, most Cobots are currently only used in industrial applications [3].


In order to bring robots better into human-oriented applications, like care or everyday life, it is appropriate to not only focus on precision and payload. Robots are needed that are at the same time safe and powerful with a high level of acceptance. Therefore, to combine the advantages of flexible and inherently safe soft robots and rigid-articulated robots for better payload capacity, approaches are being investigated for the development of continuum robots with switchable structural stiffness [4, 5]. Most approaches can be divided into thermally, electrically and magnetically induced as well as pressure induced stiffness changes according to the activation method (Fig. 1) [5]. In the following relevant approaches are briefly discussed. They are evaluated against the background of their use in robotics, especially regarding a high stiffness ratio K and the switching dynamics, and their advantages and disadvantages are compared.

Fig. 1. Classification of the most common effects and materials used for variable stiffness.

1.1 State of the Art Low melting alloys or materials with a low glass transition temperature and shape memory materials show significant changes in material properties when exposed to temperature. With these approaches, the largest stiffness ratios of K > 1000 are possible with mostly negligible changes in shape and volume [4, 6]. However, the use of a thermal trigger significantly affects the dynamics of these systems. While the heating time can be reduced to a few seconds by applying the appropriate energy, the heat dissipation process requires considerably more time due to the physics involved [7]. Stiffness changes due to electric current or electric fields find their application in electroactive polymers (EAP) [8, 9], electrostatically couplable layers [10, 11], and electrorheological materials such as electrorheological fluids (ERF) and elastomers (ERE) [12, 13]. While the dynamics of the electrically induced stiffness changes are very good with switching times in the millisecond range, mostly only stiffness ratios of K < 10 are achieved and voltages in the kilovolt range are required, which lowers the acceptance for use in contact with humans [1, 2].


In magnetorheological materials (MRM) such as liquids (MRF) or elastomers (MRE), iron particles align along the field lines with the help of a magnetic field and form chains that influence the mechanical behavior of the material [14–16]. Since the magnetic field is used as a trigger, switching times in the millisecond range can also be achieved here and the possible stiffness ratio of K = 9 − 16 [5, 15, 16] are slightly higher than with ERM. In addition, the stiffness can be continuously adjusted using an electromagnet [5]. The achievable maximum shear stress in MRMs is on average two orders of magnitude higher than those of ERMs, which suggests higher stiffness ratios and explains their wider use [4]. Pressure-induced approaches work by inflating or evacuating flexible structures. In flexible fluid actuators (FFA), the increase in stiffness is achieved by inflating chambers [4]. Stiffness changes and motions can be induced by suitable arrangement of the fluid chambers and the selective use of material accumulations or fiber reinforcements [2, 17, 18]. Stiffness ratios up to K = 56 can be found in the literature [19]. The application of FFA leads to comparatively large shape and volume changes with switching times in the range of seconds (about 1 s in [20]). In the material jamming approach, a flexible shell is filled with particles or a layered structure. When the system is evacuated, jamming of the particles or layers occurs [21]. In this way, large changes in stiffness can be set without significant changes in shape and volume. Approaches to particle jamming that can achieve stiffness ratios up to K = 40 can be found in the literature [4]. In layer jamming, stiffness ratios up to nearly K = 100 for tensile stiffness can be achieved with comparatively small structural volumes by a suitable structural design [22]. The switching time depends strongly on the structure of the system and can reach values of less than 0.2 s [23]. 1.2 Selection of the Variable Stiffness Effect According to the requirements stated at the beginning regarding a high stiffness ratio and short switching times, thermally induced stiffness changes with K > 1000 are best suited for achieving high stiffness ratios. However, due to the weak dynamics with switching times t > 1 s, they are not suitable for use in soft robotics in direct contact with humans. The electrically and magnetically induced approaches, on the other hand, show switching times in the millisecond range, but with stiffness ratios of K < 10 for ERM and K < 20 for MRM, they are not in the required range of mechanical properties. Pressure-induced approaches allow stiffness ratios between K = 40 and 100 and are also in the relevant range for soft robotics at low switching times. In this context, the concepts based on material jamming are more suitable than those of flexible fluid actuators due to their switching times t < 0.2 s. Due to its high stiffness ratios in combination with low switching times as well as low shape changes during switching, the material jamming approach was therefore chosen for the development of structural components with switchable stiffness for soft robotics. Two designs exist for this method, which have different advantages and disadvantages. While layer jamming, which is based on the coupling of layers, is particularly suitable for withstanding tensile stresses but tends to buckle under compressive loading, tensile loading in particle jamming leads to particle rupture. 
This design, which is based on the wedging of small particles, is therefore well suited to absorbing compressive stresses.


By combining both designs, particle and layer jamming, the respective disadvantages of one concept can be compensated by the advantages of the other [24, 25]. In the following, the approach presented in [24] for a structural component based on the combination of particle and layer jamming is further developed and transferred into a robotic manipulator consisting of three switchable stiff segments. The design, fabrication, and mechanical characterization of both a single structural component/segment and the robotic manipulator consisting of three segments will be addressed.

2 Functional Principle

For improved dimensional stability, to increase the basic stiffness and to improve the absorption of tensile stresses, the concept of particle jamming is combined with that of layer jamming. For this purpose, a three-chamber structure was developed, shown in Fig. 2a). Layer bundles are arranged on the top and bottom sides to absorb the tensile stresses and stabilize the particle volume in the center [24]. In order to absorb the high shear stresses during flexural loading in the edge regions, the layer bundles are arranged in a two-sided bonded and interlocked structure.

Fig. 2. (a) Functional principle of the combined jamming approach for one segment: soft state - vacuum pump off (left), stiff state - vacuum pump on (right); (b) Continuous deformation of one variable stiff segment with retracted outer silicone cover showing the movement of the layer bundles: undeformed (top left), deformed (bottom left), close-up deformed (right).


The layer bundles enclose the central chamber filled with particles, which serves to absorb the compressive stresses. In the soft state, the layers can slide on each other and the particles can flow past each other. In the stiff state, the vacuum pump sucks the air out of the inside of the structure. The ambient pressure creates a frictional connection between the layers, and the particles are packed more tightly, forming a composite body. The symmetrical design ensures that the bending stiffness is set uniformly in both directions and that the segment can move equally in both directions (see Fig. 2b).

Figure 3 shows the developed manipulator in three different set target shapes. The manipulator consists of three switchable stiff segments arranged one behind the other and connected through stiff connector parts in between. The segments can be switched reversibly stiff or soft independently of each other through jamming. Two tendon actuators, one on the left and one on the right, allow the manipulator to move in a planar manner. Both tendons are attached only to the upper end of the manipulator and are guided to the lower end by loops on the intervening segments. At the lower end, there are two stepper motors that wind and unwind the two tendons, respectively. By soft-switching only individual segments while the others remain stiff, it is thus possible to set complex planar shapes with only two tendons. This reduces the number of tendon actuators and results in an underactuated motion system.
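As a rough illustration of how soft-switching shapes the manipulator, the following Python sketch (not part of the original work) chains planar constant-curvature segments: segments held stiff are treated as straight, while a soft-switched segment is assigned the curvature imposed by the tendon. Function names and numerical values are illustrative assumptions.

```python
import numpy as np

def planar_pcc_tip_pose(kappas, lengths):
    """Chain planar constant-curvature segments and return the tip pose.

    kappas  : per-segment curvature in 1/mm (0 for a segment held stiff/straight)
    lengths : per-segment arc length in mm
    Returns (x, y, theta) of the tip in the base frame.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for kappa, L in zip(kappas, lengths):
        if abs(kappa) < 1e-9:        # straight (stiff) segment
            dx, dy, dtheta = L, 0.0, 0.0
        else:                        # circular arc of total angle kappa * L
            dtheta = kappa * L
            dx = np.sin(dtheta) / kappa
            dy = (1.0 - np.cos(dtheta)) / kappa
        # rotate the local displacement into the base frame and accumulate
        x += dx * np.cos(theta) - dy * np.sin(theta)
        y += dx * np.sin(theta) + dy * np.cos(theta)
        theta += dtheta
    return x, y, theta

# Example: only the middle 100 mm segment is switched soft and bent by the tendon
print(planar_pcc_tip_pose(kappas=[0.0, 0.005, 0.0], lengths=[100, 100, 100]))
```

Under this simplification, keeping the tendon input fixed and changing which segments are soft yields different tip poses, which is the effect the underactuated design exploits.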

Fig. 3. View of the tendon-driven soft continuum robotic manipulator with two tendon actuators for motion and three variable stiff segments turned stiff in three different shapes.

3 Design and Fabrication

In order to generate the highest possible friction force between the layers and thus a high holding force, a high friction coefficient of the layer material is required in the stiff state. However, if the friction coefficient is too high, the deformation and the resulting transverse forces quickly cause the layers to adhere even in the flexible state, thus blocking the movement.


The modulus of elasticity of the individual layers should be set so that the layers are as flexible as possible in the flexible state and form as stiff a bond as possible in the stiff state. The determination of an optimally suitable layer material, including geometry and surface properties, is a research question to be investigated in future work. Due to its medium-high friction coefficient of µ ≈ 0.3 and medium-high Young's modulus of E ≈ 4500 MPa, the material Mylar A FI 13010 from Dr. Dietrich Müller GmbH is used as the layer material. This material has sufficiently high fatigue strength and stiffness for initial tests and allows high flexibility in the soft state. Coarsely ground coffee powder is used as the particle material, since several studies have established its good suitability (low weight, fast stiffness change, high stiffness ratio) [21, 23]. Silicone (Silpuran 2400), which exhibits high extensibility (800%) for dense packing of particles and layers during jamming, is used as the covering material.

Figure 4(a–d) shows the individual components of the manipulator and their assembly. The total moving length of the demonstrator is 300 mm. One segment is 100 mm long and the outer cross-section forms a square with 35 mm × 35 mm edge length. The manipulator consists of three separate segments, which can be assembled individually on the basis of a plug-in connection. Connector A forms the base of the manipulator. Connector B is connected to Connector C by means of a spring steel strip (12.7 mm wide, 0.5 mm thick) via an adhesive connection (two-component adhesive) and forms the basis of each individual segment. The spring steel is used to ensure a defined segment length, a certain basic stiffness and a continuous deformation of each variable stiff segment. After the spring steel is attached, an inner silicone cover (thickness 1.5 mm) is attached to the connector parts using silicone adhesive (RTV162Q Silicone Rubber Adhesive Sealant from Conrad Electronic). To ensure an accurate fit, the inner and outer silicone covers were molded in advance using 3D printed molds. After bonding, the inner shell is filled with the coffee particles. A filling hole is provided in connector part B for this purpose (see Fig. 5), which is closed with a rubber plug after filling.

For the upper and lower layer structure, 20 interlocked Mylar layers, each with individual dimensions of 73 mm × 30 mm × 0.35 mm, are used, which are connected to the connector parts by screws (see Fig. 4b). Shim washers 0.5 mm thick are arranged between the layers on the screws so that the interlocked layer bundles have sufficient space for relative movement in the soft state (see Fig. 2b). A cover of fabric (jersey, 0.5 mm thick) is placed between the outer layer structure and the inner particle chamber. Previous investigations have shown that the layered structure tends to cut into the inner silicone cover during longer use, which can destroy the structure. In addition, the layers attached to the silicone cover tend to bond to the cover due to the high coefficient of friction of the silicone and can thus limit flexibility in the soft state. The fabric prevents both damage to the inner cover and adhesion of the layers to the inner cover. After the layer structure and the fabric sleeve have been attached to the connector parts, the three segments are plugged together (see Fig. 4c) and screwed together. Finally, the outer silicone covers are glued to the connector parts A and C using silicone adhesive (see Fig. 4d).


Fig. 4. Assembly of the manipulator: a.) base parts for building the inner particle chamber with backbone, b.) base parts for building the two outer interlocking layer jamming bundles, c.) assembly of the three variable stiff segments, d.) preparation for gluing the outer silicone covers. Note that the material finally used for the connector parts is not the resin shown but aluminum.

Figure 5 shows the push-in screw connection between connector B and C. The connector parts have grooves which provide a better grip for the silicone adhesive than on a smooth surface. The two parts are screwed together via a blind hole screw connection. Connector B and C have air channels that are connected to each other during assembly. Two channels lead to the upper and lower layer chamber and one channel to the particle chamber. A filter grid encloses a cotton filter and prevents the particles from escaping from the inner chamber. An adapter (QSML-M3-3 from Festo) connects the air channel to an external 3 mm thick hose. From each segment, a hose is routed externally along the manipulator via valves to the vacuum pump. By simultaneously evacuating the air from inside the three chambers, each segment can be made rigid individually. The top and bottom of Connector C are also equipped with threads into which eyebolts are screwed. These serve to guide the external tendons.


Fig. 5. Connector parts B and C and their functionality: solid view (left), sectional view (right).

The rigid connector parts were first 3D printed using the stereolithography process with a Formlabs printer and the material Clear. However, due to the insufficient strength and excessive brittleness of the material, the screw connection points always broke during the first tests. The use of glued-in threaded sleeves with two-component adhesive in the connector parts was also not sufficient, and fractures occurred again due to excessive forces. For this reason, the connector parts for the final test setup were manufactured from aluminum using a laser sintering system and the required threads were inserted directly into the components (cf. Fig. 2 and Fig. 4).

4 Mechanical Characterization

In order to evaluate the mechanical properties of the developed structure, force-displacement curves were recorded. Both a single segment and the entire manipulator were tested with regard to their bending stiffness. For this purpose, the individual segment or the manipulator was fixed to Connector A in a static tensile-compression testing machine from ZwickRoell and loaded at the free end (see Fig. 6). The bending load was applied at the free end with the aid of a steel tendon as a tensile load, since the structure would otherwise deform under its own weight in the soft state. The tendon was clamped at a height of 400 mm in jaws that were connected to a crosshead via a load cell. The large spacing of the suspension ensured that as few forces as possible were generated in the longitudinal direction of the manipulator during pulling and that primarily a pure bending load was generated. To activate the stiff state, the structure was evacuated via a hose connector using a vacuum pump. With this setup, the structures were deformed by 10 mm three times each in the soft and stiff states. A displacement-controlled deformation with a crosshead speed of 0.1 mm/s was used. The regression lines were formed over the linear elastic regions of the measured and averaged force-displacement curves. The slopes of the regression lines provide the linearized effective bending stiffness k, and the stiffness ratio K is obtained as the ratio of the stiffness in the stiff state to that in the soft state.


During deformation, the vacuum pressure was monitored using a PCE-P15 differential pressure gauge from PCE Instruments. The corresponding diagrams, including the calculated stiffness k, the stiffness ratio K and the measured vacuum pressure p, are shown in Fig. 6.

The single segment has a bending stiffness of k = 13.69 N/mm in the evacuated, stiff state. The maximum load that can be supported without significant plastic deformations is F ≈ 4 N at u ≈ 0.3 mm. When this force is exceeded, pseudoplastic deformations occur. This can be seen from the subsequent degressive course of the curve. On the basis of visual observations, it could be determined that both particle rearrangements and relative movements of the layers occur during the pseudoplastic deformations. In the soft, non-evacuated state, a strong stick-slip effect occurs, whereby the layers do not slide optimally on each other but alternately slide and adhere. To calculate the linearized stiffness, the linear regression was formed here up to the maximum deformation set in the test, since the mean value provides an approximately linear elastic curve. With the linearized stiffness k = 0.18 N/mm in the soft state, the stiffness ratio is calculated as K = 74. In the soft state, the free end rotates around the bending axis (see Fig. 2). In the stiff state, on the other hand, there is primarily a displacement of the free end in the direction of the applied force, but hardly any rotation relative to the fixed end and around the bending axis (see Fig. 6). This is because the layers can slide relative to each other in the soft state, but not in the stiff state.

The manipulator consisting of three segments has a bending stiffness of k = 1.05 N/mm in the evacuated, stiff state. The maximum force that can be supported without significant plastic deformation is also F ≈ 4 N, at u ≈ 3.7 mm. In the soft, non-evacuated state, a strong stick-slip effect also occurs. When averaged, an approximately linear elastic response is present up to the maximum deformation set in the test. With the linearized stiffness k = 0.01 N/mm in the soft state, the stiffness ratio is calculated as K = 105. Due to the stick-slip effect, the stiffness in the soft state is unintentionally increased. The following influencing factors could be identified on the basis of visual observations: the individual layers are unintentionally pre-deformed, resulting in non-negligible frictional forces in the interlocked arrangement even in the undeformed state; with increasing deformation, there are also increasing transverse forces in the structure due to the bending of the individual layers and thus also increasing frictional forces between the layers. By increasing the distance between the layers or removing the pre-deformation of the layers, the stiffness in the soft state could be significantly reduced and a switching factor of K >> 100 could be achieved.
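For illustration only, the following Python sketch reproduces this evaluation step on placeholder data: a regression line is fitted to the (averaged) force-displacement curve of each state and the stiffness ratio is taken as the ratio of the two slopes. The curve shapes and function names are assumptions, not the measured data.

```python
import numpy as np

def linearized_stiffness(u, F, u_max=None):
    """Slope of a least-squares line fitted to the (approximately) linear
    elastic part of an averaged force-displacement curve.

    u, F  : displacement [mm] and force [N] arrays (mean of repeated runs)
    u_max : upper displacement bound of the linear elastic region; if None,
            the whole curve is used (as done here for the soft state).
    """
    u, F = np.asarray(u, float), np.asarray(F, float)
    if u_max is not None:
        mask = u <= u_max
        u, F = u[mask], F[mask]
    slope, _intercept = np.polyfit(u, F, 1)
    return slope  # effective bending stiffness k in N/mm

# Hypothetical averaged curves for one segment (placeholder values only)
u = np.linspace(0.0, 10.0, 101)
F_stiff = 13.69 * np.clip(u, 0, 0.3) + 0.8 * np.clip(u - 0.3, 0, None)  # degressive after ~4 N
F_soft = 0.18 * u

k_stiff = linearized_stiffness(u, F_stiff, u_max=0.3)   # linear elastic region only
k_soft = linearized_stiffness(u, F_soft)                 # full range, soft state
print(f"k_stiff = {k_stiff:.2f} N/mm, k_soft = {k_soft:.2f} N/mm, K = {k_stiff / k_soft:.0f}")
```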


Fig. 6. One segment (top) and 3-segmented manipulator (bottom) in its experimental setup for stiffness characterization (left) and corresponding force-displacement plots for the soft and stiff state (right): thin solid lines show the 3 individual measurements in each case, thick solid lines show the corresponding mean value, dashed lines show the mean linear elastic stiffness calculated using linear regression. Deformed states in the pictures on the left represent the stiff state.

5 Conclusion

By analyzing different approaches for realizing switchable structural stiffness, the vacuum-based method of material jamming was identified as suitable for realizing a structural component with switchable bending stiffness for use in soft robotics, since it offers both a high stiffness ratio and low switching times. By combining the two designs of layer jamming and particle jamming, it was possible to develop a structural component that has a bending stiffness ratio of K = 74 and can absorb tensile and compressive stresses with equal stability. With a maximum effective bending stiffness of k = 13.69 N/mm in the stiff state, the maximum holding force of the component is F = 4 N. A strong stick-slip effect was observed in the soft state, due to an unintentional, probably assembly-related deformation of the layers, giving a stiffness in the soft state of k = 0.18 N/mm. By optimizing the structure and adapting the materials used, it seems possible to increase the stiffness ratio of the structural component to K >> 100. An initial design of a soft continuum robotic manipulator was developed, consisting of three segments with switchable stiffness, two tendons for an underactuated planar motion and three hoses for independent stiffness change.


The manipulator demonstrated that a wide range of shapes can be adjusted using switchable stiff segments with only two tendons. The overall design of the manipulator had a stiffness ratio of K = 105. Further research should focus on structure and material optimization as well as on modelling the structure. Suitable layer properties should be identified (e.g. thickness, Young's modulus, number, coefficient of friction) which, on the one hand, ensure high flexibility and low adhesion in the soft state, but also high composite stiffness and holding force in the stiff state. Once a structural calculation model has been developed, it would be possible to estimate the limiting forces, maximum stiffness and stiffness ratios achievable with the combined layer and particle jamming approach in terms of scaling. For further development in line with the application, suitable concepts must be developed to guide the hoses and tendons inside the structure in such a way that the manipulator no longer has any components on the outside that can be easily blocked or damaged, and forms one self-contained unit.

Acknowledgment. This work was supported by the “Fraunhofer Cluster of Excellence Programmable Materials CPM” for the development of the structural component with variable stiffness. The tendon-driven soft continuum robotic manipulator was developed as part of the cooperation project “BioiC - Bioinspired soft robotic systems for cognitive production”, which is being carried out by the University of Naples Federico II and the Fraunhofer Institute for Machine Tools and Forming Technology. Website: https://www.bioic.unina.it/.

References

1. Rus, D., Tolley, M.T.: Design, fabrication and control of soft robots. Nature 521(7553), 467–475 (2015)
2. Trivedi, D., Rahn, C.D., Kier, W.M., et al.: Soft robotics: biological inspiration, state of the art, and future research. Appl. Bionics Biomech. 5(3), 99–117 (2008)
3. Kildal, J., Tellaeche, A., Fernández, I., et al.: Potential users’ key concerns and expectations for the adoption of cobots. Procedia CIRP 72, 21–26 (2018)
4. Manti, M., Cacucciolo, V., Cianchetti, M.: Stiffening in soft robotics: a review of the state of the art. IEEE Robot. Automat. Magaz. 23(3), 93–106 (2016)
5. Wang, L., Yang, Y., Chen, Y., et al.: Controllable and reversible tuning of material rigidity for robot applications. Mater. Today 21(5), 563–576 (2018)
6. Capadona, J.R., Shanmuganathan, K., Tyler, D.J., et al.: Stimuli-responsive polymer nanocomposites inspired by the sea cucumber dermis. Science (N.Y.) 319(5868) (2008)
7. Shan, W., Lu, T., Majidi, C.: Soft-matter composites with electrically tunable elastic rigidity. Smart Mater. Struct. 22(8) (2013)
8. Carpi, F., De Rossi, D., Kornbluh, R., et al.: Dielectric elastomers as electromechanical transducers: fundamentals, materials, devices, models and applications of an emerging electroactive polymer technology. Elsevier, Amsterdam (2008)
9. Carpi, F., Frediani, G., Gerboni, C., et al.: Enabling variable-stiffness hand rehabilitation orthoses with dielectric elastomer transducers. Med. Eng. Phys. 36(2) (2014)
10. Di Lillo, L., Raither, W., Bergamini, A., et al.: Tuning the mechanical behaviour of structural elements by electric fields. Appl. Phys. Lett. 102(22) (2013)


11. Wang, T., Zhang, J., Li, Y., et al.: Electrostatic layer jamming variable stiffness for soft robotics. IEEE/ASME Trans. Mechatron. 24(2), 424–433 (2019)
12. Liu, B., Boggs, S.A., Shaw, M.T.: Electrorheological properties of anisotropically filled elastomers. IEEE Trans. Dielectrics Electric. Insulat. 8(2), 173–181 (2001)
13. Cao, C., Zhao, X.: Tunable stiffness of electrorheological elastomers by designing mesostructures. Appl. Phys. Lett. 103(4) (2013)
14. Bullough, W.A.: Electro-rheological fluids, magneto-rheological suspensions and associated technology. In: Proceedings of the 5th International Conference. World Scientific Publishing Company (1996)
15. Majidi, C., Wood, R.J.: Tunable elastic stiffness with microconfined magnetorheological domains at low magnetic field. Appl. Phys. Lett. 97(16) (2010)
16. Li, Y., Li, J., Tian, T., et al.: A highly adjustable magnetorheological elastomer base isolator for applications of real-time adaptive control. Smart Mater. Struct. 22(9) (2013)
17. Nagase, J.-Y., Wakimoto, S., Satoh, T., et al.: Design of a variable-stiffness robotic hand using pneumatic soft rubber actuators. Smart Mater. Struct. 20(10) (2011)
18. Suzumori, K., Wakimoto, S., Miyoshi, K., et al.: Long bending rubber mechanism combined contracting and extending fluidic actuators. In: IEEE International Conference on Intelligent Robot Systems, pp. 4454–4459
19. Shan, Y., Philen, M., Lotfi, A., et al.: Variable stiffness structures utilizing fluidic flexible matrix composites. J. Intell. Mater. Syst. Struct. 20(4), 443–456 (2009)
20. Correll, N., Önal, Ç.D., Liang, H., Schoenfeld, E., Rus, D. (eds.): Soft autonomous materials—using active elasticity and embedded distributed computation. In: Experimental Robotics: The 12th International Symposium on Experimental Robotics. Springer, Berlin (2014). https://doi.org/10.1007/978-3-642-28572-1_16
21. Wall, V., Deimel, R., Brock, O.: Selective stiffening of soft actuators based on jamming. IEEE Int. Conf. Robot. Automat. 252–257 (2015)
22. Choi, W.H., Kim, S., Lee, D., et al.: Soft, multi-DoF, variable stiffness mechanism using layer jamming for wearable robots. IEEE Robot. Automat. Lett. 4(3), 2539–2546 (2019)
23. Cheng, N., Lobovsky, M., Keating, S., et al.: Design and analysis of a robust, low-cost, highly articulated manipulator enabled by jamming of granular media. In: 2012 IEEE International Conference on Robotics and Automation, pp. 4328–4333 (2012)
24. Wockenfuß, W.R., Weisheit, L., Rieß, S., et al.: Strukturkomponenten mit schaltbarer Steifigkeit für die Soft-Robotik/Structural components with adjustable stiffness for soft robotics. Konstruktion 73(5), 61–66 (2021)
25. Clark, A.B., Rojas, N.: Assessing the performance of variable stiffness continuum structures of large diameter. IEEE RA-L 4(3), 2455–2462 (2019)

Effects of Design Parameters on the Tip Steering Capabilities of Fabric Pneumatic Artificial Muscle-actuated Soft Growing Robots Mario Selvaggio1(B) , Stanislao Grazioso2 , Salvatore Fusco2 , Roberto Sabella2 , Giuseppe Andrea Fontanelli3 , Giuseppe Di Gironimo2 , Bruno Siciliano1 , and Antonio Lanzotti2 1

Department of Electrical Engineering and Information Technology, Università degli Studi di Napoli Federico II, Via Claudio 21, 80125 Napoli, Italy [email protected] 2 Dipartimento di Ingegneria Industriale, Università degli Studi di Napoli Federico II, P.le Tecchio 80, 80125 Napoli, Italy 3 Herobots S.R.L., Traversa Vecchie Fontanelle 6, Castellammare di Stabia, Napoli 80053, Italy

Abstract. Tip steering by induced deformation constitutes one of the most prominent features to effectively navigate constrained environments with soft growing robots. In this work, we analyze the effects of design parameters on the tip steering capabilities of pneumatically-actuated soft growing robots built from fabric. More specifically, we consider the variability of material, fabric Pneumatic Artificial Muscles (fPAM) diameter, and backbone internal pressure and statistically quantify their effect on the maximum curvature achieved by the robot when a constant fPAM input pressure is applied. In the considered settings, we found a statistically significant main effect (p < 0.05) of the fPAM diameter and a relevant interaction effect between this factor and the material. These findings provide useful guidelines for the design of fabric-based PAM-actuated soft growing robots with enhanced tip steering capabilities.

Keywords: Soft robotics · Soft actuators · Design of bioinspired soft robots

1

Introduction

In recent years, the research and development of soft robotics solutions has undergone tremendous growth [8,24]. Due to their inherent compliance, soft robots are able to squeeze and safely absorb impacts that would crash their rigid-body counterparts. These capabilities constitute essential requirements in unstructured, uncertain and constrained environments, where interaction with the (most of the time unknown) surroundings is unavoidable.


Soft growing or vine-inspired robots imitate plant-like growth to change their body length and navigate through confined spaces [18]. Besides being inherently soft, they can navigate without sliding through constrained environments [14], possibly creating structures with their body [7]. Soft growing robots utilize the eversion of a thin-walled tubular structure to grow their body by continuously supplying material to their tip. To date, vine robots have been utilized to design proof-of-concept soft catheters for low-force interactions in constrained surgery [26], millimetre-scale medical devices to detect invasive breast cancers [3] or for endovascular surgery [20], re-configurable and deployable antennas [4], root-like burrowing robots [23], and inspection devices deployed in archaeological sites in South America [9].

Soft growing robots can steer themselves to effectively navigate cluttered and constrained environments by safely exploiting interactions with obstacles [15] or by means of soft actuators embedded along the main robot body (hereafter referred to as the backbone). Several actuation mechanisms have been proposed so far: series pouch motors [13,16], integrated pouches [1], inextensible elements attached to the main robot body [6], pre-formed tubes [26], and tendons [27], which are sometimes utilized in combination with stiffening [11] and shape-locking mechanisms [28]. Among these, fabric Pneumatic Artificial Muscles (fPAM) have been shown to be one of the most promising solutions [22]. They are usually made of a thin, thus foldable and highly conformable, silicone-coated ripstop nylon fabric and behave like a McKibben muscle when pressurized, but have no sheath friction. They have been shown to exhibit a repeatable, near-linear force-contraction relationship, a fast dynamic response and a high fatigue life [22]. Thus, they have been used to effectively steer a soft growing robot through an environment cluttered with obstacles by contracting the sides of the robot’s backbone [25].

The development of model-based control techniques for navigation by growing and steering of vine robots is challenged by the complex behavior they exhibit. Accurate models require a characterization of the soft growing robot to establish the relation between the control inputs and the resulting system dynamic evolution and final configuration [2]. For example, establishing the relation between the fluid input pressure and the eversion rate of the backbone can be useful to avoid overshoot in a lengthening regulation problem. This represents a key issue since retraction is still difficult to carry out due to complicated effects, such as buckling [21]. Solutions to this problem are currently under development [10,19]. To tackle the control problem, most of the prior works rely on simplified kinematic or dynamic models which are usually identified on purpose [5,17]. Moreover, external loads such as gravity or (typically unknown) obstacle interactions modify these relationships, further complicating the matter [29]. However, the effects of the (usually imperfect) design and fabrication parameters on the robot performance have never been quantitatively established and are only partially known to experienced manufacturers. None of the above works has, indeed, experimentally identified and analyzed the combined effect of both


design and external actuation parameters on the motion capabilities of soft growing robots, in particular those related to tip steering. This work carries out the characterization of fPAM-actuated soft growing robots, in terms of finding out the relationships among their design parameters, actuation inputs and the resulting tip steering capabilities of the robot. These are quantified by the maximum deformation the robot is capable of exhibiting when actuated, which constitutes the second most prominent feature for effectively navigating constrained environments, besides growing. In more detail, we establish the relation between the fPAM input pressure and the resulting constant curvature steering deformation of the whole robotic structure and analyze the effects of the fPAM diameter, the backbone and fPAM material, and the backbone stiffness on this metric. In contrast to what is stated in [22], we found that the relationship between the fPAM input pressure and the muscle contraction is similar to what can be observed in McKibben muscles [12], but it additionally depends on the muscle’s diameter. Moreover, the effect of this factor on the tip steering capability of the robot is influenced by the density of the material the robot is fabricated from.

The rest of this paper is structured as follows: in Sect. 2 we describe the soft growing robot prototype; in Sect. 3 we report its characterization procedure; in Sect. 4 we discuss the obtained results; in Sect. 5 we draw conclusions and indicate directions for future research on the topic.

2

Soft Growing Robot Prototype Description

The soft growing robot considered in this work is made up of an everting backbone and two fPAMs glued to diametrically opposite sides. The backbone is inverted such that, when pressurized, it pulls new material out from its tip, causing the robot body to extend by growing (see Fig. 1) [18]. When pressurized, a laterally attached fPAM instead contracts and causes a shortening of that side of the backbone, thus deforming the backbone and steering its tip. Compressed air is typically supplied through a pipe attached to an analog/digital pressure regulator. When in free space, a local constant shortening deformation of the fPAM along the backbone can be observed, thus the whole robot undergoes a constant curvature deformation inducing tip steering (see Fig. 2).

The robot we constructed is made out of a single layer of woven, airtight, silicone-coated, rip-stop nylon fabric, commonly used in camping tents and tarps. The rip-stop pattern is simply a plain weave with thicker, reinforcing strands at regular intervals in both the warp and weft directions. The key to the operation of the presented fPAM is the fabric bias. The fabric is inextensible along the major thread lines, but is fairly elastic along the fabric bias, at a 45◦ angle to these threads (see Fig. 3). This means that a tube of bias-cut fabric will be elastic, while a tube with a straight or cross grain cut will not. As a result, when the tube is pressurized it expands radially while contracting lengthwise, exhibiting a behavior similar to a McKibben muscle [12].


Fig. 1. Timed sequence of pictures showing a soft growing robot extending its body by everting new material from the tip.

However, differently from McKibbens, where the maximum lengthwise contraction depends only upon the angle that the strands of the sheath form with the longitudinal axis, in the considered fabric, where the circumferential expansion is given by the elastic deformation of the material itself, the maximum contraction achievable also depends upon the fPAM cross-section dimensions. Our hypothesis is that a larger circumferential total deformation can be achieved when more material is locally stretched by the internal fluid pressure. In the next section, we characterize the effects of design parameters on the tip steering capability of the considered soft growing robot prototype.

3

Soft Growing Robot Characterization

The main goal of this work is to characterize the effects of the fPAM and backbone material m, the fPAM diameter d, and the internal backbone pressure p (hereafter referred to as factors) on the resulting curvature c = 1/r (see Fig. 4) assumed by the robot (hereafter referred to as the metric) for a fixed value of the fPAM internal fluid pressure. To this end, we performed the experiments described in the next section.


Fig. 2. A fully extended soft growing robot backbone deformed by the contraction of a laterally glued fabric Pneumatic Artificial Muscle (fPAM). Compressed air is supplied through the blue tubing. r = constant curvature steering deformation radius; c = 1/r = constant curvature.

3.1

Experimental Plan

We designed and carried out an experimental campaign considering the variation of the three factors on two pre-defined levels. The numerical values of the two levels of the three factors are given in Table 1 and were opportunely chosen to induce variability in the robot tip steering behavior. A full factorial plan comprising a total of 2³ = 8 experiments was defined. Trials were carried out using the experimental setup explained in detail in the next section. For the considered experiments, four soft growing robots (full length = 700 mm) were prototyped to account for all the possible combinations of materials with different density (Level 1 = 20 den; Level 2 = 40 den) and fPAM diameter (Level 1 = 25 mm; Level 2 = 45 mm). The two levels of backbone pressure (Level 1 = 0.3 bar; Level 2 = 0.5 bar) did not require the fabrication of additional prototypes.

Table 1. Factors' levels used in the conducted experimental campaign.

Factor              Symbol  Units  Level 1  Level 2
Material            m       den    20       40
fPAM diameter       d       mm     25       45
Backbone pressure   p       bar    0.3      0.5
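A minimal sketch of how such a 2³ full factorial plan can be enumerated from the levels in Table 1 is given below (Python, with assumed variable names); the authors' actual planning tool is not specified in the paper.

```python
from itertools import product

# Factor levels as reported in Table 1
levels = {
    "material_den": (20, 40),             # m: fabric density [den]
    "fpam_diameter_mm": (25, 45),         # d: fPAM diameter [mm]
    "backbone_pressure_bar": (0.3, 0.5),  # p: backbone internal pressure [bar]
}

# Full factorial plan: 2^3 = 8 runs, one per combination of levels
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, start=1):
    print(f"run {i}: {run}")
```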


Fig. 3. Drawing of the material elastic deformation as a function of the bias-cut angle α. The maximum elongation ε, for a given internal stress σ*, is observable when the fiber angle is α = 45◦.

Fig. 4. Schematic of the experimental setup.

3.2

Experimental Setup

The experimental setup implemented for the regulation and the measurement of the fluid pressures in the fPAMs and in the backbone is shown in Fig. 4. The soft growing robot prototypes built to carry out the experimental campaign consisted of one backbone and one fPAM glued to it, exploiting symmetry with respect to the backbone center line. A manometer was used to measure the air pressure and the air flow at the input of the backbone. A pressure regulator, instead, was used to keep a constant value of the fluid pressure inside the fPAM (this parameter is, indeed, not varied across the trials). In this way, it is possible to characterize the variation of curvature when the pressure inside the backbone varies. In addition, a calibrated camera was placed in an overhead, looking-down configuration to measure the curvature radius of the robot. This value was retrieved by manually fitting a circle to the centerline of the robot in the acquired image using the Kinovea software (see Fig. 2).
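As an automated alternative to the manual circle fitting described above (and not the procedure actually used, which relied on Kinovea), the following Python sketch fits a circle to digitized centerline points with an algebraic least-squares (Kåsa) fit and returns the curvature c = 1/r; the point data are synthetic placeholders.

```python
import numpy as np

def fit_circle_kasa(x, y):
    """Algebraic (Kåsa) least-squares circle fit to 2D points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in a least-squares sense and
    returns the circle center (xc, yc) and radius r.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    xc, yc = -D / 2.0, -E / 2.0
    r = np.sqrt(xc**2 + yc**2 - F)
    return xc, yc, r

# Synthetic centerline points on a circle of radius ~500 mm (placeholder data)
t = np.linspace(0.2, 1.2, 30)
x = 500 * np.cos(t) + np.random.normal(0, 1.0, t.size)
y = 500 * np.sin(t) + np.random.normal(0, 1.0, t.size)

xc, yc, r = fit_circle_kasa(x, y)
print(f"radius r = {r:.1f} mm, curvature c = 1/r = {1.0 / r:.5f} 1/mm")
```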


3.3


Results

The collected data were analyzed with the aim of finding out which combinations of design and actuation parameters have a statistically significant influence on the performance of the robot (i.e., the tip steering capability, in the considered case). A three-way ANOVA statistical test was used, considering the curvature c of the soft growing robot as the dependent variable and the backbone’s internal pressure p, the fPAM diameter d, and the material m as independent variables. Results were obtained by means of the MATLAB Statistics and Machine Learning Toolbox and are reported in Table 2, which shows the three main and the three interaction effects between the m, d, and p factors. As can be noted, the only factor giving a statistically significant effect (p < 0.05) on the chosen curvature metric is the fPAM diameter d. Thus, we can claim that the fPAM diameter is the factor playing the most relevant role among those considered in the design of tip-steerable soft growing robots. Although not statistically significant (p > 0.05), the interaction between the material m and the fPAM diameter d shows an interesting effect that will be better discussed in the next section.
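The analysis was run in MATLAB; the sketch below shows an equivalent three-way ANOVA with two-factor interactions in Python using statsmodels, on placeholder curvature values (the measured data are not reported here), leaving one residual degree of freedom as in Table 2.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# One curvature measurement per run of the 2^3 plan (placeholder values only)
data = pd.DataFrame({
    "m": [20, 20, 20, 20, 40, 40, 40, 40],          # material density [den]
    "d": [25, 25, 45, 45, 25, 25, 45, 45],          # fPAM diameter [mm]
    "p": [0.3, 0.5, 0.3, 0.5, 0.3, 0.5, 0.3, 0.5],  # backbone pressure [bar]
    "c": [1.9e-3, 1.5e-3, 2.3e-3, 2.0e-3,           # curvature [1/mm], placeholders
          1.4e-3, 1.1e-3, 2.5e-3, 2.2e-3],
})

# Main effects and two-factor interactions; with 8 runs this leaves 1 error df
model = ols("c ~ C(m) + C(d) + C(p) + C(m):C(d) + C(m):C(p) + C(d):C(p)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```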

4

Discussion

The obtained results are plotted in Fig. 5. In the top graphs, it is possible to see that, for a given fPAM diameter d and internal pressure p of the backbone, the less dense material (20 den) achieves greater values of the curvature metric c with respect to the denser material (40 den). This result is expected, since the less dense material is in general less stiff in bending. For a given material m and internal pressure of the backbone p, increasing the diameter of the fPAM allows achieving greater values of the curvature deformation c. As explained above, this result is statistically significant and supports our hypothesis. This is explained by the fact that larger values of the fPAM cross section determine a greater shortening, and thus allow achieving a higher curvature of the backbone.

Table 2. Analysis of variance results.

Source  Df  Sum Sq       Mean Sq      F-Value  p-Value
m       1   1.52296e-08  1.52296e-08  10.54    0.1902
d       1   3.41959e-07  3.41959e-07  236.69   0.0413
p       1   1.43625e-07  1.43625e-07  99.41    0.0636
m*d     1   1.62587e-07  1.62587e-07  112.53   0.0598
m*p     1   3.10859e-08  3.10859e-08  21.52    0.1352
d*p     1   4.22775e-09  4.22775e-09  2.93     0.3368
Error   1   1.44477e-09  1.44477e-09
Total   7   7.00158e-07


Fig. 5. Top graphs: box plots of the main effects of the material (m), fPAM’s diameter (d), and backbone internal pressure (p). Bottom graphs: plots of the interaction effects between m and d.

For a given level of the material m and fPAM diameter d, increasing the internal pressure of the backbone p penalizes the curvature achieved, because higher pressures, in general, correspond to a higher stiffness of the backbone. In this case, since the fPAM internal pressure is kept constant, the actuator exerts a constant force to deform the backbone, and this results in a lower achieved curvature. Although not statistically significant (p = 0.0598), it is worth discussing the interaction effect between m and d, shown in the bottom graphs of Fig. 5. In the case of d = 25 mm, better results in terms of achieved curvature can be seen going from the denser to the less dense material. However, this trend is inverted when d = 45 mm: the 40 den material exhibits a higher value of the curvature. This effect can also be observed when the material is fixed and the fPAM diameter varies. This can be better appreciated by looking at the graphs in Fig. 6, where the inversion of the trend commented on above is clearer. Finally, from the same plots it is possible to see that for the denser material the range of curvature values is wider with respect to the less dense material. This explains the lower variance observed in the plots of Fig. 5 and reflects the fact that the less dense material is less dependent on the internal pressure of the backbone.


Fig. 6. In the graphs, lines represent the robot curved configurations assumed with the two considered materials when fPAM diameter and backbone internal pressure vary. Left: m = 40 den material; Right: m = 20 den material.

5

Conclusions

In this paper, we analyzed the effect of design parameters on the tip steering capabilities of fPAM-actuated soft growing robots. More specifically, we considered the variability of material, fPAM diameter, and backbone internal pressure and statistically quantified the effect on the curvature achieved by the robot when a constant fPAM input pressure is supplied. In the considered setting, we found a statistically significant main effect (p < 0.05) of the fPAM diameter and a relevant interaction effect between this factor and the material. These findings provide useful guidelines for the design of fabric-based PAM-actuated soft growing robots. Indeed, when the objective is to maximize the curvature of the robot, it is convenient to fabricate fPAMs with larger diameters, while the material and the internal backbone pressure can instead be selected according to other criteria, e.g. cost. However, it is worth considering that, in view of the discussed interaction effect between the material m and the fPAM diameter d, when larger fPAM diameters are selected the denser material allows achieving higher curvatures. Although the experiments were conducted in a controlled laboratory setup, other external effects not considered here, such as ground friction and the manual pre-tension applied to the fPAMs during fabrication, may have had a non-negligible influence on the presented results. Our aim is to further investigate and reduce these effects in future studies on this topic.

Acknowledgments. This work was supported by the BIOIC project (Bioinspired soft robotic systems for cognitive production). https://www.bioic.unina.it/.


References

1. Abrar, T., Putzu, F., Ataka, A., Godaba, H., Althoefer, K.: Highly manoeuvrable eversion robot based on fusion of function with structure. In: 2021 IEEE International Conference on Robotics and Automation (ICRA), pp. 12089–12096 (2021). https://doi.org/10.1109/ICRA48506.2021.9561873
2. Ataka, A., Abrar, T., Putzu, F., Godaba, H., Althoefer, K.: Model-based pose control of inflatable eversion robot with variable stiffness. IEEE Rob. Autom. Lett. 5(2), 3398–3405 (2020). https://doi.org/10.1109/LRA.2020.2976326
3. Berthet-Rayne, P., et al.: Mammobot: a miniature steerable soft growing robot for early breast cancer detection. IEEE Rob. Autom. Lett. 6(3), 5056–5063 (2021). https://doi.org/10.1109/LRA.2021.3068676
4. Blumenschein, L.H., Gan, L.T., Fan, J.A., Okamura, A.M., Hawkes, E.W.: A tip-extending soft robot enables reconfigurable and deployable antennas. IEEE Rob. Autom. Lett. 3(2), 949–956 (2018)
5. Blumenschein, L.H., Okamura, A.M., Hawkes, E.W.: Modeling of bioinspired apical extension in a soft robot. In: Mangan, M., Cutkosky, M., Mura, A., Verschure, P.F.M.J., Prescott, T., Lepora, N. (eds.) Living Machines 2017. LNCS (LNAI), vol. 10384, pp. 522–531. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63537-8_45
6. Blumenschein, L.H., Koehler, M., Usevitch, N.S., Hawkes, E.W., Rucker, D.C., Okamura, A.M.: Geometric solutions for general actuator routing on inflated-beam soft growing robots. IEEE Trans. Rob. 38, 1–21 (2021). https://doi.org/10.1109/TRO.2021.3115230
7. Blumenschein, L.H., Usevitch, N.S., Do, B.H., Hawkes, E.W., Okamura, A.M.: Helical actuation on a soft inflated robot body. In: 2018 IEEE International Conference on Soft Robotics (RoboSoft), pp. 245–252 (2018). https://doi.org/10.1109/ROBOSOFT.2018.8404927
8. Campbell, S.: The robotics revolution will be soft: soft robotics proliferate-along with their sources of inspiration. IEEE Pulse 9(3), 19–24 (2018). https://doi.org/10.1109/MPUL.2018.2814240
9. Coad, M.M., et al.: Vine robots: design, teleoperation, and deployment for navigation and exploration. IEEE Rob. Autom. Mag. 27(3), 120–132 (2020). https://doi.org/10.1109/MRA.2019.2947538
10. Coad, M.M., Thomasson, R.P., Blumenschein, L.H., Usevitch, N.S., Hawkes, E.W., Okamura, A.M.: Retraction of soft growing robots without buckling. IEEE Rob. Autom. Lett. 5(2), 2115–2122 (2020). https://doi.org/10.1109/LRA.2020.2970629
11. Exarchos, I., Do, B.H., Stroppa, F., Coad, M.M., Okamura, A.M., Liu, C.K.: Task-specific design optimization and fabrication for inflated-beam soft robots with growable discrete joints. In: IEEE International Conference on Robotics and Automation (2022). https://arxiv.org/abs/2103.04942
12. Gaylord, R.H.: Fluid actuated motor system and stroking device (US Patent 2,844,126, Jul 22, 1958)
13. Greer, J.D., Morimoto, T.K., Okamura, A.M., Hawkes, E.W.: A soft, steerable continuum robot that grows via tip extension. Soft Rob. 6(1), 95–108 (2019)


14. Greer, J.D., Blumenschein, L.H., Alterovitz, R., Hawkes, E.W., Okamura, A.M.: Robust navigation of a soft growing robot by exploiting contact with the environment. Int. J. Rob. Res. 39(14), 1724–1738 (2020). https://doi.org/10.1177/0278364920903774
15. Greer, J.D., Blumenschein, L.H., Okamura, A.M., Hawkes, E.W.: Obstacle-aided navigation of a soft growing robot. In: 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 4165–4172, May 2018. https://doi.org/10.1109/ICRA.2018.8460777
16. Greer, J.D., Morimoto, T.K., Okamura, A.M., Hawkes, E.W.: Series pneumatic artificial muscles (spams) and application to a soft continuum robot. In: 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 5503–5510, May 2017. https://doi.org/10.1109/ICRA.2017.7989648
17. Haggerty, D.A., Banks, M., Curtis, P.C., Mezić, I., Hawkes, E.W.: Modeling, reduction, and control of a helically actuated inertial soft robotic arm via the Koopman operator. ArXiv abs/2011.07939 (2020)
18. Hawkes, E.W., Blumenschein, L.H., Greer, J.D., Okamura, A.M.: A soft robot that navigates its environment through growth. Sci. Rob. 2(8), eaan3028 (2017). https://doi.org/10.1126/scirobotics.aan3028
19. Jeong, S.G., et al.: A tip mount for transporting sensors and tools using soft growing robots. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8781–8788, Oct 2020. https://doi.org/10.1109/IROS45743.2020.9340950
20. Li, M., Obregon, R., Heit, J.J., Norbash, A., Hawkes, E.W., Morimoto, T.K.: Vine catheter for endovascular surgery. IEEE Trans. Med. Rob. Bionics 3(2), 384–391 (2021). https://doi.org/10.1109/TMRB.2021.3069984
21. Liu, Y.P., Wang, C.G., Tan, H.F., Wadee, M.K.: The interactive bending wrinkling behaviour of inflated beams. Proc. Roy. Soc. A: Math. Phys. Eng. Sci. 472(2193), 20160504 (2016). https://doi.org/10.1098/rspa.2016.0504
22. Naclerio, N.D., Hawkes, E.W.: Simple, low-hysteresis, foldable, fabric pneumatic artificial muscle. IEEE Rob. Autom. Lett. 5(2), 3406–3413 (2020). https://doi.org/10.1109/LRA.2020.2976309
23. Naclerio, N.D., et al.: Controlling subterranean forces enables a fast, steerable, burrowing soft robot. Sci. Rob. 6(55), eabe2922 (2021). https://doi.org/10.1126/scirobotics.abe2922
24. Rus, D., Tolley, M.T.: Design, fabrication and control of soft robots. Nature 521(7553), 467–475 (2015)
25. Selvaggio, M., Ramirez, L.A., Naclerio, N.D., Siciliano, B., Hawkes, E.W.: An obstacle-interaction planning method for navigation of actuated vine robots. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 3227–3233, May 2020. https://doi.org/10.1109/ICRA40945.2020.9196587
26. Slade, P., Gruebele, A., Hammond, Z., Raitor, M., Okamura, A.M., Hawkes, E.W.: Design of a soft catheter for low-force and constrained surgery. In: IEEE/RSJ International Conference on Intelligent Robotic Systems, pp. 174–180 (2017)
27. Stroppa, F., Luo, M., Yoshida, K., Coad, M.M., Blumenschein, L.H., Okamura, A.M.: Human interface for teleoperated object manipulation with a soft growing robot. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 726–732, May 2020. https://doi.org/10.1109/ICRA40945.2020.9197094


28. Wang, S., Zhang, R., Haggerty, D.A., Naclerio, N.D., Hawkes, E.W.: A dexterous tip-extending robot with variable-length shape-locking. In: 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9035–9041, May 2020. https://doi.org/10.1109/ICRA40945.2020.9197311
29. Watson, C., Obregon, R., Morimoto, T.K.: Closed-loop position control for growing robots via online Jacobian corrections. IEEE Rob. Autom. Lett. 6(4), 6820–6827 (2021). https://doi.org/10.1109/LRA.2021.3095625

A User–Centered Approach Involving the Clinicians for the Design of Medical Devices: Case Study of a Soft Robotic Exoskeleton for Rehabilitation Teodorico Caporaso1 , Stanislao Grazioso1 , Benedetta M. V. Ostuni1,2(B) , Angela Palomba3 , Giuseppe Di Gironimo1 , Giovanni Iolascon3 , and Antonio Lanzotti1 1

2

Department of Industrial Engineering University of Naples Federico II, P.le Tecchio 80, 80125 Naples, Italy [email protected] Department of Management, Information and Production Engineering University of Bergamo, Via Salvecchio 19, 24129 Bergamo, Italy 3 Department of Medical and Surgical Specialties and Dentistry University of Campania Luigi Vanvitelli, V.le A. Lincoln, 5, 81100 Caserta, Italy

Abstract. This paper proposes a systematic approach for involving clinicians in the design of medical devices, here used for the development of a soft robotic glove for rehabilitation. The approach considers the integration of different methodologies that take into account the emotional information of the clinicians considered as end-users (i.e. Kano-Kansei) and a deep analysis of the needs of both the patients and the clinicians (i.e. house of quality). Based on this user-centered approach, the paper develops different rehabilitation concepts realized through the technique referred to as design of experiments. Finally, the optimal concept is chosen by re-involving the clinicians and using an ANOVA analysis.

Keywords: User-centred design · Product development · Hand rehabilitation · Soft robotics

1

Introduction

In recent years, technological progress has allowed a rapid development of robotic devices in various areas of application. Among these devices, robotic exoskeletons have attracted great attention for supporting humans in industrial, medical and daily living activities. A robotic exoskeleton is defined as a robot attachable to the human body which generates forces assisting the production of desired movements [1]. Robotic exoskeletons have become a popular and efficient technological solution for assisting people that suffer from different pathologies and for enhancing the motion capabilities of healthy individuals. In the context of rehabilitation, these devices can be used for helping people that


suffer from neurological conditions or musculoskeletal diseases (i.e. spinal cord injury, stroke, arthritis, carpal tunnel syndrome). As a matter of fact, assisted rehabilitation using robotic technologies might improve the effectiveness of the treatment compared to conventional therapy [2]. As an example, for the treatment of stroke, electromechanical and robotic devices allow an improvement in daily life activities from 31% to 56%, an improvement of 32% in functionality and an improvement of 46% in muscle strength of the upper limb, compared to conventional therapy [3]. Despite the growing evidence of the efficacy of robotic devices for upper limb rehabilitation, robot-assisted rehabilitation needs to be improved in clinical practice guidelines, in particular in terms of applicability [4]. In this sense, a proper involvement of physicians and medical personnel within the design process itself might enhance the overall product development. Grossman et al. [7] identify several design principles for the human-centered design of wearable robots, as guidelines for the technical development. Young-A et al. [6] use the Kano model to identify key quality features for wearable technologies.

In this paper we propose a systematic approach involving the clinicians since the early phases of medical device development. The method is illustrated through its application to a particular case study, namely the development of a robotic exoskeleton for hand rehabilitation based on soft robotic technologies [8]. Although many previous medical devices have considered an involvement of the users in the design and development phases, there is limited evidence about methodologies for their systematic involvement. The contribution of this paper is in closing this gap, by presenting a specific case study which can be considered as an example and inspiration for future works in the field.

2

Methodology

The methodological approach used in this paper is shown in Fig. 1. The starting point is the study of a design problem that arises from a human need. In this case study, we assess the need to develop an exoskeleton device for hand rehabilitation that is effective and comfortable, so as to ensure the greatest approval of the user. The second phase is composed of a preliminary questionnaire (which includes the Kano questionnaire [10]) and the Quality Function Deployment (QFD) methodology [11] for exploring the user's point of view. The Kano questionnaire allows compiling a list of the main Customer Needs (CNs) directly based on the information provided by the users according to their relationship with the product. Once the main CNs have been identified, they must be translated into Technical Requirements (TRs), which will then be the basis for the development of the concepts. The house of quality, a tool belonging to the QFD methodology, is used to translate CNs into TRs. Through the house of quality, we define the main TRs, i.e. the requirements with the highest technical importance rating. These requirements are then used for the generation of the different concepts using the Design of Experiments (DoE) technique (third phase). The next phase is the optimal concept selection, which re-involves the users through a second questionnaire. For choosing the criteria of


Fig. 1. Flow chart of the human-centred methodological approach.

evaluation of the different concepts, we can use the Kansei engineering approach [12]. Finally, by means of an analysis of variance (ANOVA), the optimal concept has been chosen.

3

User Viewpoint

A preliminary questionnaire was submitted to national and international medical personnel specialized in the field of rehabilitation. Sixteen users were involved: physiatrists (68.75%), physiotherapists (25%) and bioengineers (6.25%). All the interviewees are highly specialized; in fact, around 50% of them have been practicing for more than 10 years. The questionnaire was structured in three different sections: (i) the first section is used to collect demographic data and data on the interaction between the user and the product. It is composed of 29 questions, 13 revolving around the five Ws (i.e. Who, What, Why, Where and When) and another 16 of a general nature on the expected operativeness of the device; (ii) the second section consists of a single open question necessary to collect some key words (i.e. Kansei words) that indicate the main feelings that the product inspires in the user; (iii) the last section consists of 39 questions, divided in groups of three questions (functional, dysfunctional and rating questions) for each of the twelve Kano words. From the first section emerged the main pathologies which, according to the users, can be treated using a robotic glove. The main benefits that the robotic glove should bring to the patient also emerged (Fig. 2). From the second section, the Kansei words were collected to define the criteria for concept evaluation. This step is described in Sect. 5.1. Lastly, the classification of the CNs emerged from the third section (reported in Sect. 3.1).


Fig. 2. Bar charts (values in %) of the main pathologies which can be treated using a robotic glove (on the left: traumatic accidents, stroke, non-specific MSK pain, rheumatoid arthritis, Dupuytren's disease, De Quervain Syndrome, CRPS, Carpal Tunnel Syndrome, neurorehabilitation, spinal cord injuries, nerve injuries) and the main benefits that the robotic glove should bring to the patient (on the right: improvement of muscle activations, recovery/maintenance of RoM, recovery of the deficit of strength, maintenance of muscle lengthening, proprioception improvement, reduction of inflammation, activation of mirror neurons, improved motor control).

3.1

Kano Questionnaire

A preliminary list of possible Customer Needs (CNs) was compiled starting from [6, 7]. We defined the main CNs for a wearable device as: (i) ergonomic; (ii) bio-inspired; (iii) light; (iv) thin; (v) customized; (vi) open on the palm; (vii) controllable; (viii) intuitive; (ix) versatile; (x) portable; (xi) eco-sustainable; (xii) economic. The classification of the CNs that emerged from the Kano questionnaire is reported in Table 1. To classify the CNs from the questionnaire, both functional and dysfunctional questions related to all pre-selected CNs were submitted to all users. In addition, a 10-point ordinal subjective rating scale, with extremes 1 (no relevance) to 10 (maximum relevance), was collected for each CN. These questions, based on the Kano technique, allow the assignment of a numerical value and a grade of evaluation to each CN. We only selected the CNs with the following grades: Must be (M), One dimensional (O) or Attractive (A). Among the CNs with grade A, we only chose those with A ≥ 8, i.e. those classified as Attractive by at least 8 respondents. This threshold was chosen because 8 answers represent the choice made by 50% of the interviewees. Furthermore, the Customer Satisfaction (CS) values confirm the correctness of this choice. The Customer Dissatisfaction (CD) values, instead, confirm the importance of including the Must be and One dimensional Customer Needs. Thus, the selection of the CNs was made as follows: (i) Ergonomic - grade M; (ii) Bio-inspired, Light, Thin and Intuitive - grade O; (iii) Open on the palm, Controllable and Versatile - grade A with A ≥ 8.
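For illustration, the sketch below applies the commonly used Kano satisfaction/dissatisfaction coefficient formulas to the category counts of Table 1; it is a minimal example, not the authors' tooling.

```python
def kano_coefficients(M, O, A, I):
    """Customer Satisfaction (CS) and Dissatisfaction (CD) coefficients
    from Kano category counts (Must-be, One-dimensional, Attractive,
    Indifferent), using the commonly used formulas:
    CS = (A + O) / (A + O + M + I),  CD = -(O + M) / (A + O + M + I)."""
    total = A + O + M + I
    cs = (A + O) / total
    cd = -(O + M) / total
    return round(cs, 2), round(cd, 2)

# Counts for "Ergonomic" from Table 1: M=7, O=6, A=3, I=0 (16 respondents)
print(kano_coefficients(M=7, O=6, A=3, I=0))   # -> (0.56, -0.81)
```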

User–Centred Approach for a Soft Robotic Glove

1231

Table 1. Kano evaluation table, grade of evaluation (i.e. M = Must be; O = One dimensional; A = Attractive; I = Indifferent; R = Reverse and S = Skeptical), mean score, Customer Satisfaction (CS) coefficient and Customer Dissatisfaction (CD) coefficient. For each CNs the higher value is underlined. Customer needs

3.2

M O A I R S Grade Score CS

CD

Ergonomic

7

6

3 0 0 0 M

9.06

0.56 –0.81

Bio-inspired

4

5

3 4 0 0 O

8.63

0.50 –0.56

Light

3

8

3 2 0 0 O

8.69

0.69 –0.69

Thin

3

5

4 4 0 0 O

7.81

0.56 –0.50

Customized

1

4

7 4 0 0 A

8.06

0.69 –0.31

Open on the palm 0

2 10 4 0 0 A

6.81

0.75 –0.13

Controllable

1

6

8 1 0 0 A

8.19

0.88 –0.44

Intuitive

4

6

4 2 0 0 O

8.25

0.63 –0.63

Versatile

1

3

8 4 0 0 A

7.81

0.69 –0.25

Portable

2

3

6 5 0 0 A

8.00

0.56 –0.31

Eco-sustainable

3

5

2 6 0 0 I

7.13

0.44 –0.50

Economic

2

4

6 4 0 0 A

7.69

0.63 –0.38

Quality Function Deployment

The main CNs, identified by the analysis of the Kano questionnaire, were used to build the house of quality shown in Table 2. The house of quality serves to highlight the relationship between the main CNs and the TRs previously defined. The TRs selected from the study of the state of the art and from the interaction with medical personnel are: (i) Range of Motion (RoM) for each joint of the human finger, i.e. MCP, PIP, DIP; (ii) forces, i.e. grasp force, pinch force, bending isometric force and extension isometric force; (iii) sensors, i.e. pressure sensor, flex sensor and electromyographic (EMG) sensor; (iv) dimension and weight. From the house of quality, a classification of the TRs based on the priority rank associated with the single TR emerged. High-priority TRs will be developed by maximizing their value as a function of the ideal value to be achieved. Instead, lower-priority TRs will be developed by not maximizing their value. The selected TRs, can be synthesized into two macro groups and associated with two design factors: (i) actuator and (ii) control mode. The Actuator DF is linked to RoMs, forces, dimension and weight; the Control mode DF is linked to sensors (sensor feedback). This choice was made based on [5]; indeed, these authors state that for a successful design of a wearable robot two main aspects must be considered: the actuation/control and the mechanical/kinematic design of the structure.









9

92 10% 4 Deg. 19–71 19–90 19–90

Light

Thin

Open on the palm

Controllable

Intuitive

Versatile

Technical importance score Importance % Priorities rank Measurement units Min value (Liter/Comp) Max value (Liter/Comp) Target (our value)

5

3

1

4

4

3

3

1

1

Bio-inspired

9

Ergonomic

5

92 10% 4 Deg. 23–87 23- 110 23–110

9

3

1

1

9

92 10% 4 Deg. 10–64 10–70 10–70

9

3

1

1

9

44 5% 9 N 9–11 9–16 9–11

9

3

1

RoM - MCP RoM - PIP RoM - DIP Grasp force

5

Customer Technical requirements → importance Customer requirements ↓ rating

Desired direction of improve

44 5% 9 N 0.8–2 0.8–10.6 0.8–2

9

3

1

Pinch force







58 6% 6 N 239.7–279.3 239.7-463.9 239.7–279.3

9

3

3

1

1

58 6% 6 N 59.9–69.8 59.9–115.9 59.9–69.8

9

3

3

1

1

45 5% 6 Pa 0–4.9 K 0–980.7 K 0–4.9K

3

9

Bending Extension Pressure isometric force isometric force sensor





67 7% 4 Ohms 17.5K-65K 25K-125K 25 K-125 K

3

3

9

1

1

67 7% 4 Hz 10–250 10–5000 10-5000

3

3

9

1

1

Flex sensor EMG





139 15% 2 mm 15 25 15

1

3

9

9

3

9

129 14% 2 g 77 500 77

1

3

9

9

1

9

Dimension Weight



Table 2. Part of the house of quality. “Priorities rank” row highlight the classification of the TRs based on their “Technical importance score” and “Importance %”

1232 T. Caporaso et al.

User–Centred Approach for a Soft Robotic Glove

1233

Table 3. Design factors and their levels selected for the development of concepts. Design factor

Level 1

Level 2

Level 3

A - Actuator

PneuNet

Fiber-reinforced

Textile

B - Control mode Predefined poses

User intention by User intention by flex sensors Electromyography (EMG)

Table 4. Evaluation parameters of the actuators: α - Bending angle; β - Grasp force; γ - Pinch force (single actuator); δ - Average isometric force (single actuator);  - Average height (single actuator); ζ - Average weight (single actuator). Actuator

α [◦ ]

PneuNet

0–200 0–12 0–2

β [N] γ [N] δ [N]  [mm] ζ [g] +0.3 20

Fiber-reinforced 0–350 0–15 0–10 +1 Textile

4

0–150 0–15 0–9

+0.5

42.5

15

44.4

3

11.4

Concepts Generation

The design factors that emerged from the QFD analysis were used to build a full factorial design plane. Three different levels have been chosen for both DFs to allow the generation of concepts with DoE techniques (Table 3). The three levels of the design factor Actuator are: 1) Pneunet design with layer jamming; 2) Fiber-reinforced design with layer jamming; 3) Textile design with layer jamming. Indeed, the three levels of the design factor Control mode are: 1) control through predefined poses; 2) control through user intention via flex sensor; 3) control through user intention via EMG sensor. Table 4 lists some evaluation parameters of the actuators in order to highlight the differences between the three levels of the A design factor. These parameters have been obtained from the literature (α, β and γ - [13–17]) and from preliminary laboratory tests carried out @ ASTRO Lab, the Laboratory for Advancements in SofT RObotics at University of Naples Federico II (δ,  and ζ). The main differences about the three levels of the B design factor, instead, consist in the use of different sensor feedback for different control modes. The use of pressure sensors has been envisaged for each control mode. For the second level of the B factor the use of flex sensors was envisaged and for the third level the use of EMG sensors was envisaged. The full factorial design plane is composed by 32 = 9 different concepts. The nine concepts are shown in Fig. 3. The concepts are: (i) PneuNet actuators (silicone-based actuators) with stiffening capability with a manual control mode; (ii) PneuNet actuators with stiffening capability with a control mode based on user intention prediction via flex sensors; (iii) PneuNet actuators with stiffening capability with a control mode based on user intention prediction via EMG; (iv) Fiber-reinforced actuators (silicone-based actuators) with stiffening capability with a manual control mode;

1234

T. Caporaso et al.

(i) Silicone-based soft ac- (ii) Silicone-based soft ac- (iii) Silicone-based soft tuators with manual con- tuators with user inten- actuators with user intentrol. tion by flex sensors. tion by EMG.

(iv) Fiber-reinforced sili- (v) Fiber-reinforced soft (vi) Fiber-reinforced soft cone soft actuators with actuators with user inten- actuators with user intenmanual control. tion by flex sensors. tion by EMG.

(ix) Textile actuators (vii) Textile soft actuators (viii) Textile with user in- with user intention by with manual control. tention by flex sensors. EMG.

Fig. 3. The nine treatments of the full factorial design.

User–Centred Approach for a Soft Robotic Glove

1235

Table 5. Kansei words based on the affinity diagram. The words that have been repeated several times by users are indicated with an asterisk. Performing

Multi-functional Ergonomic

Supportive

Functional *

Comfortable * Practical

Innovative

Versatile *

Specific

Compliant

Agile *

Efficient

Augmentative

Customizable Interactive *

Feedback *

Sensory *

Adaptable

Rehabilitation *

User - friendly Trendy Non invasive

Independent

Domiciliary

Breathable

Intuitive

Satisfying

Easy to use *

Cosmetically acceptable

(v) Fiber-reinforced actuators with stiffening capability with a control mode based on user intention prediction via flex sensors; (vi) Fiber-reinforced actuators with stiffening capability with a control mode based on user intention prediction via EMG; (vii) Textile actuators (consisting of silicone bladders and layers of fabric binding together to form a pouch) with stiffening capability with a manual control; (viii) Textile actuators with stiffening capability with a control mode based on user intention prediction via flex sensors; (ix) Textile actuators with stiffening capability with a control mode based on user intention prediction via EMG. Each actuator is designed to support two functionalities: (1) motion, to achieve the required RoM. This function allows the hand to be bent according to different bending angles for making different tasks; (2) stiffening, to allow for the retention of an isometric position. This function allows for the stiffening of the structure through the technique of layer jamming. This design is based on a recent patent submitted by the authors [18].

5

Optimal Concept Selection

For the choice of the optimal concept, the clinicians were involved in a second questionnaire. For the second test, a group composed of 43.75% of the clinicians interviewed in the first phase was re–involved. The nine concepts were rated on a scale of 1 to 10 according to the criteria selected in the Sect. 5.1. Then the results were analyzed by ANOVA analysis. 5.1

Kansei Words

Fifty–eight Kansei words were collected from the second section of the Kano questionnaire. Kansei words link the user’s feeling to the product. Six words were discarded as deemed not significant while the fifty-two remaining words were grouped according to their semantic meaning. Lastly, a reduction of words based on the affinity diagram was performed. The affinity diagram is an innovative analysis tool to combine an analytical approach and a creative approach. From this analysis five main groups were identified: (i) performing; (ii) multi– functional; (iii) ergonomic; (iv) user–friendly; (v) trendy. These five main groups

1236

T. Caporaso et al.

Table 6. Mean evaluation of the users interviewed for each proposed concept. The highest value has been highlighted in the table. Concept 11 Mean

12

13

21

22

23

31

32

33

6.07 6.75 6.89 6.00 6.82 7.14 6.71 7.25 7.82

Table 7. Contribution Ratio (Cr%) of the two factors A and B for each criterion calculated using the ANOVA analysis. Criterion → Factor↓

Performing Multi–functional Ergonomic User–friendly Mean

A - Actuator

47.45%

1.25%

49.25%

49.56%

49.46%

B - Control mode

52.55%

98.75%

50.75%

50.44%

50.54%

Factor A

Factor B 8 7.26

7 6.57

6.65

Means

Means

8

7.29 6.94

7 6.26

6

6 1

2 Levels

3

1

2

3

Levels

Fig. 4. Mean values of the factors A and B for each levels.

need to identify the criteria to build questionnaires for choosing the optimal concept. The classification of the main Kansei words is listed in Table 5. Each criterion was associated with a TR: the performing criterion was associated with grasp and pinch forces (Table 4 - parameters β and γ); the multi-functional criterion was associated with the isometric forces (Table 4 - parameter δ); the ergonomic criterion was associated with the dimension and weight (Table 4 parameters  and ζ); the user–friendly criterion was associated with the user interface and the trendy criterion was associated with the outward appearance of the soft exoskeleton glove. Trendy was discarded as not meaningful in this phase of concepts development, than only four criteria was used for the optimal concept selection.

User–Centred Approach for a Soft Robotic Glove

5.2

1237

ANOVA Analysis

As shown in Table 6, the concept that achieves a higher average consensus is concept 33 i.e. the exoskeleton glove composed of textile actuators with user intention by EMG signals (concept N◦ ix in Fig. 3). Table 7 shows the percentages values of the Contribution ratio (Cr%) for the two factors A and B for each criterion and their mean values. The ANOVA Analysis confirms the result obtained, in fact, through the analysis of the Cr we can affirm that both factors are equally decisive (except for factor A evaluated with the multi–functional criterion). Graphs 4 represent the average values of the factors A and B for each level. Both factors are significant, for both factors the most significant level is level 3, therefore it can be concluded that the optimal concept is the named 33.

6

Conclusion

A user–centered design approach involving the clinicians since the beginning of medical devices development is presented in this paper. As an illustrative use case, we present the development of a soft robotic glove with stiffening capabilities for rehabilitation. The approach uses different design methods and leads to the development of multiple concepts organized in a full factorial plane. The phase of selecting the optimal concept, involving again the clinicians in the process, allows to choose a concept that is able to satisfy the functional and technical requirements requested by users. Acknowledgments. The present work is part of the project SAFE-WORKERS (Support in Ability and Function Evaluation for Workers) funded by Fraunhofer Joint Lab IDEAS and INAIL - Direzione regionale della Campania. This work was also supported by the BIOIC project (Bioinspired soft robotic systems for cognitive production) https://www.bioic.unina.it/.

References 1. Pons Jos`e, L.: Wearable robots. John Wiley and Sons, Ltd. (2008) 2. Ren´ee, H., Tunis, B., Ross, M.: Rehabilitation robotics for the upper extremity: review with new directions for orthopaedic disorders. Dis. Rehabil. Assistive Technol. 12(8), 765–771 (2017) 3. Mehrholz, J., H¨ adrich, A., Kugler, J., Pohl, M.: Electromechanical and robotassisted arm training for improving activities of daily living, arm function, and arm muscle strength after stroke. Cochrane Database Syst. Rev. 9, 3–4 (2018) 4. Morone, G., et al.: Systematic review of guidelines to identify recommendations for upper limb robotic rehabilitation after stroke. Eur. J. Phys. Rehabil. Med. 57, 238–245 (2021) 5. Chiri, A., et al.: On the design of ergonomic wearable robotic devices for motion assistance and rehabilitation. In: 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp.6124–6127 (2012)

1238

T. Caporaso et al.

6. Salahuddin, M., Young-A, L.: Identifying key quality features for wearable technology embedded products using the Kano model. Int. J. Clothing Sci. Technol. 33, 93–105 (2020) 7. Contreras-Vidal, J., Huang, H., Grossman, R.: Human-Centered Design of Wearable Neuroprostheses and Exoskeletons. AI Mag. 36(4), 12–22 (2015) 8. Trivedi, D., Rahn, C., Walker, I.: Soft robotics: biological inspiration, state of the art, and future research. Appli. Bionics Biomech. 5, 99–117 (2008) 9. Caporaso, T., Grazioso, S., Di Gironimo, G., Lanzotti, A.: Design of wearables for biosignal acquisition: a user centered approach for concept generation and selection. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 818–826. Springer, Cham (2022). https://doi.org/10.1007/ 978-3-030-91234-5 83 10. Sauerwein, E., Bailom, F., Matzler, K., Hinterhuber, H.H.: The Kano Model: how to delight your customers. Int. Working Semi. Prod. Econ. 1(4), 313–327 (1996) 11. Melemez, K., Di Gironimo, G., Esposito, G., Lanzotti, A.: Concept design in virtual reality of a forestry trailer using a QFD-TRIZ based approach. Turk. J. Agric. For. 37(09), 789–801 (2013) 12. Nagamachi, M.: Kansei engineering: a new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon. 15(1), 3–11 (1995) 13. Zak, A., Sun, Y., Zhang, Q., Chen, X., Chen, H.: An optimum design method of Pneu-Net actuators for trajectory matching utilizing a bending model and GA. Math. Probl. Eng. 2019, 1–12 (2019) 14. Polygerinos, P., et al.: Modeling of soft fiber-reinforced bending actuators. IEEE Trans. Robot. 31(3), 778–789 (2015) 15. Du, Q., Zhao, W., Cui, X., Fei, Y.: Design, control and testing of soft pneumatic rehabilitation Glove. In: 2020 3rd World Conference on Mechanical Engineering and Intelligent Manufacturing (WCMEIM), vol. 31(3), pp. 50–55 (2020) 16. Zhang, X., Oseyemi, A.E.: A herringbone soft pneu-net actuator for enhanced conformal gripping. Robotica 40, 1–16 (2021) 17. Yap, H.K., Sebastian, F., Yeow, C.: Design and characterization of low-cost fabricbased flat pneumatic actuators for soft assistive glove application. In: 2017 International Conference on Rehabilitation Robotics, pp. 1465–1470 (2017) 18. Grazioso, S., et al.: Esoscheletro soft pneumatico con capacit´ a di irrigidimento per la riabilitazione della mano, UIBM n. 102021000023702 (2021) 19. Park, S.: Robust Design and Analysis for Quality Engineering. Springer, New York (1998)

Preliminary Design of a EMG Wearable Interface for the Actuation of Soft Pneumatic Artificial Muscles Teodorico Caporaso1(B) , Stanislao Grazioso1 , Benedetta M. V. Ostuni1,2 , and Antonio Lanzotti1 1

2

Department of Industrial Engineering, University of Naples Federico II, P.le Tecchio 80, 80125 Naples, Italy [email protected] Department of Management, Information and Production Engineering, University of Bergamo, Via Salvecchio 19, 24129 Bergamo, Italy

Abstract. This paper presents the use of surface electromyographic (sEMG) signals for the actuation of soft pneumatic artificial muscles. The idea behind this paper is finding a relationship between a natural muscle and an artificial muscle, do it through an analysis of the sEMG data. We start from the characterization of a specific soft pneumatic artificial muscle and we relate the root mean square value of the sEMG signal to the contraction of the actuator itself. This work might pave the way for the development of intuitive wearable interfaces for the actuation of soft robots. Keywords: Human biosignal · Soft actuators · Wearable technologies · Surface electromyographic sensors

1

Introduction

Wearable sensors have been extensively used in the recent years for enabling a large set of applications [1]. Among these devices, surface electromyographic (sEMG) sensors are of particular interest for controlling robotic systems [2]. As example, sEMG sensors can be used to extract synergies for controlling robotic hand prostheses [3] or to capture the user’s intention for robotic hand exoskeletons [4]. A different application example indeed uses sEMG signals to extract fatigue models that are used for collaborative robotics [5], especially in the industrial field. The use of sEMG data for control of pneumatic artificial muscles (PAM) [6] is actual. Recently, in [7] the authors proposed Modeling-Based sEMG Signal Classifier for the control of a PAM (i.e., McKibben) in human–robot collaboration. In this context, we present the use of sEMG signals for the actuation of PAM. These actuators behave as contractile/extensile elements and are largely used for soft robotics applications [8,9]. The main contribution of the present c The Author(s), under exclusive license to Springer Nature Switzerland AG 2023  S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1239–1246, 2023. https://doi.org/10.1007/978-3-031-15928-2_108

1240

T. Caporaso et al.

work is to show an sEMG–based control of McKibben actuator relating human muscle activity to PAM contraction. The idea is to relate the natural muscle contraction of a human to the contraction of an artificial muscle. The linking point between the duality of the natural and artificial muscle is a biosignal collected on the human body, i.e. the sEMG signal. By stressing on this concept, we expect that sEMG sensors can be used for the development of natural, bioinspired and intuitive wearable interfaces for soft robotics.

2

System Architecture

The system architecture proposed in this paper is illustrated in Fig. 1. A sensor attached to the human body captures biosignals data which are then processed and given as input for an electronic control unit which triggers the activation of valves for the actuation of a soft actuator. The idea behind this system architecture is to relate the activation of the natural muscle, as extracted from a proper sEMG wearable sensor, with the actuation of an artificial muscle (i.e., a soft actuator).

Fig. 1. System architecture.

3

Experiments

In this section we present the experiments that we have carried out to test the idea presented in Sect. 2. In particular, we use an sEMG sensor for the biosignal acquisition from the BITalino Revolution Board Kit (PLUX Wireless Biosignals S.A.) and a McKibben actuator as example of PAM realized in our lab.

EMG Wearable Interface for the Actuation

1241

Fig. 2. Percentage variation of the McKibben actuator length (Δl[%]) as function of the time opening of the solenoid valves (tv ) measured in seconds.

3.1

Prototyping and Characterization of the McKibben Soft Actuator

The McKibben pneumatic artificial muscle consists of an internal flexible tube and an external sleeving, both the tube and the sleeving being connected at their ends. When the compressed air is inflated within the soft actuator, the internal membrane inflates, while when the air is extracted, the membrane deflates. The radial expansion of the internal membrane generates an overall contraction of the actuator due to the action of the external sleeving. This allows exerting pulling forces. Our prototype is made of a soft latex rubber tubing used as an internal membrane and an expandable braided polyester sleeving used as external fiber reinforcement. The length of the prototype (l) at rest position is equal to 105 mm. The prototype of the McKibben actuator was realized @ ASTRO Lab, the Laboratory for Advancements in SofT RObotics at University of Naples Federico II. For the characterization of the McKibben actuator, a coloured marker was fixed at its tip. An high speed camera was used to record the variation of length of the actuator (referred to as Δl). The camera used is a GoPro Black Hero 5 (Woodman Labs, San Mateo, CA, USA), with the following configuration: maximum resolution (2704x2028 pixel), frame rate equal to 30 fps and linear field of c softview. The video analysis was performed using the open source Kinovea ware (Joan Charmant Contrib). We have computed the percentage shortening of the actuator (Δl[%]) corresponding to eight values of opening time of the solenoid valves (referred to as tv ), starting from 0.25 s to 2 s with increment of 0.25 s. The compressed air is inflated at a constant pressure of 0.069 MPa. The results of the characterization are reported in Fig. 2. It shows firstly a linear trend and then a quadratic one in accordance with the literature [10]. The fol-

1242

T. Caporaso et al.

Fig. 3. Electrical circuit diagram of the control board designed with the fritzing software.

lowing experimental study was limited to the linear part of variation of length of the McKibben actuator in order to reduce the risk of damage for the actuator. In particular, we can appreciate a desired linear correlation from 0 to 1 s as: Δl[%] = 16 tv ,

3.2

tv ∈ [0, 1] s

(1)

Electronic Control Unit

The diagram of the electronic control unit is shown in Fig. 3. It includes the following components: (i) an Arduino Mega 2560 microcontroller; (ii) a 5 V, 4 channels relay module; (iii) two 24 V DC solenoid valves (SY114-5LOU-Q SMC); (iv) two 3 ports manifolds (SS3Y1-S41-04-M5-Q SMC); (v) a power supply (AC 100 V - 240 V to DC 24 V); (vi) a breadboard; (vii) some jumpers. The valves are feeded with compressed air generated by a standard compressor (ABAC MONTECARLO L20 compressor 2 HP 50 L, 230 V). This simple setup was realized to execute the laboratory experiments for testing the McKibben actuator. Once that the circuit is powered, the microcontroller is used to open/close the relay channels which are connected with the solenoid valves. The contraction of the McKibben actuator is controlled through the time opening of the solenoid valves.

EMG Wearable Interface for the Actuation

3.3

1243

EMG Data Capturing and Processing

The sEMG wearable sensor used in this work is part of the BITalino Revolution Board Kit (PLUX Wireless Biosignals S.A.), a board for biosignal acquisition equipped with Bluetooth. The main specifications of the sEMG wearable sensor are: sample frequency equal to 1000 Hz, dynamic range ±1.64 mV (with VCC = 3.3 V) and bandwidth 25 Hz–480 Hz. The experiments involved a male volunteer (34 years old) without disability that had not suffered severe injuries at the upper arm. Before the real time acquisition, the Maximum Voluntary Contraction (MVC) for the monitored muscle was collected to normalize the sEMG signals. We have chosen the biceps brachii as muscle. Electrodes have been placed on the line between the medial acromion and the fossa cubit at 1/3 from the fossa cubit, as recommended by SENIAM [11] for reducing placement effects on the signal quality. Always according to the SENIAM project, to collect the MVC, we have executed a specific exercise with the sEMG sensor already placed on the muscle. Then, we have collected the sEMG data corresponding to four different isometric contraction of the biceps brachii with a different level of intensity (i.e., low, moderate, high, maximum) controlled by the volunteer. The sEMG data processing involves the following steps: (i) rectification; (ii) filtering; (iii) normalization; (iv) root mean square (RMS) assessment [12]. For the filtering operation, we have performed a smoothing of the signal through a moving average filter with a time constant of 15 ms and then a filtering using a Butterworth low–pass filter of 4th order with a cut–off frequency 50 Hz. sEMG signals were then normalized with respect to the user’s MVC. In addition, the RMS values were assessed in a fixed interval time of 1 s. Finally, we have computed the RMS and defined it within the interval [0,1]. 3.4

Results

Given the characterization of the McKibben soft actuator in Sect. 3.1 and the RMS values of the sEMG data of Sect. 3.3, to correlate the shortening of the actuator with the sEMG signal, we simply use the relationship RM S = tv ,

tv ∈ [0, 1] s

(2)

The RMS value is therefore used to trigger the activation of the electronic solenoid valves, just for the range 0 to 1 s. Table 1 reports the synthetic results in terms of: RMS of sEMG obtained from the sEMG normalization, Δl obtained using Eq. 1, as tv are related to RMS of sEMG through (2). The final expression which relates RMS of sEMG with shortening of the soft actuator, valid only for tv ∈ [0, 1] s, is given by the combination of (1) and (2) as Δl[%] = 16 RM S

(3)

Figure 4 shows the results of the experimental trials corresponding to the four isometric contraction of the biceps brachii with a different level of intensity (i.e., low, moderate, high, maximum) controlled by the volunteer. For each

1244

T. Caporaso et al.

Fig. 4. sEMG data processed (on the left) and artificial and natural muscle contraction (on the right). From the top to the bottom are reported respectively the results related of the intensity contraction low, moderate, high and maximum. The black line on the left represents the RMS of sEMG for each level of contraction.

of the four different intensities of contraction, we show the processed sEMG normalized signal on the left and the corresponding artificial and natural muscle contractions on the right. Results reported in Fig. 4 and Table 1 demonstrate the feasibility and effectiveness of the proposed approach. Delay times (i.e., about 2 s) between real muscle contraction and the PAM contraction are consistent with the application in biomedical and industrial setting.

EMG Wearable Interface for the Actuation

1245

Table 1. Results of the experiments for each level of contraction in terms of percentage sEMG RMS (RM S[%]) and percentage shortening of the soft actuator (Δl[%]). Contraction level RM S[%] Δl[%] Low

38

6.1

Moderate

62

9.9

High

80

12.8

100

16.0

Maximum

4

Conclusions

In this paper we have proposed a system architecture for the actuation of soft pneumatic artificial muscles based on the signal extracted from a sEMG sensor placed on a human natural muscle. We have tested the architecture with a real experimental setup, which includes a commercial sEMG wearable sensor and an ad–hoc realized McKibben actuator. We have found a model which correlates the contraction of the McKibben actuator with the RMS of sEMG signal. This simple application might leverage the development of intuitive and bioinspired wearable interfaces for the actuation of soft robots. Future works will focus on the study of the relation between the McKibben actuator stiffness and human muscle stiffening. Acknowledgments. This work was supported by the BIOIC project (Bioinspired soft robotic systems for cognitive production) https://www.bioic.unina.it/.

References 1. Caporaso, T., Grazioso, S., Panariello, D., Di Gironimo, G., Lanzotti, A.: Understanding the human motor control for user-centered design of custom wearable systems: case studies in sports, industry, rehabilitation. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 753–764. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4 64 2. Bi, L., Guan, C., et al.: A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration. Biomed. Signal Process. Contr. 51, 113–127 (2019) 3. Della Santina, C., Piazza, C., Grioli, G., Catalano, M.G., Bicchi, A.: Toward dexterous manipulation with augmented adaptive synergies: The Pisa/IIT softhand 2. IEEE Trans. Rob. 34(5), 1141–1156 (2018) 4. Sierotowicz, M., et al.: EMG-driven machine learning control of a soft glove for grasping assistance and rehabilitation. IEEE Robot. Autom. Lett. 7(2), 1566–1573 (2022) 5. Lorenzini, M., Kim, W., De Momi, E., Ajoudani, A.: A new overloading fatigue model for ergonomic risk assessment with application to human-robot collaboration. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 1962–1968. IEEE (2019)

1246

T. Caporaso et al.

6. Daerden, F., Lefeber, D., et al.: Pneumatic artificial muscles: actuators for robotics and automation. Eur. J. Mech. Environ. Eng. 47(1), 11–21 (2002) 7. Antonelli, M.G., Beomonte Zobel, P., Durante, F., Zeer, M.: Modeling-based EMG signal (MBES) classifier for robotic remote-control purposes. In: Actuators. vol. 11, p. 65. MDPI (2022) 8. Guan, Q., Sun, J., Liu, Y., Wereley, N.M., Leng, J.: Novel bending and helical extensile/contractile pneumatic artificial muscles inspired by elephant trunk. Soft Rob. 7(5), 597–614 (2020) 9. Grazioso, S., Tedesco, A., Selvaggio, M., Debei, S., Chiodini, S.: Towards the development of a cyber-physical measurement system (CPMS): case study of a bioinspired soft growing robot for remote measurement and monitoring applications. ACTA IMEKO 10(2), 104–110 (2021) 10. Payne, C.J., et al.: An implantable extracardiac soft robotic device for the failing heart: mechanical coupling and synchronization. Soft Rob. 4(3), 241–250 (2017) 11. Merletti, R., Rau, G., Hagg, C.G.: Seniam project. www.seniam.org/ Accessed Mar 29 2022 12. Grazioso, S., et al.: Assessment of upper limb muscle synergies for industrial overhead tasks: a preliminary study. In: 2019 II Workshop on Metrology for Industry 4.0 and IoT (MetroInd4. 0&IoT). pp. 89–92. IEEE (2019)

Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications Paolo Bellitti1(B) , Teodorico Caporaso2 , Stanislao Grazioso2 Antonio Lanzotti2 , Emilio Sardini1 , and Mauro Serpelloni1

,

1 Department of Information Engineering, University of Brescia, 25123 Brescia, Italy

[email protected] 2 Fraunhofer JL IDEAS, Department of Industrial Engineering, University of Naples

Federico II, 80125 Naples, Italy

Abstract. The paper focuses on a preliminary study of an easy-to-customize capacitive soft sensor to measure forces that can enable soft robot features like sensitive skins or permits dexterous object manipulation thanks to the perception of the grasping force. The prototype has been realized overlapping five different layers choose among commercial and easy to find materials. The sensor is completely composed by customized or self-produced parts. The stack definition involves compatibility test to define the correct combination of layers and adhesives. An evaluation of the behavior has been performed applying weights in the range [20–5800] g finding a mean sensitivity of 0.143 pF/kg over an initial value C0 of 3.151 pF. The sensor prototype showed good performance in term of sensitivity and hysteresis in the defined application range. Dielectric viscoelastic phenomena and decreasing repeatability have been observed in the upper part of the measuring range. The sensor proposed shows promising characteristics encouraging future developments. Keywords: Capacitive force sensor · Conductive inks · Customizable force sensor

1 Introduction Soft robotics involves the development of mechanical systems that take inspiration from nature and biology trying to mimic skills and strategies developed by living beings through evolution and natural selection. In soft robotics, movements [1, 2], sensing [3] and adaptation [4] mechanisms of the animal world are studied and replicated, to take advantage for robot development. An important theme in soft robots development concerns the characteristic defined context awareness [5, 6], that is, the ability of having the perception of the surrounding environmental parameters. The greater the awareness, the greater can be the robotic system autonomy in carrying out its tasks. This capability derives from the robot sensing skills obtained exploiting sensors. Soft sensor development is a promising field of research. The need for integration in soft robots establishes that the sensors must have adequate flexibility characteristics in addition to the classic © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1247–1255, 2023. https://doi.org/10.1007/978-3-031-15928-2_109

1248

P. Bellitti et al.

metrological characteristics such as accuracy, repeatability. In soft robotics applications, soft force sensors enable robot features like the development of a sensitive skin or enable dexterous object manipulation [6] thanks to the perception of the grasping force. In this paper, a capacitive force sensor for soft robotic applications is shown and a preliminary analysis of the behavior is reported.

2 Capacitive Force Sensors Design 2.1 Parallel-Plate Capacitor Capacitive sensors enable the transduction of physical quantities in a change of the sensor capacitance from the nominal value. The most classic form of capacitor is made up by two conductive parallel plates separated by an insulating element called a dielectric [8] (Fig. 1).

Fig. 1. Diagram of the parallel plates capacitor showing main elements and geometrical parameters

The characteristic parameters of the parallel plate capacitor determine the capacitance (C) according to the following formula (1): C=

ε·A d

(1)

where: – – – –

C represents the capacitance E is the dielectric permittivity A is the plates area d is the dielectric gap thickness.

The overall capacitance of the element is therefore dependent on both the geometric parameters and the permittivity of the material contained within the gap between the plates. Capacitive transducers can measure different quantities thanks to the variation of these parameters, for example capacitive humidity sensors use a hygroscopic dielectric, that absorb moisture and cause a change in dielectric permittivity [9]. Capacitive force sensors, as in this case, base their operation on the variation of parameter d. A compressible dielectric is chosen so that an external force acting on the plates can cause them to

Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications

1249

approach each other. Compression causes a reduction in the distance between the plates and a consequent increase in capacity. In the proposed solution, the capacitive force sensor is developed ad-hoc by overlapping layers of different materials. The use of easily available materials and the realization method make it potentially easy to customize in terms of shape, size and measurement range. The description of the layers used is reported in Fig. 2.

Fig. 2. Stack diagram of the soft capacitive force sensor.

The lower layers (Fabric and Polyurethane) represent the sensor substrate while the three upper layers (Plate #1, Dielectric and Plate #2) represent the force sensitive element. The lower layer is realized from a 404-Revolutio-NAL® technical fabric produced by Carvico composed by 71% polyamide and 29% elastane. These choices allow good adaptability to the different surfaces and shapes on which the sensor could be adapted. The next layer is formed by polyurethane, and it is necessary to permits the fabric functionalization. The presence of the polyurethane layer makes the fabric waterproof and not allow the next layer to be absorbed into the fabric fibers. 2.2 Plate #1 The layer named Plate #1 is the first capacitor plate. To meet the requirements of simplicity of customization and realization, it is made depositing a conductive ink on the polyurethane layer. In this way, it is possible to obtain any shape and size simply by “painting” the desired shape with the ink. To evaluate the performance and compatibility of some of the inks available on the market, tests were made on fabric samples as shown in Fig. 3. In this phase, the samples are obtained depositing manually a certain quantity of conductive ink on small fabric pieces.

Fig. 3. Plate #1 realization: Conductive inks sample on polyurethane layer.

1250

P. Bellitti et al.

The conductive ink considered for the realization of the first capacitor plate are: – Dupont PE874: stretchable silver-based conductive ink – Creative materials 128–30 stretchable silver-based ink – Dupont PE671: stretchable carbon-based ink All the selected inks need a thermal curing process to make solvents evaporating and to make the silver or carbon microparticles melting together in order to create conductive paths. Dupont PE874 and DUPONT PE671 samples were placed in a thermostatic oven for 30 min at 130 °C, Creative Materials 128–30 sample was placed at 170 °C for 30 min. After the curing process the resistance across the sample has been measured to evaluate the conductivity: DUPONT PE874 shows a resistance of 1.1  over a 30 × 15 mm area, Creative Materials 128–30 shows a resistance of 0.5  over a 20 × 15 mm area and DUPONT PE671 shows a resistance of 1.3 k  over a 25 × 10 mm area. A cross hatch test was performed for all samples and good adhesion with the polyurethane layer is observed. The less conductive sample (DUPONT 671) was excluded from further test. 2.3 Dielectric Layer Dielectric layer is the insulator placed between the two plates that permits electrical and physical separation and contributes to define the overall sensor capacitance. The dielectric shore must enable a thickness variation of the sensor in the force range [0–57] N. In addition, the insulator layer must show good adhesion with the plates. Dielectric layer is obtained with silicone produced by Zhermac Dental. The choose product is the Elite Double 16 Fast characterized by a Shore hardness of 16 and a declared elastic recovery of 99.95%. This silicone is a bicomponent product: when the two parts are mixed together the mixture start polymerizing reaching the final hardness in about 30 min. In Fig. 4 is reported the dielectric layer realization. The polymerizing process take place inside a custom designed 3D-printed mold. The mold size is 100 × 100 × mm.

Fig. 4. 3D printed mold for silicon polymerization process.

Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications

1251

The pre-mixed silicon mixture is poured inside the mold, after the polymerization time the silicone sheet was extracted, and then cut into portions of the desired size. The silicone elements were bonded on the plate #1 with a LOCTITE 406 adhesive specifically designed for plastics and rubbers helped with a polyolefin primer specific for promoting adhesion on difficult-to-bond plastics (LOCTITE 770). The bonded process was satisfying for both the DUPONT 874 and Creative Material 128–30 sample. After the bonding process the Creative Material 128–30 resistivity increases to some megaohm. Due to the incompatibility with the adhesive, this conductive past was excluded from further tests. 2.4 Plate #2 The second plate is realized bonding a conductive fabric (Shieldex® Nora Dell) on the silicon layer with the same combination of LOCTITE 406 + LOCTITE 770 adhesive and primer used in paragraph 2.4. The bonding between the silicon layer and the fabric is strong enough to guarantee good adhesion between the layers. This step concludes the first part in which the material compatibility is verified, and the stack is defined.

3 Sensor Prototype Fabrication The second phase regards the sensor prototype fabrication process. Twelve different sensors have been realized overlapping the layers described in the previous paragraph. The process starts with a big rectangular fabric piece (27 × 18 cm) with polyurethane layer. The polyurethane layer was applied with a professional heat press for t-shirts. The size of the first plate, in this series of prototype is 14 × 22 mm with a little extension for electrical connection. The geometrical size has been defined with kapton tape (Fig. 5a), the DUONT PE874 conductive ink has been sprayed with an airbrush (Fig. 5b) and then cured in a thermostatic oven for 30 min at 130 °C (Fig. 5c).

Fig. 5. Realization of the first conductive plate: (a) masking with kapton tape, (b) spraying conductive ink with airbrush, (c) plates after the curing process.

The 3-mm dielectric layer composed by the pre-polymerized Elite Double 16 Fast silicon was cut in rectangular pieces whose size are: 15 × 23 mm and bonded to the first

1252

P. Bellitti et al.

plate using the primer and adhesive combination (Fig. 6a). Then the last layer composed by the conductive fabric is bonded with the same adhesive (Fig. 6b).

Fig. 6. Last two steps of the sensor prototype realization: (a) silicon and (b) fabric bonding.

4 Preliminary Experimental Tests The tests on the sensors are currently in progress, in this section are presented the preliminary results obtained analyzing data retrieved from repeated tests on a single sensor. The experimental setup is showed in Fig. 7. The selected sensor was connected to a HP4194A Impedance/Gain-Phase Analyzer set to measure the sensor impedance while the sensor has been loaded with different weights. The sensor capacitance is then calculated from the impedance at 100 kHz. A mechanical structure has been realized to uniformly distribute the force on the sensor surface and the sensor has been placed on a scale to verify the load.

Fig. 7. Experimental setup used to load the sensor and measure the capacitance.

Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications

1253

The test protocol is here explained: starting from the sensor unload, the weight has been increased in steps of about 500 g from 20 g to 5800 g. The first weight value is about 20 g (weight of the first part of the mechanical structure), the second weight value is 660 g (weight of the full mechanical structure), and the last weight is 5800 g (scale full scale). The weight path has been made 5 times: 3 times for increasing weights and 2 times for decreasing weight values. 4.1 Preliminary Experimental Results The first data series represented in Fig. 8 is obtained for weight increasing values from 20 g to 5800 g.

Fig. 8. Capacitance vs. weight graph of the first set of data for increasing weights.

Analyzing data there is a clear increasing trend in the capacitance as the weight applied rises. Between the first (20 g) and the second (688 g) weight values there is a considerable capacitance increase from 3.151 pF to 3.654 pF, this gap is greater than all the following steps. A more detailed characterization is needed in this range to evaluate the sensor behavior. In the range [1000–5500] g the mean sensitivity is 0.143 pF/kg. Between 4500 g and 5000 g a considerable reduction in sensitivity can be observed (0.024 pF/kg), this behavior could be associated to viscoelastic property of the silicone. In Fig. 9 is reported the capacitance versus weight graph for increasing (#1) and decreasing (#2des) weights. The same consideration about the two phenomena observed in the previous graph (Fig. 8) can also be found in the decreasing dataset. It can be observed qualitatively that no relevant hysteresis behavior appears.

1254

P. Bellitti et al.

Fig. 9. Capacitance vs. weight graph for increasing (#1) and decreasing (#2des) weights.

Finally, all the 5 datasets retrieved are analyzed in term of mean values and standard deviation (Fig. 10). The results show a significant spread about the mean value. The spread appears growing as the weight increase.

Fig. 10. Capacitance vs. weight graph for the 5 datasets express in terms of mean values and standard deviations.

5 Conclusions and Future Works An easy-to-customize capacitive force sensor has been realized overlapping different structural and functional layers. The compatibility of the different layers has been preliminarily analyzed both in terms of application suitability and compatibility with the adhesives. The final stack defined is composed by 404-Revolutio-NAL® technical fabric, polyurethane layer, Dupont PE874, Double 16 Fast silicone by Zhermac Dental

Preliminary Study of a Capacitive Force Sensor for Soft Robotic Applications

1255

and Dell’s Shieldex® Nora conductive fabric. A preliminary analysis was performed to observe the capacitance dependence by the pressure applied on the sensor, obtaining an average sensitivity of 0.143 pF/kg in the range [1000–5500] g. Repeated tests were performed for the qualitative evaluation of hysteresis and repeatability. Although further studies are needed to deepen the viscoelastic phenomena observed and better characterize the lower range [0–1000] g, the prototype is promising for use in soft robot applications like sensitive-skins or soft grippers. Further studies on different sensor prototypes will be performed to assess the repeatability of the production process. Stiffness and deformation test will also be performed.

References 1. Wu, Q., et al.: A novel underwater bipedal walking soft robot bio-inspired by the coconut octopus. Bioinspir. Biomim. 16, 46007 (2021) 2. Liao, B., et al.: Soft rod-climbing robot inspired by winding locomotion of snake. Soft Robot. 7(4), 500–511 (2020) 3. Zhang, R., Shen, Z., Zhong, H., Tan, J., Hu, Y., Wang, Z.: A cephalopod-inspired soft-robotic siphon for thrust vectoring and flow rate regulation. Soft Robot. 8(4), 416–431 (2021) 4. Malley, M., Haghighat, B., Houe, L., Nagpal, R.: Eciton robotica: design and algorithms for an adaptive self-assembling soft robot collective. In: Proceedings IEEE International Conference on Robotics and Automation, May 2020, pp. 4565–4571 (2020) 5. Liu, H., Wang, L.: Collision-free human-robot collaboration based on context awareness. Robot. Comput. Integr. Manuf. 67, 101997 (2021) 6. Liu, K., et al.: Artificial sensitive skin for robotics based on electrical impedance tomography. Adv. Intell. Syst. 2(4), 1900161 (2020) 7. Yang, W.-T., Kuang, Z., Wang, C., Tomizuka, M.: Development of Soft Tactile Sensor for Force Measurement and Position Detection (2021) 8. Grove, T.T., Masters, M.F., Miers, R.E.: Determining dielectric constants using a parallel plate capacitor. Cit. Am. J. Phys. 73, 52 (2005) 9. Chaudhary, P., Maurya, D.K., Yadav, S., Pandey, A., Tripathi, R.K., Yadav, B.C.: Ultrafast responsive humidity sensor based on roasted gram derived carbon quantum dots: experimental and theoretical study. Sens. Actuat. B Chem. 329, 129116 (2021)

Conformity Rate Estimation for Shaft-Hole Pattern Fit Not Compliant with the Boundary Condition Design Criterion Mattia Maltauro1(B)

, Roberto Meneghello1

, and Gianmaria Concheri2

1 Department of Management and Engineering, University of Padova, Stradella San Nicola 3,

36100 Vicenza, Italy [email protected] 2 Department of Civil, Environmental and Architectural Engineering, Laboratory of Design Methods and Tools in Industrial Engineering, University of Padova, Via Venezia 1, 35131 Padova, Italy

Abstract. Shaft-hole pattern fits based on the Boundary Condition design criterion allows a 100% acceptability rate, but they may be not economically convenient. If the rejection rate needs to be statistically quantified and the pattern is itself the alignment feature, therefore promoted as datum feature (Intrinsic datum system), there is no trivial solution to create a tolerance stack-up: a unique assembly function cannot be determined. The focus of this contribution is “2x” patterns: different methodologies to create tolerance stack-up assessing assemblability are discussed and verified through Monte Carlo simulation. An equation to transform the variability seen from the Intrinsic datum system to the one seen from an external arbitrary reference system is given. The mutual distance between any two elements of an “nx” pattern is discussed and the implication of multiplicity and datum system is highlighted. A case, derived from an industrial case study, will be discussed by comparing the result from the simulated manual and automated assembly. A path towards “nx” patterns generalization is also presented. Keywords: Tolerancing · Boundary Condition · Virtual condition · Tolerance analysis · Rejection rate

1 Introduction Shaft-hole pattern fits are widely used in mechanical assemblies. They can be used for many different purposes such as bolting of plates or flanges, accurate alignment with dowel pins, etc. The Boundary Condition design criterion is a simple tool, used in the design phase, for assigning the tolerance zones that satisfy the worst-case, therefore allowing a 100% acceptability rate; the method is described in appendix B “Formulas for positional tolerancing” of ASME Y14.5-2018 [1]. Throughout this contribution, the Boundary Condition is given by the collective effect of the Feature of Size (FoS) at its Maximum Material Size (MMS) and the location tolerance; its value shall be used to determine clearance between parts and/or to determine gage sizes [1]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1256–1267, 2023. https://doi.org/10.1007/978-3-031-15928-2_110

Conformity Rate Estimation for Shaft-Hole Pattern Fit Not Compliant

1257

The Worst Case is unlikely to occur if the Cp is equal to or greater than one. Therefore, tolerance allocation based on the statistical approach may be convenient from an economical point of view. Another reason to deviate from the Boundary Condition design criterion is that using the Worst Case approach a certain clearance is very likely to be present. Wobbling and/or vibrations may occur during operations, therefore decreasing the perceived quality by the customer [2]. For these reasons, it may be required to choose a tighter nominal fit regardless of location and size tolerances. In all these cases, the result is a fit that does not comply with the Boundary Condition design criterion, implying a certain amount of scraps: the acceptability rate, or other metrics, must therefore be statistically quantified. When the Maximum Material Requirement (MMR) or Least Material Requirement (LMR) [3–5] modifier is used in the geometric specification of the pattern, the possible establishment of a relationship between location and size tolerances for each FoS adds further complexity but gives, as an advantage, the reduction of non-conformal, but still functional, parts [6]. When a pattern of fits is located with reference to an External datum system (see Fig. 1a), that functionally guarantees the alignment of the mating parts, the assembly equation is trivial and a tolerance stack-up analysis to compute the statistical metrics for each fit of the pattern can be performed. Indeed, each fit is independent of one another. Fischer [7] presents a simple mono-dimensional model to perform a tolerance stack-up that can be used also in case of material requirements (MMR or LMR). Commercial CAT (Computer Aided Tolerancing) software, e.g., CETOL 6σ™, 3DCS™, etc., can be used when the material requirement is applied to a FoS. However, when the pattern is the datum feature, that is when the pattern is itself the alignment geometry (see Fig. 1b), each feature is located with respect to the best fit of the pattern, which can be considered as the centroid of the actual pattern. Therefore, the actual contacts depend on the actual geometry of both mating parts. Consequently, it is not possible to know in advance in which of the single fits the contact will occur. Multiple assembly configurations are possible, and different assembly functions should be studied. The creation of a tolerance stack-up and the quantification of statistical metrics is not trivial since a unique, explicit, or implicit, assembly function is not definable. Scholz [8], discusses the riveting problem where two holes need to match, and a pin needs to enter both holes simultaneously (floating fastener). By assuming negligible hole and pin size variability, and trueposition part alignment, he shows that the position mismatch increases on the order of log(n). Considering “true alignment” in fact coincide with adopting an external datum system. Scholz states that the relative adjustment between the parts, that arise when the “best alignment” is considered, adds “dependency complications”. 1.1 Aim of the Paper The present contribution focuses primarily on the “2x” pattern, assessing different possible methodologies that can be applied to create a tolerance analysis when the pattern of FoS is itself the alignment feature (i.e., datum feature). The differences and implications of the two different datum systems will be discussed and a general procedure to express

1258

M. Maltauro et al.

the variability seen from the centroid in an external arbitrary cartesian reference frame will be presented. An analytical procedure to estimate the rejection rate, based on Root Sum of Squares (RSS) tolerance analysis, will be developed and presented. The procedure will be compared to Monte Carlo Simulation. A path towards generalization to “nx” pattern will also be discussed. Finally, a case derived from an actual industrial study case will be presented and discussed: a pattern of two shaft-hole fits. The holes are punched into sheet metal; the shafts protrude from a plastic injection moulded part. In the assembly, the alignment is given by the fit itself. Two configurations will be studied: one assuming a manual assembly (Intrinsic datum), the second considering an automated assembly, and therefore the location tolerances given by the robotic arm (External datum). It is noteworthy to highlight that the proposed method is relevant in the (functional) design phase: it gives the designer the possibility to simulate the impact of his design intent regardless of subsequent manufacturing decisions.

2 Materials and Methods 2.1 The Updated Linear Stack-Up Model The model presented by Fischer [7], implemented in Excel, is used as a starting point for tolerance stack-ups. Several add-ons have been implemented to the model to increase efficiency and flexibility; an integrated graphical representation of the result has also been developed. The updated worksheet internally computes a new nominal value and symmetric tolerance limits when asymmetric limits are assigned. It also allows to explicitly assign, in each row, the coverage factor k, used to convert the tolerance limit in terms of standard deviation. The worksheet has been modified to integrate the possibility to set the sensitivity coefficient (|s|) using an adaptation of the formulas described by Cox [9, 10]. The output mean and variance are found with the following equations: n |si | · μi (1) μOUT = i=1

2 = σOUT

n i=1

(|si |)2 · σi2

(2)

It must be noted that for the computation of the mean of the output (μOUT ), each input mean (μi ) should be used with their positive or negative value according to the convention found in [7], allowing considering the absolute value of the sensitivity. Once the output mean and variance is known, the rejection rate can be easily found by integrating the Gaussian probability density function:    LSL 1 (x − μOUT )2  exp −0.5 r= ∂x 2 σOUT 2 −∞ 2π σOUT    +∞ 1 (x − μOUT )2  exp −0.5 ∂x (3) + 2 σOUT USL 2π σ2 OUT


where LSL and USL are the lower and upper specification limits respectively.

2.2 General Hypothesis

The general hypotheses that are used throughout the entire work are the following:
• tolpos refers to the location tolerance: the associated bilateral tolerance is half its value (±tolpos/2).
• The coverage factor k is assumed equal to 3 for all input variables, i.e., σ = tol/3.
• The location variability is known from the datum system used in the specification.
• All the sizes are non-correlated.
• Feature locations are non-correlated if seen from an External datum system; correlated if seen from the Intrinsic datum system.
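To make the stack-up model concrete, the following minimal Python sketch implements Eqs. (1)–(3) under the hypotheses above (normally distributed inputs, coverage factor k = 3). The contributor values in the example are illustrative placeholders, not the values used in the paper's worksheet.

```python
import math

def rss_stackup(contributors):
    """contributors: list of (signed mean, bilateral tolerance, |sensitivity|, k)."""
    mu_out = sum(s * mu for mu, tol, s, k in contributors)               # Eq. (1)
    var_out = sum((s * tol / k) ** 2 for mu, tol, s, k in contributors)  # Eq. (2)
    return mu_out, math.sqrt(var_out)

def rejection_rate(mu, sigma, lsl=None, usl=None):
    """Gaussian tail areas outside [LSL, USL], Eq. (3)."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    r = 0.0
    if lsl is not None:
        r += phi((lsl - mu) / sigma)
    if usl is not None:
        r += 1.0 - phi((usl - mu) / sigma)
    return r

# Illustrative one-dimensional gap stack: hole size, shaft size and a location term.
stack = [(+10.0, 0.25, 1.0, 3.0),   # hole size (positive contributor)
         (-9.8, 0.25, 1.0, 3.0),    # shaft size (negative contributor)
         (0.0, 0.35, 1.0, 3.0)]     # location mismatch
mu, sigma = rss_stackup(stack)
print(mu, sigma, rejection_rate(mu, sigma, lsl=0.0))  # rejection = P(gap < 0)
```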

3 External vs. Intrinsic Datum System

When the pattern of FoS is located with reference to an external, independent, and completely defined datum system, see Fig. 1a, regardless of the multiplicity, each element is not correlated to the others. As such, the actual location of each element does not imply any limitation in the position of the others: the covariance is null and the variance is the same for each element.

Fig. 1. Different geometric specifications for a spacer with two holes: a) External Datum System; b) Intrinsic Datum System.

In case of a pattern with multiple features, considering the i-th and the j-th elements, the mutual distance variance (σL²) can be expressed as follows:

$$\sigma_L^2 = \mathrm{var}(L) = \mathrm{var}(x_j - x_i) = \sigma_j^2 + \sigma_i^2 - 2\sigma_{ij} = 2\sigma^2 \tag{4}$$

Regardless of pattern multiplicity, the actual mutual distance between any two axes (i.e., derived features) of the fits, from a statistical point of view, can therefore be written as:

$$L = L_{nom} \pm \sqrt{2} \cdot \frac{tol_{pos}}{2} \tag{5}$$


When the pattern is specified with the CZ indication and is promoted as a datum feature, see Fig. 1b, the datum system origin can be assumed as the centroid of the pattern. In this reference system, the centroid is fixed, as such xG = 0 and σG = 0. For an “nx” pattern the centroid variance can be expressed as follows:

$$\sigma_G^2 = 0 = \sum_{i=1}^{n} \frac{\sigma_i^2}{n^2} + 2 \sum_{i>j} \frac{\sigma_{ij}}{n^2} \tag{6}$$

From the last equation, assuming the same variance for all the pattern elements and the same covariance for each pair of elements, the covariance can be found:

$$\sigma_{ij} = -\frac{1}{n-1}\,\sigma_i^2 \tag{7}$$

Using this result, the variance for the mutual distance of any pair of an “nx” linear pattern can be found:

$$\sigma_L^2 = 2\,\frac{n}{n-1}\,\sigma_i^2 \tag{8}$$

It is noteworthy that for an infinite pattern the result coincides with the one found for an External datum system; this means that the covariance decreases when the number of elements increases. At the infinite limit there is no more correlation and dependency. The mutual distance between two fits located from the centroid can be written as:

$$L = L_{nom} \pm \sqrt{\frac{2n}{n-1}} \cdot \frac{tol_{pos}}{2} \tag{9}$$

It can be noted that the use of the Intrinsic datum system has an impact on the variability of the mutual distance between two elements of the pattern.

3.1 Datum System Transformation

To perform a Monte Carlo simulation when dealing with an Intrinsic datum system, because of the dependency between the reference system and the actual pattern situation, it is not possible to directly sample the location of the pattern. It is necessary to base the sampling on an independent reference system. If σ is the variability (expressed as standard deviation) seen from the centroid reference system and σ̃ is the variability seen from any external fixed reference system, the following relation can be found:

$$\frac{\tilde{\sigma}}{\sigma} = \sqrt{\frac{n}{n-1}} \tag{10}$$

To prove Eq. (10), a Monte Carlo simulation has been performed for “2x” and “3x” patterns. Given a fixed value for the standard deviation seen from the external reference system, 500,000 samples have been simulated. For each sample, the pattern location from the centroid is computed and the statistical metrics are found and compared to the ones seen from the External datum system, see Table 1 and Table 2. The result of the simulation confirms the transformation formula: the error is well below 1% in both cases. Skewness and kurtosis, not displayed, both confirm that in both cases the distribution seen from the centroid tends to normality as expected.
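As a hedged illustration of this verification, the short Python sketch below samples feature locations independently in an external frame and re-expresses them from the pattern centroid; the random seed, sample size and nominal locations mirror the order of magnitude used in the paper but are otherwise placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_tilde = 0.067          # standard deviation seen from the external reference system
n_samples = 500_000

for n, nominal in ((2, [0.0, 80.0]), (3, [0.0, 80.0, 160.0])):
    x = rng.normal(loc=nominal, scale=sigma_tilde, size=(n_samples, n))
    centroid = x.mean(axis=1, keepdims=True)
    x_from_centroid = x - centroid               # locations seen from the centroid
    sigma = x_from_centroid.std(axis=0).mean()   # variability seen from the centroid
    print(n, sigma_tilde / sigma, np.sqrt(n / (n - 1)))   # experimental vs. theoretical ratio
```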


Table 1. Monte Carlo simulation results for 2x pattern.
External reference system: X1 = 0.000, X2 = 80.000; σ̃1 = 0.067, σ̃2 = 0.067
Intrinsic reference system: x1 = −40.000, x2 = 40.000; σ1 = 0.047, σ2 = 0.047
Comparison: theoretical √(n/(n−1)) = 1.414; experimental σ̃/σ = 1.416

Table 2. Monte Carlo simulation results for 3x pattern.
External reference system: X1 = 0.000, X2 = 80.000, X3 = 160.000; σ̃1 = σ̃2 = σ̃3 = 0.067
Intrinsic reference system: x1 = −80.000, x2 = 0.000, x3 = 80.000; σ1 = σ2 = σ3 = 0.054
Comparison: theoretical √(n/(n−1)) = 1.225; experimental σ̃/σ = 1.225

4 2x Patterns Stack-Up

In this section different methodologies to create a tolerance stack-up when the reference system is based on the centroid are presented. The geometric specification for the shaft can be seen in Fig. 2, the specification for the holes in Fig. 1b. The Boundary Condition for holes is 9.9 − 0.5 = 9.4 mm, while for shafts it is 10 + 1 = 11 mm; therefore no 100% fit is allowed, and the resulting rejection rate must be estimated.

Fig. 2. Geometric specification for a two-shaft cap

4.1 Stack-Up by Cases

For a “2x” pattern, one element can be considered as master and the gap can be evaluated on the second one. In this case, two different assembly equations can be found, and for each one, a unilateral gap can be analysed, see Fig. 3.


Fig. 3. Stack-up scheme

With the first case, it is possible to compute the rejection rate for overshoot, the second case gives the rejection rate for undershoot. The overall rejection rate is the sum of the two (Fig. 4).

Fig. 4. External and Internal gap distribution and rejection rate, stack-up by cases

A generalized gap, able to consider both internal and external gaps simultaneously, can be defined by applying a dummy upper limit equal to twice the mean gap; the rejection rate and other statistical metrics can then be directly found in the worksheet, see Fig. 5.

Fig. 5. Generalized Gap distribution and Rejection Rate, stack-up by cases

This method, generalized gap excluded, is a straightforward application of the model as described by Fischer [7] and, although consistent, does not use a rigorous definition of the datum system. In Sects. 4.2, 4.3, and 4.4 alternative methods, tested by the Authors, are proposed.


4.2 Symmetric Stack-Up

Considering that the pattern is symmetric with respect to the datum system, the tolerance stack-up can be centered, considering the datum systems of the mating parts aligned. Consequently, one single fit can be studied. Two distinct unilateral gaps can be defined, see Fig. 6a.

Fig. 6. Stack-up scheme for all symmetric cases: a) Internal and external gaps for the symmetric stack-up; b) Elements used in the symmetric adjusted stack-up.

As for the previous case, a generalized gap can be defined and statistical metrics computed, see Fig. 7.

Fig. 7. Generalized Gap distribution and Rejection Rate, symmetric stack-up

The rejection rate is higher than the one found with the previous method. In this case, the true-position alignment is considered and the sizes of the two FoS are correlated.

4.3 Symmetric Adjusted Stack-Up

To overcome the limitation highlighted in Sect. 4.2, an adjustment to the model can be implemented. Instead of studying only half of the pattern, the whole pattern is considered. The distance from the datum system of the second FoS is correlated to the first one; consequently, it cannot be added: the half distance must be duplicated instead, using a sensitivity of 2. Four gaps are now to be considered, but they are symmetrical two by two. By considering the gaps given by the difference Lmax − lmax (Sum of the external gaps)


and dmin − Dmin (Sum of the internal gaps), it is possible to study, as for the previous cases, two unilateral gaps, see Fig. 6b. A generalized gap can also be defined by adding a dummy upper limit as in the previous cases, see Fig. 8.

Fig. 8. Generalized Gap distribution and Rejection Rate, symmetric adjusted stack-up

In this case, the rejection rate is equal to the one found with the first method: it can represent the non-linearities due to the contact points. Moreover, since it starts from the datum system, the material condition can be added.

4.4 Symmetric Optimized Stack-Up

The model as described can be further optimized since the TED (Theoretically Exact Dimension, i.e., nominal dimension) contribution can be neglected, as it does not give any contribution to the gap, and the FoS contribution can be added once by using a sensitivity equal to 1 and scaling the tolerance by a factor 1/√2 that adjusts the contribution to account for the independence between the two element sizes. However, this does not allow considering the material conditions.

4.5 Monte Carlo Verification

To verify the result of the proposed method a Monte Carlo simulation is used. The standard deviation associated with the localization of the FoS is “corrected” with the relation given by Eq. (10); a total number of 500,000 combinations for the eight variables are initiated and both the external and internal gaps are calculated. If both gaps are positive the assembly is accepted. The variability seen from the centroid and its normality are checked. Over twenty re-computations of the simulation, the average rejection rate is 20.15% with a standard deviation of 0.06% against an analytical value of 20.16%. The analytical model is therefore considered verified.
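A hedged sketch of this Monte Carlo acceptance check, written for a one-dimensional “2x” pattern, is shown below. All the size and position tolerance values are illustrative placeholders (they are not the limits of Fig. 1b and Fig. 2), and the acceptance test is expressed as the overlap of the admissible relative shifts of the two fits, which is equivalent to requiring both the external and internal gaps to be positive.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, k = 500_000, 2, 3.0
L = 80.0                                    # nominal centre distance (illustrative)
corr = np.sqrt(n / (n - 1))                 # Eq. (10): intrinsic -> external frame

def sizes(nominal, tol):                    # diameters, sigma = tol / k
    return rng.normal(nominal, tol / k, size=(N, n))

def locations(tol_pos, sigma_corr):         # axis positions, sigma = (tol_pos / 2) / k
    s = sigma_corr * (tol_pos / 2.0) / k
    return np.column_stack([rng.normal(0.0, s, N), rng.normal(L, s, N)])

D, d = sizes(9.9, 0.20), sizes(9.7, 0.20)   # hole and shaft diameters (illustrative)
H = locations(0.5, corr)                    # hole axes, Intrinsic datum, corrected
P = locations(1.0, corr)                    # shaft axes, Intrinsic datum, corrected

clearance = (D - d) / 2.0                   # per-fit radial clearance
lo = (H - P) - clearance                    # admissible relative shift, lower bounds
hi = (H - P) + clearance                    # admissible relative shift, upper bounds
accepted = lo.max(axis=1) <= hi.min(axis=1) # a common shift exists for both fits
print("estimated rejection rate:", 1.0 - accepted.mean())
```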


5 Path Towards Generalization

To generalize the proposed method to an “nx” linear pattern it is first possible to define n − 1 independent pairs of FoS in the pattern. For each independent pair, the method can be applied considering the location variability seen from the centroid of the pair, which can be found equal to half of the variability of the mutual distance. The rejection rate (ri) for any pair can be determined, and the probability that the assembly is successful (P = P(i) = 1 − ri) as well. If each pair is considered stochastically independent, P(j|i) = P(j) ∀ j, i, the probability that both the j-th and the i-th pairs can be assembled is P(j ∩ i) = P(j)P(i) = P². It can be easily derived that for an “nx” pattern the probability to assemble the pattern, according to the given hypothesis, is Pn = P^(n−1). The rejection rate accordingly becomes rn = 1 − [1 − r]^(n−1). Through Monte Carlo simulation, this equation was not verified: the probability of assembling a further pair of the pattern is less than for the previous ones, since the relative adjustment between the mating FoS depends on the actual situation of the previous pairs. A more general equation for the rejection rate is proposed:

$$r_n = 1 - [1 - \beta \cdot r]^{\,n-1} \tag{11}$$

where β ≥ 1 is a coefficient that needs further investigation to determine which parameters it depends on. The formalization of the method for an “nx” pattern may be used also for rectangular and circular patterns. In the first case, a rectangular pattern (p × q = n) may be decomposed into two “nx” patterns along the two principal directions. An “nx” circular pattern may be considered as a linear “nx” pattern if a curvilinear coordinate is used.
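As a quick numeric illustration of Eq. (11), the snippet below evaluates the generalized rejection rate for a few pattern multiplicities; β = 1 is used only as the stochastic-independence lower bound, since the paper leaves the actual value of β open.

```python
pair_rejection = 0.2016   # single-pair rejection rate from Sect. 4.5
beta = 1.0                # assumed lower bound; the paper indicates beta >= 1
for n in (2, 3, 4, 5):
    r_n = 1.0 - (1.0 - beta * pair_rejection) ** (n - 1)
    print(n, round(r_n, 4))   # n = 2 reproduces the single-pair value
```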

6 The Case Study

The geometric specification for the case study, derived from an actual industrial case, can be seen in Fig. 9. The dimensions and tolerances have been modified ensuring the same general proportions. Using the method proposed in Sect. 4.3, the estimated rejection rate under “best alignment” (manual assembly) is 24.16%. To simulate the automated assembly, first, the model presented in Sect. 4.2 should be used to exclude the possible adjustment between parts during assembly. Second, two additional terms should be added: the first on the sheet metal part, to describe the variability between the centroid of the pattern and the alignment features used by the robotic arm (not in the specification); the second on the cap side, to describe the robotic arm precision.


Fig. 9. Geometric specification for the case study

In the stack-up, the first term is assumed ±0.3 mm, the second one ±0.1 mm. The estimated rejection rate now becomes 32.72%. The difference (8.56%) represents the fraction of parts that cannot be assembled by the specific assembly process that has been chosen. This value may be lowered by changing the automated assembly process, considering its associated costs. It is noteworthy to highlight that the automated assembly adds another source of uncertainty to the stack-up. The functional specification for pattern fits should describe the assemblability as a necessary requirement for the mating parts. As a result of the case study, the assemblability of the pattern fit depends on the actual assembly process. For this reason, it is important to distinguish functional and manufacturing specifications: the latter should consider each step of the production (assembly included) and eventually tighten the tolerance values. The manufacturing specification should be used to tune the process since it transforms the functional specification assigning tighter limits based on the whole manufacturing chain; the functional specification still represents the non-negotiable boundary that, when exceeded, does not guarantee functionality.

7 Conclusions

The aim of this contribution was to define a method to assess the rejection rate for a pattern of fits not compliant with the Boundary Condition design criterion, with a particular focus on the case in which the pattern is itself the alignment feature. The 2× case was deeply investigated and a method based on RSS was proposed and verified through a Monte Carlo simulation. If an Intrinsic datum system is used, dependency among the locations of the features arises; Eq. (7) can be used to find the statistical covariance. Equation (10) has been presented and verified to perform Monte Carlo simulation when the Intrinsic datum system is used in the pattern specification. The use of a dummy upper limit to study a single statistical bell for the two gaps that a fit creates has been presented and the result was verified through Monte Carlo simulation.


A case, derived from an industrial case study, was presented and resolved: it was possible to simulate the difference between manual and automated assembly. Equation (11) may represent the analytical generalization to an “nx” linear pattern but further investigations are needed to fully define the β parameter; a generalization for a rectangular pattern should also be investigated. The analytical definition of a pattern fit may be used to define a new kind of kinematic constraint to be used in stack-up analyses. The current practice consists of defining a primary and a secondary fit to define the kinematic constraint between mating parts. It can also be used for the formal computation of “nx” dowel pins, where more than two pins with greater clearance behave as two tighter pins.

References
1. ASME: ASME Y14.5-2018 - Dimensioning and Tolerancing, New York (2019)
2. Bastiaan, J.M., Green, E., Kaye, S.: Preliminary study of perceived vibration quality for human hands. SAE Int. J. Adv. Curr. Pract. Mobil. 1, 1741–1754 (2019). https://doi.org/10.4271/2019-01-1522
3. ISO International Organization for Standardization: ISO 2692:2021 - Geometrical Product Specifications (GPS). Geometrical Tolerancing. Maximum Material Requirement (MMR), Least Material Requirement (LMR) and Reciprocity Requirement (RPR), Geneva (2021)
4. Pierre, L., Anselmetti, B., Anwer, N.: On the usage of least material requirement for functional tolerancing. In: Procedia CIRP, pp. 179–184 (2018). https://doi.org/10.1016/j.procir.2018.04.068
5. Anselmetti, B., Pierre, L.: Complementary writing of maximum and least material requirements, with an extension to complex surfaces. In: Procedia CIRP, pp. 220–225 (2016). https://doi.org/10.1016/j.procir.2016.02.153
6. Markiewicz, M., Bachtiak-Radka, E., Dudzińska, S., Grochała, D.: Statistical process control using LMC/MMC modifiers and multidimensional control charts. In: Hamrol, A., Grabowska, M., Maletic, D., Woll, R. (eds.) MANUFACTURING 2019. LNME, pp. 244–253. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17269-5_18
7. Fischer, B.R.: Mechanical Tolerance Stackup and Analysis. CRC Press (2004). https://doi.org/10.1201/9780203021194
8. Scholz, F.: Hole Alignment Tolerance Stacking Issues. Seattle (1999)
9. Cox, N.D.: Volume 11: How to Perform Statistical Tolerance Analysis. American Society for Quality Control, Milwaukee (1986)
10. Cox, N.D.: Tolerance analysis by computer. J. Qual. Technol. 11, 80–87 (1979). https://doi.org/10.1080/00224065.1979.11980884

A Case Study on the Correlation Between Functional and Manufacturing Specifications for a Large Injection Moulded Part

Mattia Maltauro1(B), Roberto Meneghello1, Gianmaria Concheri2, Daniele Pellegrini3, Massimo Viero4, and Giuseppe Bisognin4

1 Department of Management and Engineering, University of Padova, Stradella San Nicola 3, 36100 Vicenza, Italy
[email protected]
2 Department of Civil, Environmental and Architectural Engineering, Laboratory of Design Methods and Tools in Industrial Engineering, University of Padova, Via Venezia 1, 35131 Padova, Italy
3 Department of Industrial Engineering, University of Padova, Via Venezia 1, 35131 Padova, Italy
4 Electrolux Appliances S.P.A., Corso Lino Zanussi 24, 33080 Porcia, PN, Italy

Abstract. Large parts produced by injection moulding are usually subjected to large deformations that may be reduced during assembly. The manufacturing specification of the single parts should refer to the as-produced (free) state. On the other hand, the functional specification, derived from the assembly functional specification, should address the “as assembled” state. Geometrical inspection based on the functional specification requires dedicated fixtures to simulate the “as assembled” state. This contribution suggests a procedure, based on FEM simulation, to correlate the geometric specification at the “as assembled” state with the “as produced” (free) state, applied to an industrial case study. The results of the procedure are free-state tolerance limits, i.e., a manufacturing specification, that allow conformity of the part to the functional specification once assembled. The part may then be inspected fixtureless, based on the manufacturing specification, during mass production. The result of the case study shows a significant reduction in position and orientation error due to the assembly process, as was expected. Keywords: Geometrical Product Specification · Tolerancing · Deformable assemblies · Compliant assemblies · FEM simulation

1 Introduction

Large parts are subject to large deformations when produced by injection moulding, but these deformations may be significantly reduced during assembly. Therefore, the functional specifications of the assembly should not be used to assess the conformity of the single sub-assembly parts, whose manufacturing specifications should refer to a free state condition (see ISO 10579 [1]).


The management of the relations among functional and manufacturing specifications is still an issue in industry. The different documents (functional, manufacturing and verification specifications) establish a multi-pole structure, driven by hierarchical relations, therefore imposing a rigorous correlation (see ISO/TS 21619:2018 [2]). For rigid parts and assemblies, the correlation may be achieved by tolerance stack-up analysis; an example may be given by the tolerance transfer method [3]. Finite Element Method (FEM) simulation-based approaches have been presented in the literature when dealing with deformable bodies. Many different approaches can be found in the literature for sheet metal parts that are widely used in the automotive and aerospace sectors. Sellem and Rivière [4] proposed a mechanical approach based on the computation of influence coefficient matrices for tolerance computation of welded, bolted, glued or riveted sheet metal parts. Liu and Hu [5] present the offset finite element model as a mono-dimensional strategy to predict the assembly variability when plates are spot welded. Liu, Hu and Woo [6] investigated the difference between “series” and “parallel” assembly for deformable plates: the parallel assembly allows a smaller assembly variability compared to the variability of the single parts. A further development [7] presents the Method of Influence Coefficients, in which a sensitivity matrix linking the spring-back of the assembly to the free-state condition is calculated and used to determine the “as assembled” configuration. This methodology has been further developed to integrate shape defects and contact surfaces [8, 9]. A review of these methods can also be found in [10]. Stockinger et al. [11] presented an approach to combine tolerance zones with elastic deformations in tolerance stack-ups for sheet metal parts. The approach was validated with experimental results and compared to a commercial solution that integrates 3DCS™ (Dimensional Control Systems®) and the CATIA V5™ workbench TAA™ (Tolerance Analysis of deformable Assemblies) (Dassault Systèmes®), which analyses the deviations impressed by the assembly processes based on FEM analysis. Radvar-Esfahlan and Tahan [12] developed the Generalized Numerical Inspection Fixture (GNIF), which allows performing freeform surface inspection on thin-walled non-rigid parts without using fixtures. The part deformation is considered isometric; as such, the geodesic distance between two internal points remains the same, allowing the determination of corresponding points between the CAD model and the free-form dataset. This methodology was further developed by improving the definition of the boundary conditions [13] and automated [14]. Raynaud et al. [15] proposed a methodology for performing virtual measurement in a constrained state for a thin-walled plastic component. Most of the contributions found in the literature deal with sheet metal and/or thin-walled parts. The methodologies are optimized for tolerance analysis or inspection purposes. For quality control, the methodologies are used as a post-processing operation during the measuring protocol: the analysis needs to be repeated for each acquisition. Our aim is to suggest a procedure to correlate the geometric specification at the “as assembled” state with the “as produced” (free) state, therefore performing the activity at the start-up of production and inspecting the part at free state during mass production. The presented procedure is based on FEM simulation and uses the datum system and geometric tolerance definitions given by the ISO-GPS (Geometrical Product Specification) standards.


Furthermore, the procedure is intrinsically deterministic, resulting in a straightforward possibility to simulate the result of each configuration without losing the effect of the statistical covariance of the input parameters.

1.1 The Case Study

The present contribution aims to describe the preliminary activities that were performed to correlate the functional and the manufacturing specifications for an industrial case study. The assembly consists of three main parts: an inner core that will be considered as a rigid body and two outer deformable shells. The focus is on the two large external parts that are produced by plastic injection moulding. The single parts, at the end of the manufacturing process, show a deformation that is not compliant with the functional specification. The assembly process takes place through vibration welding, impressing a stable configuration to both parts. Consequently, part of the deformation that is present in the as-produced state is reduced. For this reason, one of the main problems is the assessment of tolerance limits in free state conditions coherent with the functional requirements. The full activity has been preliminarily performed on one of the two moulded parts. The first hypothesis is that during the welding process the deformation is impressed only in the welding plane; the other portions of the structure are free to deform accordingly. The second hypothesis is that at the end of the process the welding plane is perfectly planar, which is not realistic since the elasticity of both mating parts allows further deformation (spring back) after the welding is completed: the result is the best-case scenario, i.e., the maximum possible reduction.

2 Materials and Methods

During a preliminary phase of the study, one single part was acquired using an articulated scan arm with a laser probe. The resulting mesh (STL file) was compared with the nominal CAD model to determine the most deformed areas and the overall range of deviations. The part, at free state, is first acquired using a 3D articulated arm. A discrete point cloud is obtained for the datum features and the welding plane. The anti-deviations of the welding plane (i.e., the opposite value of the normal deviation), representing the constraints impressed by the assembly process, are then used as input for a FEM simulation. These constraints are applied to the nominal geometry, assessing the “virtual deformation” of nominal features when an actual assembly deformation is imposed. The result of the simulation is then added to the free-state deformation to simulate the constrained state of the part, see Fig. 1.


Fig. 1. Methodology workflow: the inputs are the “CAD Model”, the “Free state shell” and the “Constrained State Functional Specification Limits”; the output is the definition of the “Free state Manufacturing Specification Limits”. Once defined, these limits are used in the “Manufacturing verification” to check for non-conformities.

2.1 Free State Acquisition

A total of nine parts, from two different pre-production batches (5 + 4), are acquired using a 3D articulated arm (FARO® ScanArm 2) with a touching probe controlled through Autodesk® PowerInspect®. The datum features, namely two cylinders [A-B] (i.e., bearing seats), one plane [C] (i.e., external bearing shoulder), and one point [D] (i.e., one point in one of the slotted holes used for alignment during the welding), are acquired and used for the alignment of the CAD model and the functional datum system definition [A–B|C|D]. The welding plane is acquired through 24 discrete points, Fig. 2, that are sampled five times per part. To guarantee that the same points were acquired each time, a template was used. The normal deviation of each point is the average among the five repetitions.


Fig. 2. Measuring points for the welding plane

The actual Cartesian coordinates, based on the functional reference system [A-B|C|D], are exported in CSV format. A gage R&R (crossed) study is performed in Minitab® to assess the repeatability of the measuring process for the welding plane deviations. Each point in each part is considered as a different item; the operator term is not assigned.

2.2 Numerical Simulation

The FEM simulation is performed in SolidWorks® (Dassault Systèmes SolidWorks Corporation) using the “SolidWorks® Simulation” plug-in. The software allows the application of boundary conditions only along edges or surfaces of the CAD body. There are no external forces applied to the body. The displacements in the welding plane are locked all over the perimeter of the part and the displacements normal to the welding plane are applied on small circular areas (Ø 8 mm) based on the free-state measure. The body is meshed with linear tetrahedral solid elements with a minimum dimension of 2 mm and a maximum dimension of 10 mm. The analysis is launched and the coordinates of the mesh nodes of the edges of the datum features [A], [B] and [C], in the deformed state, are exported and elaborated in Rhinoceros® 6 (Robert McNeel & Associates) to extrapolate the derived geometry in a format compatible with the free-state measure.

2.3 Superimposition of the Effects

If small deviations from nominal and linear constitutive relations are considered, the superposition principle can be used to find the deviation from nominal of the real geometry in the constrained state by summing the two contributions.


The free-state acquisition assesses the deviations of the real part from the nominal geometry due to the manufacturing process: injection moulding. The simulation outputs represent the deviations of an ideal part (nominal geometry) after the application of the assembly constraints quantified on a real part. The result is a discrete cloud of points simulating the deviations of the part in the constrained, as assembled, state.

2.4 Approach Validation

To validate the overall approach, two “simulated” welding plane deformations, from which the resulting deformation of the axis [A–B] is known, were tested, Fig. 3. The first configuration (a) is an axisymmetric deformation that, according to the hypothesis, should not influence the final orientation of the axis [A–B]. The second configuration (b) is an angular rotation of the welding plane; in this case, a rigid rotation of the axis [A–B] in the opposite direction is expected.

Fig. 3. The case studies used to validate the approach: a) Axisymmetric deformation; b) Rigid rotation.

2.5 Comparison

To compare and correlate the result of the free-state measure with the simulated constrained state it is necessary to obtain a comparable metric between the two configurations. It is chosen to use orientation and position errors expressed in mm. The orientation error is the distance between two planes parallel to the nominal one, or, in other words, perpendicular to the primary datum [A–B], that strictly contain the measured points.


The position error is the distance between two planes parallel and symmetrical to the nominal one that strictly contain the acquired data. The definition of the errors is given according to ISO 1101:2017 [16]. For the free-state measure, the welding plane deviations are exported in the direction parallel to the primary datum [A–B]: the orientation error coincides with the range of the deviations; the position error coincides with double the maximum deviation in absolute value, Fig. 4. The constrained state is known from a reference system centered in the welding plane: the deviations of the welding plane are null. It is necessary to change the reference system to express the state of the welding plane from the functional datum system. The orientation error is found as the product between the diametral dimension and the tangent of the angle of the axis [A–B] with respect to the welding plane (α). The position error is found accordingly, considering the deviation of the plane [C], Fig. 5.
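A minimal Python sketch of these two error interpretations is given below; the deviation values, the tilt angle and the diametral dimension are illustrative placeholders, not measured data.

```python
import numpy as np

# Free state: normal deviations of a few welding-plane points
# (placeholder values; the paper samples 24 points per part).
dev_free = np.array([0.40, 0.55, 0.10, -0.05, 0.62, 0.30])

orientation_free = dev_free.max() - dev_free.min()   # range of the deviations
position_free = 2.0 * np.abs(dev_free).max()         # twice the max absolute deviation

# Constrained state: orientation error from the tilt of the [A-B] axis with respect
# to the welding plane (angle alpha) and the diametral dimension of the plane.
alpha_deg = 0.05        # simulated tilt angle [deg], assumed
dia = 300.0             # diametral dimension [mm], assumed
orientation_constrained = dia * np.tan(np.radians(alpha_deg))

print(orientation_free, position_free, orientation_constrained)
```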

Fig. 4. Position and orientation error interpretation at free state

Fig. 5. Position and orientation error interpretation at constrained state


3 Results and Discussion

The gage R&R study showed that the measuring procedure has a repeatability one order of magnitude smaller than the normal deviation range. Therefore, it can be considered sufficient to evaluate the deformation of the welding plane given its deviation range. The approach validation shows a residual error, for the axis [A–B] orientation, of 7.1 · 10^-4 [deg] for the first configuration tested, and 8.1 · 10^-3 [deg] for the second configuration. Although the residual error is negligible in both cases, it can be interpreted as due to the imperfect application of the boundary condition on the welding plane; furthermore, in the first case, the part stiffness is non-axisymmetric. The approach is successfully validated. The two batches are analysed separately. The results are presented as the ratio between the actual value and the maximum value. One sample of the first batch (4 parts) shows a different trend compared to the others; this sample is considered an outlier, see Fig. 6. The remaining three samples show a maximum normal deviation of 0.724, a minimum deviation of 0.061, a range of 0.663, and an average deviation of 0.505. The second batch (5 parts) shows a maximum deviation of 1.000, a minimum deviation of −0.250, a range of 1.250, and an average deviation of 0.562, see Fig. 7.

Fig. 6. First batch sampling: Welding plane normal deviations

Fig. 7. Second batch sampling: Welding plane normal deviations.

The point-by-point average deviations of the two batches, as can be seen in Fig. 8, show the same general behaviour, but the values of the deviations differ.


Fig. 8. Comparison among two available production batches

The methodology workflow (see Fig. 1) has been applied only to the second batch of five parts. The deviations of the points of the welding plane, converted into orientation and position errors and expressed as the ratio between each value and the maximum, can be found in Table 1. The mean orientation error is 0.516, and the mean position error is 0.893.

Table 1. Position and orientation error for the welding plane at free state.
[mm/mm]            Shell 51  Shell 52  Shell 53  Shell 57  Shell 58  Average
Orientation error  0.625     0.511     0.513     0.504     0.436     0.516
Position error     1.000     0.804     0.883     0.964     0.834     0.893

The orientation and position errors derived from the constrained state can be seen in Table 2. The mean orientation error is 0.368, and the mean position error is 0.708.

Table 2. Position and orientation error for the welding plane at constrained state.
[mm/mm]            Shell 51  Shell 52  Shell 53  Shell 57  Shell 58  Average
Orientation error  0.459     0.360     0.281     0.391     0.350     0.368
Position error     0.829     0.648     0.585     0.783     0.693     0.708

3.1 Comparison

By comparing the average error for the free state and the simulated constrained state, a general reduction of both errors can be seen. For the five parts tested, a 28.6% reduction for the orientation error and a 20.8% reduction for the position error can be seen, Table 3.


Table 3. Error reduction due to the simulated assembly process.
[%]                Shell 51  Shell 52  Shell 53  Shell 57  Shell 58  Average
Orientation error  26.5      29.57     45.1      22.5      19.8      28.6
Position error     17.1      19.4      33.8      18.7      16.9      20.8

As a preliminary consideration, the manufacturing tolerances applied to the welding plane may be increased by 28.6% for the orientation and by 20.8% for the position with respect to the functional values, which are assigned considering the as-assembled state, and may be validated through tolerance stack-up analysis. It is noteworthy to highlight that these are preliminary results obtained from a small test batch produced during product development; once the first production batch is available, the procedure may be run with a larger sample, thus resulting in a higher statistical significance.

4 Conclusions

In this paper, a case study on the correlation between functional and manufacturing specifications has been presented, considering a large injection moulded part. The proposed procedure uses a FEM approach to simulate the constrained (as assembled) state starting from the free-state acquisition as input. The comparison between the two states is based on the datum system and geometric tolerance definitions in the ISO-GPS standards. The average location and orientation error reduction for a planar feature is determined and used as a preliminary scaling factor for the manufacturing tolerance limits applied to the feature. In the early stages of product development, the use of injection moulding CAE software may also be added to the loop to simulate actual geometries due to different process parameters, instead of performing actual measurements, aiming to determine a possible range of process settings. The industrial experience shows that, in the field of large and massive parts, injection moulding simulation still may not consider variables that may have a significant impact on the final, pre-assembly, geometry, such as stocking and transportation. The overall methodology may be further improved. First, a statistical correlation model between free state and constrained state may be developed to replace the scaling factor. A sensitivity matrix, considering only the inputs and outputs that are needed for the correlation, may be created, allowing the simulation process to be sped up: once the matrix is defined, no more FEM simulations are required. The use of a sensitivity matrix may be integrated into the Influence Coefficient Method to evaluate the spring back of the assembly. Even though each step should be experimentally validated, a simulation approach may allow testing free-state initial deformations that are not available in the production batch, exploring out-of-the-envelope configurations.


References
1. ISO International Organization for Standardization: ISO 10579:2013 - Geometrical Product Specifications (GPS). Dimensioning and Tolerancing. Non-Rigid Parts, Geneva (2013)
2. ISO International Organization for Standardization: ISO/TS 21619:2018 - Geometrical Product Specifications (GPS) - Types of Documents with GPS, Geneva (2018)
3. Anselmetti, B., Louati, H.: Generation of manufacturing tolerancing with ISO standards. Int. J. Mach. Tools Manuf. 45 (2005). https://doi.org/10.1016/j.ijmachtools.2005.01.001
4. Sellem, E., Rivière, A.: Tolerance analysis of deformable assemblies. In: Volume 2: 24th Design Automation Conference. American Society of Mechanical Engineers (1998). https://doi.org/10.1115/DETC98/DAC-5571
5. Charles Liu, S., Jack Hu, S.: An offset finite element model and its applications in predicting sheet metal assembly variation. Int. J. Mach. Tools Manuf. 35, 1545–1557 (1995). https://doi.org/10.1016/0890-6955(94)00103-Q
6. Liu, S.C., Hu, S.J., Woo, T.C.: Tolerance analysis for sheet metal assemblies. J. Mech. Des. 118, 62–67 (1996). https://doi.org/10.1115/1.2826857
7. Liu, S.C., Hu, S.J.: Variation simulation for deformable sheet metal assemblies using finite element methods. J. Manuf. Sci. Eng. 119, 368–374 (1997). https://doi.org/10.1115/1.2831115
8. Atik, H., Chahbouni, M., Amegouz, D., Boutahari, S.: Optimization tolerancing of surface in flexible parts and assembly: influence coefficient method with shape defects. Int. J. Eng. Technol. 7, 90 (2018). https://doi.org/10.14419/ijet.v7i1.8470
9. Atik, H., Chahbouni, M., Amagouz, D., Boutahari, S.: An analysis of springback of compliant assemblies by contact modeling and welding distortion. Int. J. Eng. Technol. 7, 85 (2018). https://doi.org/10.14419/ijet.v7i1.8330
10. Polini, W., Corrado, A.: Methods of influence coefficients to evaluate stress and deviation distribution of flexible assemblies—a review. Int. J. Adv. Manuf. Technol. 107(5–6), 2901–2915 (2020). https://doi.org/10.1007/s00170-020-05210-3
11. Stockinger, A., Lustig, R., Meerkamm, H.: Computer-based and experimental validation of an approach to combine tolerance zones with elastic deformations. In: Proceedings of the 16th International Conference on Engineering Design (ICED 2007). DS 42 (2007)
12. Radvar-Esfahlan, H., Tahan, S.-A.: Nonrigid geometric metrology using generalized numerical inspection fixtures. Precis. Eng. 36, 1–9 (2012). https://doi.org/10.1016/j.precisioneng.2011.07.002
13. Sabri, V., Tahan, S.A., Pham, X.T., Moreau, D., Galibois, S.: Fixtureless profile inspection of non-rigid parts using the numerical inspection fixture with improved definition of displacement boundary conditions. Int. J. Adv. Manuf. Technol. 82(5–8), 1343–1352 (2015). https://doi.org/10.1007/s00170-015-7425-3
14. Sabri, V., Sattarpanah, S., Tahan, S.A., Cuillière, J.C., François, V., Pham, X.T.: A robust and automated FE-based method for fixtureless dimensional metrology of non-rigid parts using an improved numerical inspection fixture. Int. J. Adv. Manuf. Technol. 92(5–8), 2411–2423 (2017). https://doi.org/10.1007/s00170-017-0216-2
15. Raynaud, S., Wolff, V., Dinh, T.T., Pareja, O.: Modélisation et évaluation de l'incertitude de mesure lors de l'utilisation de MMT avec des pièces déformables. In: Larquier, B. (ed.) 17th International Congress of Metrology, p. 13006. EDP Sciences, Les Ulis (2015). https://doi.org/10.1051/metrology/201513006
16. ISO International Organization for Standardization: ISO 1101:2017 - Geometrical Product Specifications (GPS). Geometrical Tolerancing. Tolerances of Form, Orientation, Location and Run-out, Geneva (2017)

InvNet: Transfer Learning on Point Cloud for Invariant Surface Classification

Yifan Qie(B) and Nabil Anwer

Université Paris-Saclay, ENS Paris-Saclay, LURPA, 91190 Gif-sur-Yvette, France
[email protected]

Abstract. Geometrical operations are defined in ISO standards on geometrical product specifications and verification (GPS) for obtaining ideal and non-ideal features such as functional surfaces on mechanical parts. These surfaces, which can be used for further geometry processing, should be categorized into one of the seven invariance classes according to their kinematic invariance pairs. A new data-driven method for recognizing invariant surfaces of CAD models is introduced in this paper based on deep learning structure and transfer learning. Due to the limitation of the availability of a sufficient amount of data, transfer learning with AlexNet is implemented for invariance identification of point clouds extracted from surfaces. In addition, the paper further illustrates several strategies to generate a variety of 3D point clouds. The pre-trained network, named InvNet, provides an effective and non-parametric way to identify surface invariance. The experiment results show that InvNet is efficient and robust for recognizing primitive shapes from CAD models for ISO GPS. Keywords: Point cloud · Transfer learning · Invariant class · ISO GPS

1 Introduction

Geometrical operations are defined in ISO standards on geometrical product specifications and verification (GPS) [5] for obtaining ideal and non-ideal features to represent surfaces on mechanical parts. These features, which can be used for further geometry processing, should be categorized by their geometric properties. In the context of ISO GPS, surface portions are defined by their kinematic invariance pairs and are classified into planar, cylindrical, helical, spherical, revolute, prismatic, and complex surfaces, as shown in Table 1. The invariant surface classification is a critical criterion for ISO geometry operations. For example, the partitioning operation is used from both the specification and verification perspectives to decompose the mechanical part into meaningful surfaces. These surfaces are obtained by firstly identifying the invariance class for each surface on the mechanical part.
Supported by China Scholarship Council (first author), under Grant No. 201806020187.


Moreover, for the association operation that is used to fit ideal features to non-ideal features or point clouds, surface invariances are used in the operation to define the ideal surface types to be used for the fitting process.

Table 1. ISO GPS invariance classes and their relationship to kinematic degrees of freedom [5]
Number  ISO GPS invariance class  Kinematic degrees of freedom
1       Planar                    2 Translations and 1 Rotation
2       Cylindrical               1 Translation and 1 Rotation
3       Helical                   1 Translation and 1 Rotation, linked by pitch
4       Spherical                 3 Rotations
5       Revolute                  1 Rotation
6       Prismatic                 1 Translation
7       Complex                   None

The focus of the paper is the analysis of single surfaces represented by discrete points, since the point cloud can be obtained by tessellation of either the nominal model or the Skin Model [10] from the specification perspective, or by measuring manufactured parts from the verification perspective. The rest of the paper is organized as follows. A state-of-the-art review of related work is introduced in Sect. 2. Data processing and a transfer learning process are introduced in Sects. 3 and 4, respectively. The experiment and discussion are illustrated in Sect. 5. Finally, the conclusions and outlook are highlighted.

2 Related Work

The goal of partitioning is to cluster the model (mechanical part) into meaningful segments by either surface-based or part-based methods [11]. Várady et al. provide a detailed survey of the reconstruction of mechanical parts and underline that surface type recognition and surface fitting are the specific issues for reconstructing a B-rep model [12]. Therefore, surface classification is the key criterion of the partitioning operation for reverse engineering. In the context of ISO GPS, surface classification is based on the kinematic invariance, i.e., the kinematic degrees of freedom as explained in Table 1. Many contributions have been proposed to address invariant surface identification of mechanical parts. Gelfand et al. put forward a partitioning method based on slippage analysis that is able to identify surface patches as kinematic invariance types [3]. Cai et al. proposed a three-step hybrid process for partitioning a mesh or point cloud into surface portions and identified each of them as one of the seven invariance classes of surfaces using slippage analysis [1]. Qie et al. put forward a two-step partitioning method that addresses natural boundaries


among surfaces and surface classification by statistical modeling [9]. The methods mentioned above are able to address invariance class identification with either a pre-defined parameter or high computational complexity. However, the performances are limited when considering large point clouds. Due to its ability to analyze and reveal patterns in existing data, deep learning has been increasingly investigated in advanced manufacturing. As a well-known deep learning algorithm in computer vision to tackle problems such as image classification, face recognition, etc., the Convolutional Neural Network (CNN) has been widely investigated due to its effectiveness in capturing the features of the input data. Considering the limitation in practice when collecting real data with enough variety, transfer learning [7] based methods provide a promising solution for training neural networks in mechanical engineering. CNN-based model transfer learning is studied for classification or identification by means of pre-training a CNN with a large amount of non-manufacturing-specific data and then fine-tuning the fully connected layers with correctly labeled data from specific domain problems, such as defect detection in semiconductor device fabrication [4] or fault diagnosis in manufacturing [13]. It is critical to identify the surface portions as one of the seven invariance classes in the partitioning operation of ISO GPS for further geometry processing. Existing transfer learning-based methods enable the identification of the object type with a limited dataset. Therefore, a pre-trained network, InvNet, is proposed in this paper for invariant surface identification for point clouds.

3 Point Cloud Processing for Invariant Surface Classification

In the context of ISO GPS, the extraction operation can be conducted from both the specification and verification perspectives. From the perspective of specification, point clouds are obtained by tessellation of either the nominal model or the Skin Model. From the verification perspective, a cloud of points is obtained by measuring manufactured parts. Partitioning can be utilized to decompose the parts into surface portions [1, 9]. Therefore, the focus of the paper is centered on the analysis of single surfaces represented by discrete points. Point clouds have many advantages for representing mechanical parts and are widely used for a variety of purposes. However, the sparsity, randomness and non-structure of point clouds have to be addressed in the data pre-processing stage. Three types of strategies, namely view-based, voxel-based and point-based, are the most cited in point cloud learning. In practice, it is often difficult to collect data with enough variety for training a robust network since, in most cases, planar and cylindrical surfaces appear more often than other invariant surfaces. The limited number of samples becomes a bottleneck for machine learning-based methods. Regarding our target, i.e., identifying surface invariance from the point cloud, as well as the size of our dataset, it is not possible to train point-based deep networks such as PointNet [8] since they require millions of data samples for training. Meanwhile, bias from the shapes of the surface rather than the surface invariance


will have a crucial impact on the learning performance. On the other hand, considering the point cloud property of interest (surface invariance), voxel-based methods are limited by resolution in the data pre-processing and might be influenced by noise on the surfaces. Therefore, a view-based method that is capable of processing datasets with multiple types of invariant surfaces is implemented in this paper. The generation of point clouds of different invariant surfaces is introduced first and then the dataset construction for training neural networks is put forward in this section.

3.1 Point Cloud Generation for Different Invariant Surfaces

In order to address invariance class identification for point clouds, a training dataset that consists of diverse types of point clouds is created and labeled by surface invariance. The dataset is constructed considering the sparsity, randomness and non-structure of 3D point clouds. For each type of invariance class, point clouds are first generated with different parameters (e.g., number of points, sampling strategy, the diameter of a sphere, the height of a cylinder, the angle of a cone, etc.). Gaussian noise is added as a set of random variables corresponding to the nominal points. Then the surfaces are optionally manually post-processed to simulate missing data. In the end, the surfaces are translated and rotated by random rigid motions. Some of the generated point clouds are shown in Fig. 1.
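A minimal sketch of this generation step for one invariance class (cylindrical) is given below; the parameter ranges, noise level and random rigid motion are assumptions made for illustration, not the paper's exact settings.

```python
import numpy as np

def random_rotation(rng):
    """Random 3x3 rotation matrix (QR decomposition of a Gaussian matrix)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # make the orthogonal factor uniformly distributed
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0               # ensure a proper rotation (det = +1)
    return q

def cylinder_cloud(rng, n_points=2000, radius=10.0, height=40.0, noise=0.05):
    """Noisy cylindrical point cloud, randomly rotated and translated."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_points)
    z = rng.uniform(-height / 2.0, height / 2.0, n_points)
    pts = np.column_stack([radius * np.cos(theta), radius * np.sin(theta), z])
    pts += rng.normal(scale=noise, size=pts.shape)        # Gaussian noise on nominal points
    return pts @ random_rotation(rng).T + rng.uniform(-50.0, 50.0, 3)  # random rigid motion

rng = np.random.default_rng(42)
cloud = cylinder_cloud(rng)   # would be labeled "cylindrical" in the training dataset
```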

Fig. 1. The point clouds generated for training InvNet

3.2 Training Set Construction: A PCA-Based Method

The sparsity, randomness and non-structure of point clouds are considered in the point cloud generation process. This approach employs a view-based supervised learning technique to address these data properties in the process of training set construction. Three viewing images are collected from the three principal directions for feature extraction and recognition. The view projection process is explained in Fig. 2. For each point cloud, the orientation of the surface can be specified by principal component analysis (PCA). Therefore, at the training dataset construction stage, each point cloud is rotated so that each of the three principal directions lies along the x-axis, y-axis and z-axis, respectively. Then the point cloud is moved by translating the center of gravity to the origin. All the points in the point cloud are normalized into a unit sphere.

Fig. 2. The process of training set construction

Once the point cloud is rotated, translated and normalized, three gray viewing images and their corresponding labels are then collected from three viewing directions of the point cloud, as shown in Fig. 2(b). The viewing images with 28 × 28 pixels are considered in the order of (1) x-y direction, (2) y-z direction and (3) x-z direction. The three viewing images are superimposed to become a new RGB image, with the three gray images corresponding to the three channels of the RGB image, as illustrated in Fig. 2(c). In practice, it is noticed that the order


of the three viewing images is not critical to the RGB image generation. However, it is important to confirm that each viewing direction always corresponds to the same one of the R-G-B channels in the training process. A training dataset is constructed by the illustrated construction process. Some of the data that represent different point clouds can be seen in Fig. 3. The details of the training dataset are listed in Table 2. Since helical surfaces are not commonly used in engineering, they are not considered in our study. Meanwhile, considering that complex surfaces cannot be defined by parameters, the complex type is not discussed in InvNet.
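The following Python sketch summarises this construction step (PCA alignment, normalisation into a unit sphere, and three 28 × 28 viewing images stacked as the R, G and B channels); the pixel binning and occupancy encoding are assumptions, since the paper does not detail them.

```python
import numpy as np

def pca_align(points):
    """Centre the cloud, align principal directions with x, y, z, scale to a unit sphere."""
    centred = points - points.mean(axis=0)                 # centre of gravity to the origin
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    aligned = centred @ vt.T                               # principal directions -> axes
    return aligned / np.linalg.norm(aligned, axis=1).max() # normalize into a unit sphere

def view_image(points_2d, size=28):
    """28x28 occupancy image of a 2D projection (assumed encoding)."""
    img = np.zeros((size, size), dtype=np.float32)
    idx = np.clip(((points_2d + 1.0) / 2.0 * (size - 1)).round().astype(int), 0, size - 1)
    img[idx[:, 1], idx[:, 0]] = 1.0                        # mark occupied pixels
    return img

def rgb_views(points):
    p = pca_align(points)
    views = [p[:, [0, 1]], p[:, [1, 2]], p[:, [0, 2]]]     # x-y, y-z, x-z projections
    return np.stack([view_image(v) for v in views], axis=-1)   # 28 x 28 x 3 RGB image
```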

Fig. 3. Some of the data used in the training process

Table 2. Training set information
Number  Invariance class  Data number
1       Revolute          487
2       Cylindrical       525
3       Planar            414
4       Prismatic         269
5       Spherical         347
        Total             2042

4 Transfer Learning on the Training Set

Regarding the limited number of cases in the constructed training dataset, a transfer learning-based method is proposed in this section to address invariant surface identification. It is reasonable to implement transfer learning due to the availability of pre-trained neural networks and the fact that the first few layers of these well-trained neural networks address basic geometry, such as vertical boundaries in the images, while the last layers can be fine-tuned with our own dataset. The proposed method makes it possible to leverage the image classification capability of existing pre-trained neural networks for invariant surface recognition. As a large network structure with 60 million parameters and 650,000 neurons, AlexNet [6] shows superior performance to previous methods on image classification problems. A summary of the architecture of AlexNet is given in Table 3. Regarding the time-consuming training process and the limited amount of training data, transfer learning is employed and the updated network is named InvNet, since it is designed for invariant surface classification. The structures of AlexNet and InvNet are compared in Fig. 4. The last three sublayers of the AlexNet output layer are replaced by newly designed layers, as shown in InvNet. For invariant surface identification using InvNet, all the layers, except for these last three layers, are extracted from AlexNet. The output layers of the pre-trained AlexNet are configured for 1000 classes for 1000 types of targets. The updated last three layers of InvNet consist of a fully connected layer, a soft-max layer, and a classification output layer. The fully connected layer is set to have the same size as the number of invariance classes in ISO GPS (here set to 5 instead of 7 in practice).

Table 3. The architecture of AlexNet

Layer   Type              Feature map  Size            Kernel size  Stride  Activation
Input   Image             1            227 × 227 × 3   -            -       -
1       Convolution       96           55 × 55 × 96    11 × 11      4       ReLU
        Max pooling       96           27 × 27 × 96    3 × 3        2       ReLU
2       Convolution       256          27 × 27 × 256   5 × 5        1       ReLU
        Max pooling       256          13 × 13 × 256   3 × 3        2       ReLU
3       Convolution       384          13 × 13 × 384   3 × 3        1       ReLU
4       Convolution       384          13 × 13 × 384   3 × 3        1       ReLU
5       Convolution       256          13 × 13 × 256   3 × 3        1       ReLU
        Max pooling       256          6 × 6 × 256     3 × 3        2       ReLU
6       Fully connected   -            9216            -            -       ReLU
7       Fully connected   -            4096            -            -       ReLU
8       Fully connected   -            4096            -            -       ReLU
Output  Fully connected   -            1000            -            -       Softmax


Fig. 4. Transfer learning from AlexNet

The parameters in AlexNet are pre-trained on millions of images (ImageNet) and marked in green in Fig. 4, while the preserved parameters are also marked in green in InvNet. The transferred network in InvNet is highlighted in orange since the parameters in these layers are going to be trained with the training dataset constructed in this paper. InvNet is trained by Stochastic Gradient Descent with Momentum (SGDM). The minibatch size is set to 10 and the maximum number of epochs is set to 6. The momentum is 0.9 and the initial learning rate is 10^-4.
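The paper does not state the deep learning framework used; as a hedged illustration of the fine-tuning step, the sketch below reproduces the described configuration (last classifier layer replaced by a 5-class layer, SGDM with momentum 0.9, learning rate 10^-4, minibatch size 10, 6 epochs) in PyTorch. The dataset folder name and the resizing of the 28 × 28 view images to the AlexNet input size are assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((227, 227)),      # AlexNet input size; the views are 28x28 RGB images
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("invnet_training_set", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=10, shuffle=True)   # minibatch size 10

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)  # pre-trained on ImageNet
model.classifier[6] = nn.Linear(4096, 5)        # 5 invariance classes instead of 1000

criterion = nn.CrossEntropyLoss()               # soft-max + classification output
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)  # SGDM

for epoch in range(6):                          # max epochs = 6
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```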

5 Experiment and Discussion

5.1 Performance Evaluation

All tests are performed on a mobile workstation equipped with a 2.40 GHz Intel Core i5 CPU, 8 GB of RAM, and the Windows 10 operating system. The training progress is shown in Fig. 5. The training accuracy and training loss converge at the end of the training process. In order to illustrate and quantitatively evaluate the performance of our method, the confusion matrices for both the training and testing sets are provided, as shown in Fig. 6. From the matrices, it can be seen that the overall performance of InvNet is acceptable for practical usage. However, problems may arise when the input point clouds represent cylindrical or prismatic shapes. Further operations such as curvature estimation could be considered for these two invariance types to improve the performance of our method.
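For completeness, a small sketch (an assumption, using scikit-learn rather than any tool named in the paper) of how confusion matrices like those in Fig. 6 can be computed from ground-truth and predicted invariance classes; the label lists below are made up.

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

classes = ["revolute", "cylindrical", "planar", "prismatic", "spherical"]
# y_true / y_pred are hypothetical lists of ground-truth and predicted class indices
y_true = [0, 1, 2, 3, 4, 1, 3]
y_pred = [0, 1, 2, 1, 4, 1, 3]

cm = confusion_matrix(y_true, y_pred, labels=range(len(classes)))
ConfusionMatrixDisplay(cm, display_labels=classes).plot()
plt.show()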


Fig. 5. Training process for InvNet

5.2 Practical Application of InvNet

The pre-trained InvNet is used for invariant surface classification. Based on the identified invariance type, a fitting step can follow in order to obtain ideal features from the point cloud. Figure 7 illustrates the application of InvNet for surface reconstruction. When the invariance type of the point cloud is known to be planar, cylindrical, or spherical, parameters such as the axis of the cylinder and the radius of the sphere can be obtained by methods recommended in ISO GPS, such as least-squares fitting. InvNet makes it possible to obtain geometric information from the point cloud within a second, while other methods require extra effort, such as computing local [3] or global [2] properties, for surface identification and reconstruction.
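As one concrete instance of the fitting step mentioned above, here is a hedged sketch of an algebraic least-squares sphere fit; the specific formulation is an illustrative choice rather than the exact association used by the authors, and analogous fits exist for planes and cylinders.

import numpy as np

def fit_sphere(points: np.ndarray):
    """Least-squares sphere through an (N, 3) point cloud: returns centre and radius."""
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)   # sol = [cx, cy, cz, d]
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius

# Example: noisy points on a sphere of radius 2 centred at (1, 0, -1)
p = np.random.randn(2000, 3)
p = 2.0 * p / np.linalg.norm(p, axis=1, keepdims=True) + np.array([1.0, 0.0, -1.0])
centre, radius = fit_sphere(p + 0.01 * np.random.randn(*p.shape))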

5.3 Discussion

In the context of ISO GPS, geometric operations such as association, filtration, construction, and reconstruction are implemented based on a prior surface type. Therefore, the partitioning operation is crucial for automating, in particular, the verification process. When a single surface represented by discrete points is obtained, InvNet is able to address surface invariance identification in an efficient and robust way. When a large dataset containing point clouds from simulations and measurements becomes available, InvNet can be retrained to achieve better performance.


Fig. 6. Confusion matrices for both training set (left) and testing set (right).

Fig. 7. The application of InvNet for surface reconstruction.

6 Conclusion and Outlook

In this paper, we proposed a transfer learning network called InvNet to address invariant surface identification and classification. Point clouds obtained from single surfaces can be identified by their invariance classes, which can then be used in further geometry processing operations such as reconstruction. Experimental results showed that InvNet achieves high accuracy for this task. Meanwhile, transfer learning greatly reduced the training time by building on an existing network architecture. The proposed method only identifies 5 of the 7 invariance classes defined in ISO GPS due to the limited data. However, the experiments show a promising solution for future steps based on point clouds obtained from measurement. In the future, more data with a variety of distributions should be collected to improve the performance of InvNet.


References

1. Cai, N., Anwer, N., Scott, P.J., Qiao, L., Jiang, X.: A new partitioning process for geometrical product specifications and verification. Precision Eng. 62, 282–295 (2020)
2. Chiabert, P., Costa, M.: Statistical modelling of nominal and measured mechanical surfaces. J. Comput. Inf. Sci. Eng. 3(1), 87–94 (2003)
3. Gelfand, N., Guibas, L.J.: Shape segmentation using local slippage analysis. In: Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pp. 214–223 (2004)
4. Imoto, K., Nakai, T., Ike, T., Haruki, K., Sato, Y.: A CNN-based transfer learning method for defect classification in semiconductor manufacturing. In: 2018 International Symposium on Semiconductor Manufacturing (ISSM), pp. 1–3. IEEE (2018)
5. ISO: Geometrical product specifications (GPS) - general concepts - part 1: model for geometrical specifications and verification. ISO 17450-1:2011 (2011)
6. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
7. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2009)
8. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660 (2017)
9. Qie, Y., Qiao, L., Anwer, N.: Enhanced invariance class partitioning using discrete curvatures and conformal geometry. Comput. Aided Des. 133, 102985 (2021)
10. Schleich, B., Anwer, N., Mathieu, L., Wartzack, S.: Skin model shapes: a new paradigm shift for geometric variations modelling in mechanical engineering. Comput. Aided Des. 50, 1–15 (2014)
11. Shamir, A.: A survey on mesh segmentation techniques. In: Computer Graphics Forum, vol. 27, pp. 1539–1556. Wiley Online Library (2008)
12. Varady, T., Martin, R.R., Cox, J.: Reverse engineering of geometric models - an introduction. Comput. Aided Des. 29(4), 255–268 (1997)
13. Wang, P., Gao, R.X.: Transfer learning for enhanced machine fault diagnosis in manufacturing. CIRP Ann. 69(1), 413–416 (2020)

X-Reality for Interactive Design

Apprentices' Opportunities and Criticalities in Adopting Affordable User Interface Development Tools

Stefano Filippi(B) and Giovanni Vanino

DPIA Dept., University of Udine, 33100 Udine, Italy
[email protected]

Abstract. Day by day, more and more people get in touch with user interface (UI) development, since user experience (UX) and, as a direct consequence, user-centered design (UCD) gain importance in establishing the success of products on the market or, simply, because users feel the need to interact with products more easily and directly. In recent years, affordable UI development tools have appeared on the market, freeing apprentices from the need for specific skills and knowledge and boosting interactive activities by allowing different competencies to work in synergy. This research exploits an experience where university students - the apprentices - use several affordable tools to develop the UI of a refrigerator. Pointing out the opportunities and criticalities encountered during the experience allows highlighting the variables to consider in selecting the most suitable tools already on the market as well as in developing new ones, all of this aiming at a sort of "discount UI development".

Keywords: User experience design · User experience evaluation · Affordable user interface development tools · Discount user interface development

1 Introduction Day by day, more and more people get in touch with user interface (UI) development matters [1], since user experience (UX) [2] and, as a direct consequence, user-centered design (UCD) [3] gain importance in establishing the success of products on the market or, simply, because users feel the need to interact with products easier and more directly. Up to some years ago, both solid theoretical background and deep skill regarding software packages were the only way to approach the development of UIs. Today, the market offers affordable tools allowing UX apprentices to develop UIs by avoiding those strict requirements. This happens because tools are getting somehow more “usable” and make the required knowledge about UX principles available in an “automatic” way. This research exploits the experience where university students - the apprentices, without any knowledge about UX matters except for a 40-h course dealing with product interaction and innovation, use affordable tools to develop UIs of a refrigerator. At the end of the course, three teams of five students each perform the definition of the interactions © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1293–1304, 2023. https://doi.org/10.1007/978-3-031-15928-2_113


[4], their implementation into UIs [5] and the user testing of the results [6]. The definition of the interactions comes thanks to the InVision software package [7], while the UI implementations exploit the BrioVR software package [8]. BrioVR capabilities regarding Virtual Reality (VR) [9], together with its easy connectivity with the HTC Vive Pro equipment [10], allow performing the user testing by making the participants interact with the refrigerator in a realistic VR scene. Pointing out the opportunities and criticalities encountered during the experience - seen from the students' points of view and described using their language - allows highlighting the variables to consider in selecting the most suitable tools already on the market as well as in developing new ones. All of this aims at a sort of "discount UI development" (the name comes from "discount usability engineering", the cost-effective method of usability evaluation introduced by J. Nielsen in the nineties [11]), a design paradigm that naturally pushes towards interactive activities, since affordable tools allow different competencies to work together by eliminating technological barriers or domain-specific knowledge gaps. The paper develops as follows. The material and methods section briefly describes what has been used during the students' experience, from the definition of the interactions to the user testing. The activities section reports the whole approach to the research. The results and discussion section derives the expected variables starting from the opportunities and criticalities. The conclusions and perspectives close the paper.

2 Material and Methods

2.1 InVision

The definition of the interactions, as mentioned previously, comes thanks to the software package InVision [7]. Companies use this platform mainly to bring design-driven innovation to the development of user-friendly web interfaces. Due to an accelerating transformation towards digitalization, companies need to develop intuitive and usable platforms to promote their digital products, so that users are able to navigate easily through the different stages of an interface. InVision provides the instruments to generate user interfaces for both independent designers and companies, instruments that power a repeatable and streamlined design workflow. The interaction definition process starts with setting the dimensions of the display on which the interface will be cast. Dimensions change - and the UI will change dramatically as well - if the UI is meant to work on a laptop, smartphone, or smart TV. The process continues with the creation of a layout (page) for each stage of the UI. This creative procedure also defines the links that, when selected by the user, lead to the next stage. This step is particularly important because it influences the usability of the UI, since bad links could impact the comprehension of the functionalities of the product the interaction is designed for. A tangible example of a good interaction definition is the InVision homepage itself, which displays the functionalities in a way that allows users to understand the aim of each link easily. The InVision-based design process results in a smooth and not very time-consuming experience. When the final draft of the interaction is completed, InVision allows simulating the interaction to get an idea of how the final result would work once the UI is implemented in a website


or software. However, this step is not fundamental here since InVision has been adopted only for the generation of the backbone of the UIs.

2.2 BrioVR

The implementation of the interactions into the UIs comes thanks to the BrioVR software package. BrioVR is a cloud-based, web-based platform that enables content creators, designers and companies to produce interactive experiences for audiences [8]. This instrument does not require any knowledge in coding or programming for the implementation of UIs within a tri-dimensional environment. Consequently, BrioVR has been chosen for its completeness and understandability; it allows people with almost zero insight into user-interface development to design an interactive product. The tools made available by BrioVR range from light-controlling systems to interaction commands that allow the generation of enhanced photorealistic experiences. The functionalities for UI development are called behaviors; they define the actions that a specific object, within the 3D environment, will perform when a user interacts with it. 3D virtual rooms can be defined and populated using 3D objects imported from external sources. BrioVR offers the opportunity to import different file types such as images (jpeg, png, etc.), short video clips and 3D objects (step, obj, etc.). Moreover, designing features allow creators to enhance the realism of the final experience by adjusting parameters such as lighting position, brightness and saturation. When a "base experience" - a scene viewable in VR without any kind of interaction between users and 3D elements - has been created, the behaviors enable the definition of the UI. Commands such as "color change", "image change" or "move" are behaviors assignable to the objects in the scene. An example is given by the behavior "image change": when a user selects a 2D billboard (an object belonging to the default library of BrioVR) to which the "image change" behavior is assigned, the image changes into another one previously set. This feature allows reproducing all the links, defined thanks to InVision, among the stages of the UIs.

2.3 HTC Vive Pro

At the end of the implementation, users have the opportunity to test the UIs in a virtual reality environment. This is made possible by the HTC Vive Pro equipment, consisting of a VR headset, two handlers - the controllers - and two proximity sensors [10]. The VR headset provides virtual reality to wearers, who can move within the virtual room designed thanks to BrioVR. The VR headset is a high-end product meant for gaming and equipped with high-tech features such as a stereoscopic head-mounted display, a head motion tracking sensor and built-in headphones [10]. However, the mere viewing of the scene remains a side aspect compared to the assessment of the usability of the UI within a tri-dimensional context. To this end, the controllers are the main characters since, metaphorically speaking, they are the extension of the user's hands in the virtual environment. In other words, the actual interaction between human and computer is made possible by a precise pointing device. These controllers are equipped with a trackpad and some buttons allowing the user to select links during an interactive virtual reality experience. The procedure to connect each mentioned device to the designed UI is almost straightforward. An intermediate software package called SteamVR is used to control


the functionalities of each device and to access a chosen web browser [12]. The latter stage enables the users to enter the previously created VR environment by logging into the web link made available on the BrioVR.com website [8].

3 Activities

3.1 Experience Planning

Figure 1 depicts the experience overview, from the definition of the teams up to the generation of the opportunities/criticalities list. An IDEF0-like format is used; it requires making clear the Input, Output, Controls and Mechanism of each Activity [13]. The activities are represented using the boxes labelled A1 to A5. The instruments are listed below them. Two university professors, the supervisors, act as controls and are represented by the arrows entering the activity boxes from the top. The input consists of the students who design the UIs, the problem to solve and the students who perform the user testing. Finally, the output, as expected, is the opportunities/criticalities list. Each activity reports the maximum duration expected/allowed. All the activities are described in the following, together with some examples taken from the different teams' design efforts.

Fig. 1. Experience overview.

3.2 Team Definition

The experience starts with the definition of the teams. This occurs thanks to PERSEL, the PERsonality-based user SELection tool, which characterizes every individual by his/her personality and inclination for UI matters. Regarding personality characterization, PERSEL exploits the "big five", one of the best-known models for highlighting individual personality traits [14]. The five personality traits (PTs) are: extraversion (PT1), agreeableness (PT2), conscientiousness (PT3), neuroticism (PT4) and openness to experience (PT5). PERSEL uses empiric relationships between PTs and UX characteristics


like usability, aesthetics, symbolic significance, emotions, consequences of use, etc., to suggest the best team compositions aiming at satisfying precise redesign needs like improving product functionalities, improving product learnability, etc. [15]. Here, due to the students' scarce UI knowledge and experience, all UX characteristics and needs are considered equally important; therefore, the emphasis is on personalities only and PERSEL is used just to point out the personality traits of each student. The outcome of PERSEL allows defining three teams of five students each. This occurs thanks to the anticlustering approach [16], a method for composing groups of people showing personality traits as similar as possible. Table 1 shows the team compositions, reporting the PT values for each student.

Table 1. Compositions of the three teams based on the personality traits.

Team 1      | Student 4 | Student 6 | Student 10 | Student 13 | Student 14 | Average
PT1 (0/100) | 40.63     | 84.38     | 71.88      | 62.50      | 40.63      | 60.00
PT2 (0/100) | 69.44     | 72.22     | 63.89      | 66.67      | 66.67      | 67.78
PT3 (0/100) | 75.00     | 69.44     | 69.44      | 52.78      | 50.00      | 63.33
PT4 (0/100) | 50.00     | 31.25     | 31.25      | 59.38      | 56.25      | 45.63
PT5 (0/100) | 70.00     | 62.50     | 72.50      | 40.00      | 80.00      | 65.00
Overall     |           |           |            |            |            | 60.35

Team 2      | Student 1 | Student 3 | Student 7 | Student 8 | Student 11 | Average
PT1 (0/100) | 81.25     | 68.75     | 37.50     | 37.50     | 71.88      | 59.38
PT2 (0/100) | 86.11     | 58.33     | 36.11     | 55.56     | 77.78      | 62.78
PT3 (0/100) | 86.11     | 77.78     | 69.44     | 47.22     | 66.67      | 69.44
PT4 (0/100) | 6.25      | 56.25     | 37.50     | 62.50     | 56.25      | 43.75
PT5 (0/100) | 75.00     | 80.00     | 57.50     | 60.00     | 55.00      | 65.50
Overall     |           |           |           |           |            | 60.17

Team 3      | Student 2 | Student 5 | Student 9 | Student 12 | Student 15 | Average
PT1 (0/100) | 50.00     | 62.50     | 87.50     | 43.75      | 43.75      | 57.50
PT2 (0/100) | 66.67     | 61.11     | 77.78     | 58.33      | 66.67      | 66.11
PT3 (0/100) | 72.22     | 66.67     | 75.00     | 52.78      | 61.11      | 65.56
PT4 (0/100) | 40.63     | 46.88     | 28.13     | 59.38      | 53.13      | 45.63
PT5 (0/100) | 72.50     | 82.50     | 67.50     | 52.50      | 47.50      | 64.50
Overall     |           |           |           |            |            | 59.86
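The sketch below illustrates, under assumptions, the anticlustering idea used to compose these teams: it is a naive random-swap heuristic that tries to make the group mean profiles as similar as possible, not the actual method of Papenberg and Klau [16], and the trait matrix is synthetic.

import numpy as np

def anticluster(scores: np.ndarray, n_groups: int, iters: int = 5000, seed: int = 0):
    """Assign rows of `scores` (people x traits) to groups with similar mean profiles."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_groups), len(scores) // n_groups)
    rng.shuffle(labels)

    def spread(lab):
        means = np.stack([scores[lab == g].mean(axis=0) for g in range(n_groups)])
        return means.var(axis=0).sum()          # how far apart the group means are

    best = spread(labels)
    for _ in range(iters):
        i, j = rng.choice(len(scores), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]
        s = spread(labels)
        if s < best:
            best = s                            # keep swaps that equalise the groups
        else:
            labels[i], labels[j] = labels[j], labels[i]
    return labels

# 15 students x 5 personality traits (values on the 0-100 scale), 3 teams
pt = np.random.uniform(0, 100, size=(15, 5))
teams = anticluster(pt, n_groups=3)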


The anticlustering approach generates groups by working on each personality trait singularly (for example, regarding PT1, the three teams have average values 60.00, 59.38 and 57.50, respectively) as well as on all traits together (the overall average values of the three teams are 60.35, 60.17 and 59.86, respectively). Two professors with thirty years of experience in usability-UI-UX design and evaluation oversee the whole experience; they take the role of experts in the field.

3.3 Interaction Definition

The three teams start defining the interactions with the refrigerator by listing the functions the interface will make available to the users. Then, use case diagrams, personae/scenarios and paper prototypes allow building a clear point of view about the product affordance and feedback [5]. Figure 2 shows a couple of sketches from a paper prototype used during the definition of one interaction.

Fig. 2. Sketches from a paper prototype.

Once the study based on the paper prototype is finished, each team uses the InVision software package to describe the interaction as clearly as possible. Figure 3 shows a screenshot taken during the definition of one interaction. It depicts the whole interaction quantitatively. Figure 4 allows understanding the content/meaning of one of the pages as well as the links to other pages of the UI. It shows the logical flow built around the main page ‘HOME’. Blue arrows represent the links. Each arrow starts from one of the elements of the page and points to the destination page. For example, the element “PRODUCT LIST” is linked to the page listing the current content of the refrigerator. Moreover, several arrows pointing towards the HOME page denote the presence of corresponding interactions that redirect the user to the HOME page.

Apprentices’ Opportunities and Criticalities

1299

Fig. 3. Overview of the whole UI of the refrigerator.

Fig. 4. Emphasis on one page of the UI and on the links to/from other pages.

3.4 UI Implementation

Figure 5 depicts one moment of the implementation of the interactions into the UIs. Here, one team prefers implementing the UI using two displays placed on the two doors of the refrigerator. The images on the right are a couple of screenshots from the smaller display (the one placed on the left door). This implementation occurs thanks to the BrioVR software package. It gives the UI its final appearance; moreover, it implements the UI behavior (affordance and feedback) that will occur during the user-product interaction.


Fig. 5. UI implementation thanks to the BrioVR software package. The whole refrigerator (left) and a couple of pages from the smaller display placed on the left door of the refrigerator (right).

3.5 User Testing

Tests of the UIs of the refrigerator involve nine other students, different from those belonging to the three teams. These tests aim more at verifying the testability of the UI than at evaluating the goodness of the design results. This is because the focus is on evaluating existing affordable UI development tools to build the whole design process, from the UI definition to its testing, rather than on developing quality UIs. Tests are made possible by the HTC Vive Pro equipment. Figure 6 is a screenshot taken during the user testing of a UI. It is a frame of the VR scene the testers interact with through the HMD. There is a kitchen representing the comfortable environment

Fig. 6. VR scene for user testing made available by the HTC Vive Pro equipment.


the testers are in; the refrigerator is on the right. The avatars of the physical controllers the testers use to interact with the UI are also in the scene.

3.6 Opportunities/Criticalities List Generation

While each team proceeds through the three main activities (interaction definition, UI implementation and user testing), the two professors write down notes, asking the teams to adopt a think-aloud approach to the UI development [5]. The activities of the three teams are filmed throughout. At the end, the whole classroom, together with the two professors, defines the list of opportunities and criticalities encountered during the experience. The well-known brainstorming idea generation method is used to get the result. In addition, the recorded videos are analyzed to get the durations of the activities as well as any details that could have been missed in real time. Table 2 contains the durations of the activities performed by the three teams. These durations contribute to highlighting opportunities and criticalities of the tools used for the UI development, since anomalies (shorter/longer times than expected) shown by specific activities suggest analyzing the pros and cons of the tools used during those activities. For example, the three teams performed quite equally, as reported in the last column of the table (17 h 45 m, 17 h 30 m and 17 h 15 m, respectively). All these durations are shorter than expected (18 h); thus, there is certainly something good (opportunities) to highlight somewhere in the UI development process. At the same time, all specific and average values are higher than expected for the interaction definition and user testing activities. Thus, something bad (criticalities) should be investigated in those specific activities.

Table 2. Durations of the activities of the three teams.

4 Results and Discussion

The experience appeared valuable from both the students' and the professors' points of view. Successfully reaching the end of the design process testified to the soundness of the approach and the completeness of the university course. Apart from the quality of the developed UIs, which was not bad at all, the highlighting of opportunities and criticalities of available affordable UI development tools was the main achievement. Regarding the opportunities, students felt quite comfortable in developing short-period projects with the available tools. They developed UIs satisfying the main user


needs by interacting directly with them and modifying them with relatively easy adjustments, without the need for major knowledge in developing VR scenes. Achieving the latter goal with no particular insight into 3D CAD matters represents a turning point for the creation of self-made VR experiences completely customized by the users themselves. Regarding the criticalities, highlighted by focusing the attention on the interaction definition and user testing as suggested by the analysis of the activity durations, the efficiency of the UI generation process did not meet the students' expectations due to severe restrictions imposed by the software packages. Indeed, an outstanding problem was the lack of a clear and smooth procedure to implement 2D UIs in 3D software packages. Although several shortcuts to get the final result were available, the realism of the interactive 3D experience was sometimes missing or left behind on purpose, to give space to the practical aspects of the whole project, i.e., developing a functioning interactive user experience. A clear example of this occurred during the user testing; many people were not able to perceive the reality of the scene due to the absence of a proper selection tool for the interaction with the touch displays of the refrigerator. Another important criticality emerged as the dialogue became complex and articulated, i.e., made of a meaningful number of pages/elements. Limitations concentrated on behavior management due to the high computational power required. Moreover, there was scarce support for possible cross-platform experiences because the software packages did not allow exporting the results for use elsewhere. Finally, as soon as the students went a little deeper in sophisticating the behaviors, a solid background in computer programming languages became required. These issues, with other relatively smaller technicalities, seem to be the heaviest criticalities, mainly referring to technological concerns still present or to required knowledge that should not be taken for granted. The students and the professors, together, reasoned about these criticalities and developed a list of suggestions, expressed in the form of variables (with measurement units), to take care of in selecting UI development tools and/or in developing new ones in the future. Table 3 summarizes the criticalities and the related variables.

Table 3. Criticalities of the current UI design tools and the related variables.

Criticality: Tools are not suitable for creating 2D UIs inside 3D environments
Variables: Number/effectiveness of features to manage working spaces, interactions, widgets, etc., showing different dimensions

Criticality: The number of behaviors is limited due to the high computational power required
Variables: Number of behaviors made available and managed by the software package

Criticality: Interactive VR scenes are available only inside the proprietary software package. Exporting files is almost impossible
Variables: Number/effectiveness of facilities for importing/editing/exporting interactive VR scenes

Criticality: The customization of the commands of the VR devices (e.g., controller buttons) requires specific knowledge on computer languages
Variables: Percentage of features that do not require specific computer language knowledge to enhance the realism in the VR experience


5 Conclusions and Perspectives

Thanks to an experience where university students defined and developed UIs of a refrigerator, this research highlighted opportunities and criticalities of the affordable UI development tools involved. The aim was to verify the suitability of these tools in a "discount UI development" scenario. Some criticalities emerged, and the related variables to take care of for possible upgrades of these tools and for developing new ones appear interesting and quite objective. Concerning research perspectives, these variables need to be checked for feasibility and sustainability. Moreover, the set of tools considered was extremely limited; an exhaustive analysis requires integrating it with other tools currently on the market. Finally, other than the tools, the research should focus on the design process as well; there could be different design paradigms that make those tools - or others - more suitable and effective than they are.

Acknowledgments. The authors would like to thank prof. Ruggero Bellio of the University of Udine for his help in suggesting the best statistical tool to compose the three teams as homogeneously as possible.

References

1. Cox, K.: User-Interface Design, 2nd edn. Prentice Hall, New York (1993)
2. Law, E., Roto, V., Vermeeren, A.P.O.S., Kort, J., Hassenzahl, M.: Towards a shared definition of user experience. In: CHI '08 Extended Abstracts on Human Factors in Computing Systems, Florence, Italy, pp. 2395–2398. ACM (2008). https://doi.org/10.1145/1358628.1358693
3. Chammas, A., Quaresma, M., Mont'Alvão, C.: A closer look on the user centred design. Procedia Manuf. 3, 5397–5404 (2015). https://doi.org/10.1016/j.promfg.2015.07.656
4. Sears, A., Jacko, J.A.: Human-computer interaction (2017)
5. Shneiderman, B., Plaisant, C.: Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th edn. Addison-Wesley, Boston (2010)
6. Alégroth, E., Ardito, L., Coppola, R., Feldt, R.: Special issue on new generations of UI testing. Softw. Test. Verif. Reliab. 31 (2021). https://doi.org/10.1002/stvr.1770
7. InVision software package. https://www.invisionapp.com/. Accessed 17 Jan 2022
8. BrioVR software package. https://experience.brioxr.com/. Accessed 17 Jan 2022
9. Di Gironimo, G., Matrone, G., Tarallo, A., Trotta, M., Lanzotti, A.: A virtual reality approach for usability assessment: case study on a wheelchair-mounted robot manipulator. Eng. Comput. 29, 359–373 (2013). https://doi.org/10.1007/s00366-012-0274-x
10. HTC Vive Pro equipment. https://www.vive.com/eu/product/vive-pro/. Accessed 17 Jan 2022
11. Nielsen, J., Molich, R.: Heuristic evaluation of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems Empowering People - CHI '90, Seattle, Washington, United States, pp. 249–256. ACM Press (1990). https://doi.org/10.1145/97243.97281
12. SteamVR software package. https://store.steampowered.com/app/250820/SteamVR/. Accessed 17 Jan 2022
13. Ross, D.: Structured Analysis (SA): a language for communicating ideas. IEEE Trans. Softw. Eng. 3, 16–34 (1977). https://doi.org/10.1109/TSE.1977.229900


14. Rothmann, S., Coetzer, E.: The big five personality dimensions and job performance. S. Afr. J. Ind. Psychol. 29, 68–74 (2003). https://doi.org/10.4102/sajip.v29i1.88
15. Filippi, S.: PERSEL, a ready-to-use PERsonality-based user SELection tool to maximize user experience redesign effectiveness. MTI 4, 13 (2020). https://doi.org/10.3390/mti4020013
16. Papenberg, M., Klau, G.W.: Using anticlustering to partition data sets into equivalent parts. Psychol. Methods 26, 161–174 (2021). https://doi.org/10.1037/met0000301

Role-Based Sailing Augmented Reality Interface: Three Proposals for Head-Mounted Display

Francesco Laera(B), Alessandro Evangelista, Vito Manghisi, Massimo Foglia, and Michele Fiorentino

Polytechnic University of Bari, Bari, Italy
[email protected]

Abstract. This work is part of a larger project aiming to develop a comprehensive Augmented Reality (AR) interface for recreational and professional nautical sailing navigation. Due to the complexity of the marine environment and the dynamism of the crew members on board, we propose to diversify the display of navigation information according to the role, position and activity on board. The interfaces are designed to be viewed by one or more crew members, solving the problem of everyone viewing the same data and giving each one the information they need based on their role or position. The three novel proposals for sail-specific AR graphic interfaces are designed to be displayed on a Head-Mounted Display (HMD), and each of them differs in the information stabilization adopted: Screen-Stabilized, Body-Stabilized and Boat-Stabilized. Each interface is differentiated according to the type of navigation and the user who must use that information. These approaches have been prototyped and evaluated by a panel of experts in the field of sailing navigation and were able to showcase their potential for future evaluations in different scenarios.

Keywords: Augmented Reality · Sailing · Nautical · Spatiality · User-based · Interface

1 Introduction

Sailing is a dynamic activity and requires a lot of mobility on board by the crew. Unlike the automotive and aeronautical sectors, the on-board position is not fixed in a single area. The user has the entire boat available to move on and to perform the manoeuvres of adjusting the sails, tracking the route, and checking the navigation data and the steering of the boat [1]. It should also be considered that sailing employs several members on board, each with their specific role, and that the on-board position depends on the role of each crew member. The roles on board have various levels of dynamism and various dedicated areas: the helmsman has almost always the same position and will only change side in the various points of sail; the sail trimmer, who controls the sails, has greater mobility, having the whole cockpit available; the bowman is the most dynamic as he controls the entire area

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1305–1315, 2023. https://doi.org/10.1007/978-3-031-15928-2_114


of the bow, walking on it during the manoeuvring phases, often passing from the inside of the boat to the outside. All roles have a point of view that is variable and depends on the navigation phase (e.g. steady sailing or manoeuvre) and on the task associated with the role. The helmsman will look ahead, focusing on navigation data, the course, and any obstacles or waypoints along it. The sail trimmer will mainly look at the sails to adjust them better. The bowman will keep under control the bow and the manoeuvring lines, and will take care of the boat to avoid problems or accidents. In situations with a reduced crew, a solo outing or a single active crew member accompanying passive guests, multiple roles are played by the same person, increasing their stress and responsibilities.

2 Motivation

This work aims to take a first step towards the definition of specific AR graphic interfaces for sailing, dedicated and optimized for the specific requests of the user. Current devices for displaying information on board already offer the ability to configure interfaces so that the user can see exactly the data he needs for a specific activity. This operation is not easy, and the tools can be configured to suit the needs of a single crew member only, forcing all other members to see the same information. Considering the large differences between crew members' needs and positions, we find it useful to use customized AR data in the different scenarios. We have diversified the display of navigation information related to the user's role, position, or activity on board through a dedicated AR graphic interface system for HMD for each user who requests it. This would solve the problem of everyone displaying the same data, giving each person the information they need based on their role or position on board. Furthermore, we believe that the user not only needs different information depending on the task, role or situation, but also a specific stabilization system to display that information suitably in the specific use case. For these reasons, we present these three proposals for sailing interfaces.

3 State of the Art

We started with a study of the state of the art on AR graphical interfaces specific to marine applications, reported in systematic reviews specific to this field [1, 2]. We analyzed the data to understand which AR devices are most commonly used and with which stabilization systems the variables are displayed. The research was conducted on interfaces in the sailing and maritime fields according to a study of scientific and commercial literature. The result is that in the sailing field, 58% of systems use HMDs while 24% use Head-Up Displays (HUD) and 18% use AR Monitors (MAR). As for the maritime field, most systems, 64%, use MAR while HMDs are used by 27% of systems.


We note that in the sailing field most systems use HMDs, probably due to the high dynamism on board, while in the maritime field, where the command position is confined to a specific location, mainly MAR systems are used. Since we wanted to develop our AR graphic interfaces specifically for sailing, we decided to develop them for HMDs, as highlighted by the systematic review [1]. The AR solutions described in the systematic review for sailing that use only HMDs amount to 58%; of these, 42.8% are World-stabilized, 28.6% are Screen-stabilized and 28.6% are hybrid solutions in which some information is displayed World-stabilized while other information is Screen-stabilized. None of the interfaces reported in the analyzed systematic review uses a Body-stabilized system. Therefore, in addition to presenting proposed AR graphic interfaces for World-stabilized and Screen-stabilized systems, we propose for the first time a Body-stabilized AR interface to study this approach to information visualization as well. Regarding hybrid systems, at this stage, we decided not to make proposals but to focus on systems already defined in the literature [3].

4 AR Interfaces

We propose three different graphical interfaces for viewing sailing navigation information, differentiated according to the type of navigation and the user who must use that information. The interfaces proposed are graphic elaborations that simulate AR apps and serve as a preliminary approach to the development of real apps. The interfaces are designed for an AR device of the Head-Mounted Display type that can be worn by any active crew member or by an inactive guest. Following these works [3, 4], we developed the three hypotheses according to three different types of information stabilization in space: Screen-stabilized, Body-stabilized and Boat-stabilized. With respect to Billinghurst's work [3], we introduce the concept of Boat-stabilized, in which the information is fixed with respect to the boat, with the system placed amidships, and remains in that position as the user's point of view and position change. This way of representing information in space is adopted because the boat becomes the space on which the user moves and interacts, even if the boat continues to move in the natural marine environment. We have called it "Boat-stabilized" to distinguish it from a World-stabilized system, in which information is fixed to the real world. The motivation for this choice always derives from the intrinsic dynamism of sailing, which is closely related to the boat.

4.1 Method

The three AR interface proposals have been developed by a panel of three expert sailors and are the result of the experience gained over many years of sailing activity in its three main macro sectors: racing, pleasure craft and teaching (Table 1). Their experience was necessary and fundamental for the development of the three interface proposals presented.

Table 1. Experience and skill by activities for each expert of the panel

Expert | Years of experience | Activities        | Skills
E. 1   | 45                  | Race              | Performance, safety, situation awareness, route planning, use of on-board electronics
E. 2   | 20                  | Race and teaching | Safety, route planning, docking, on-board comfort, use of on-board electronics
E. 3   | 15                  | Pleasure          | Safety, learning, situation awareness, ease of use

The panel of experts met with the research team weekly for a year, to help the researchers understand the dynamics during sailing from the point of view of the roles on board and of the different types of users and navigation. With the help of graphics programs such as Adobe Photoshop and Adobe Illustrator, graphics were generated to simulate the three different graphic interface systems within real on-board navigation scenarios.

4.2 Screen-Stabilized Interface

This graphical interface proposal shows the main navigation information already familiar to users, represented in a Screen-stabilized 2D system. The data shown are commonly displayed on all boats in the form of numerical values on LCD screens located near the helm, in the cockpit or at the mast base. In the proposed interface, the information is located in the upper part of the view, corresponding to the sky in most user positions. It is divided into two areas: a central area with numerical information and a more peripheral area where a graph is represented (Fig. 1). The data displayed in the left-central area are related to the wind: apparent wind angle (AWA), apparent wind speed (AWS), true wind direction (TWD) and true wind speed (TWS). These wind data are important in determining the choice of sails and their correct orientation, as well as in planning the route according to the wind conditions. In addition, these data are indicative of the weather and sea conditions and are useful for monitoring the evolution of meteorological phenomena to ensure safe navigation. In the central area, there is also information relating to the depth of the seabed, which is essential for conducting safe navigation, especially along the coastline, and useful during anchoring to determine the length of the chain to be dropped into the water. In the right-peripheral area, we find the compass graph, representing a top view of the boat inscribed in a compass. In this way, it is possible to have a graphic confirmation of the direction of the boat's bow and of the direction of the wind with respect to the boat. Inside the compass graph, there is a red circular sector that indicates the area in which it is not possible to sail (dead zone) because it is against the wind. At the centre of the compass graph, we find the speed and course of the boat.
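As a worked example of how the displayed true wind values relate to the apparent wind measured on board, the following sketch applies the standard vector computation; it is illustrative only and is not part of the proposed interface.

import math

def true_wind(awa_deg, aws, boat_speed, heading_deg):
    """True wind speed and direction (the direction the wind comes FROM, in degrees)
    from apparent wind angle/speed, boat speed and heading."""
    awa = math.radians(awa_deg)
    # wind components in the boat frame (x = ahead, y = starboard),
    # expressed as the direction the wind comes from
    x = aws * math.cos(awa) - boat_speed
    y = aws * math.sin(awa)
    tws = math.hypot(x, y)
    twa = math.degrees(math.atan2(y, x))          # true wind angle off the bow
    twd = (heading_deg + twa) % 360.0             # true wind direction over ground
    return tws, twd

# Example: 18 kn of apparent wind 30 deg off the bow, boat doing 6 kn on heading 090
print(true_wind(awa_deg=30, aws=18.0, boat_speed=6.0, heading_deg=90.0))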


Fig. 1. Screen-stabilized AR 2D interface proposal: the navigation data and the compass graph are placed in the upper peripheral area and fixed on the user’s point of view

This proposal is similar to the interfaces of EWOL [5] and Tactiqs [6]: both use an HMD and 2D Screen-stabilized graphics and text. Unlike these two solutions, our proposal leaves the central part of the user's field of view free, avoiding an invasive presence of AR information. Moreover, the information is grouped by category and limited in the number of data represented: five numerical data and one graph, in line with Miller's magic number [7]. This interface allows the users to view information fixed to their point of view at any point of the boat, both outside and inside, leaving both the orientation and the position of the information unchanged as the user's orientation and position change. This application can be used for coastal navigation for recreational use, both for day trips and for longer periods. It provides the user with information on wind and route for safe navigation and allows him/her to manage navigation along the coast and the anchoring phases typical of a sailing holiday. In this case, the user can be the boat's skipper: the interface allows a single user who must perform all the functions on board to always have control of what is happening. For this application, the user will be positioned mainly at the helm and his point of view will be stable, except when he is inside the boat. The strength of this system is that the user can monitor sensitive data wherever he is, even from inside the boat; it is especially suitable for users who can orient themselves with respect to the boat and who know the sailing conventions.

4.3 Body-Stabilized Interface

The following proposal shows a purely graphical interface in a Body-stabilized 3D system. The interface is stabilized to the user's hand because, during navigation phases, having a gesture system that allows activation would be useful and safer; the use of voice


would be difficult or impossible due to the disturbance of wind noise, moreover the use of protective gloves, which are regularly used onboard, would complicate the use of buttons or physical interaction with the device. In this case, user-centred interfaces can be adopted to activate the interface and improve the user experience, with a reduction in cognitive load and effort in interactions [8]. This stabilization system keeps the boat avatar always oriented in the same orientation as the real boat. This is very useful in order not to make the user lose his orientation, even when he is not able to see the bow of the real boat (e.g. at night or below deck). The representation of the interface elements is a hybrid between figurative and abstract elements: at the centre of the system is the figurative representation of the boat around which there are discs with vectors that are the abstract representation of meteorological phenomena and the route of the boat (Fig. 2). The interface follows a level structure [9], where information is displayed categorized and grouped by levels. The orientation of the graphics in the interface always coincides with the direction of the real boat, in this way the three-dimensional graphic model of the boat and the real boat are always aligned so that the user always has information about the direction of navigation even if he is inside the boat.

Fig. 2. Body-stabilized AR 3D interface proposal: the graphics reproduce the avatar of the boat and navigation data always oriented according to the real orientation of the boat and fixed concerning the user’s body

On the highest level we find information about the wind, at sea level all navigation information about the course and current, and at the bottom a double level formed by a floating level representing the seabed and changing its altitude according to the depth of the seawater and a red level representing the minimum depth at which the boat can navigate.


This solution is not comparable to any marine-specific AR interface found in the literature or in the commercial sector, whether for sailing or maritime navigation [1, 2]. We therefore present, for the first time, a Body-stabilized AR graphic interface specifically for sailing. This interface shows information that is fixed with respect to the position of the user's body and does not change position as the user's point of view changes. Therefore, if the user moves on the boat, the interface will follow him in his movement, always maintaining the same position with respect to the user. This application is designed to inform guests about the weather and route conditions during yachting navigation. The information is only graphic and does not present numerical details, but it provides a three-dimensional model of what is happening to the boat and allows for immediate visual feedback on the navigation situation. A low level of information detail is suitable for guests, newbies and all those situations where a high degree of precision is not required. In this case, the user can be in different positions with different points of view, and it is precisely for this reason that a Body-stabilized system has been chosen. The strength of this solution is the ability to provide concise but immediate information on the route and on the data of the boat, suitable for informing and entertaining guests and passengers even if they are unable to read and understand the numerical data on board.

4.4 Boat-Stabilized Interface

In this last graphical interface proposal, the information is represented in a hybrid graphical and numerical Boat-stabilized 3D system. The representation of information is a hybrid of numerical data and graphic elements. The reference system is polar, with an axis passing through the centre of the boat. On this axis, we find three circumferences that show the compass degrees, arranged according to a level structure relating to navigation and wind data (Fig. 3). The first level at the top, positioned in the sky, represents the direction of the wind and the angle of the wind with respect to the bow of the boat. The arrow indicates the wind direction similarly to an analogue wind direction indicator, and on the circumference of the compass it is possible to read the degrees, while the angle highlighted in green shows the angle of the wind with respect to the bow, which is also represented as numeric data. The central level, positioned just above the horizon, is the main compass, on which we find the references of the cardinal points and on which it is possible to read the degrees of all indicators (wind, current, course, obstacles and waypoints). The indicators, in addition to pointing to their reference compass, have a linear pointer that also shows the degrees on the main compass. The third level, located at the height of the boat, shows the line of the boat's direction, the direction of the current and the direction the boat would have after a tack. The direction of the boat is represented as a line, similar to the guideline of a compass, while the current is represented as a wavy element that indicates where the water flow goes. In this interface, there is also the representation of obstacles, using a triangular element (danger sign), or waypoints, using a circular element, surmounted by a vertical line that shows the reading of the bearing degrees on the main compass.


Fig. 3. Boat-stabilized AR 3D interface proposal: the graphics of the navigation data envelops the user in a cylindrical representation with an axis passing through the base of the mast

Next to the triangle, we find the distance between the boat and the obstacle, providing the operator with the data necessary to monitor the course of his boat with respect to the obstacle. Our Boat-stabilized AR interface with 3D elements is similar to the works of Butkiewicz [10] and Wisernig [11] and shares their three-dimensional approach of visualized data placed in the surrounding environment, the use of the compass as the main element of the interface and the data stabilization system. The substantial difference between our solution and the above-mentioned ones is the layered structure, placed on elevation planes at different heights with respect to the observer's point of view. In this interface, the information is fixed with respect to the boat, with the system centre located amidships, and remains in that position as the user's point of view and position change, except for the display of obstacles, which are fixed with respect to the marine environment. This interface proposal is useful both for recreational use and for a regatta application. The use of numerical data such as degrees and the distance between the boat and obstacles/waypoints allows for good information accuracy, which makes it useful even in regattas. Waypoint bearing can be applied to race marks and competitors, and the course on the opposite tack is useful in determining the correct course to reach the marks. This application allows users to have a high level of information detail and is extremely useful for those who manage navigation on board, but it is also good for the skipper in situations of solitary navigation or with a small crew. The strength of this interface is the quality of the information it provides to an experienced user and the versatility that allows it to be used both in cruising and racing.
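The distance and bearing shown next to obstacle and waypoint markers can be derived from two positions with standard great-circle formulas; the sketch below is illustrative only and the coordinates are made up.

import math

EARTH_RADIUS_M = 6_371_000.0

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) and initial bearing (deg) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    dist = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))      # haversine distance
    brg = math.degrees(math.atan2(math.sin(dlmb) * math.cos(p2),
                                  math.cos(p1) * math.sin(p2)
                                  - math.sin(p1) * math.cos(p2) * math.cos(dlmb)))
    return dist, brg % 360.0

# Boat near Bari harbour, waypoint a little to the north-east (hypothetical values)
print(distance_and_bearing(41.140, 16.866, 41.155, 16.890))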


5 Discussion

The AR interfaces presented are at an early stage of development and should be intended as proposals for future development work and testing. In any case, we can make some preliminary considerations to evaluate their positive aspects and weaknesses. As for the Screen-stabilized proposal, we can say that it is a flexible solution that allows configuring the information useful to the user; moreover, the numerical display of the information in the presented model is already familiar to users (currently the data are shown numerically on configurable displays), and this would reduce the learning phase. However, the use of numerical and two-dimensional data, as on current displays, may not represent a real innovation but simply the transposition of an old information display system onto a device different from the one currently used. The Body-stabilized interface presents an absolute novelty in this field and makes the displayed information easy to understand even by less experienced users. This can be very useful in the learning phase of sailing and would be a useful teaching support tool, providing a real-time picture of the phenomena that interact with the boat, such as the direction of the wind, which is invisible to the eye and difficult to identify for the less experienced. Thanks to this interface, more people could approach the sport of sailing even starting from a situation of total inexperience, strengthening the concept of easy sailing [12]. The limitation of this solution is the lack of numerical data, which makes it of little use to expert users. About the Boat-stabilized interface, we can say that it is quite similar to the AR interfaces for HMD found in the literature. Its strength is the accuracy of the information represented and the metaphorical affinity with physical elements that are constantly used on board, such as the compass or the analogue wind indicator. On the other hand, this system may not be immediate to learn, and this would require a longer training phase than the other solutions. In any case, it remains a work never tried before that can lead to the definition of guidelines for the design of specific AR interfaces for sailing, contributing to the development and dissemination of this technology in a field that has not yet been investigated.
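To make the comparison among the three proposals more concrete, the following sketch (an assumption, not the authors' implementation) expresses each stabilization mode as the reference frame in which a widget's pose is held constant; the head, body and boat poses would come from the tracking system, and the example values are made up.

import numpy as np

def pose(R=np.eye(3), t=(0.0, 0.0, 0.0)):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def widget_in_world(mode, anchor_T_widget, world_T_head, world_T_body, world_T_boat):
    """Return the widget pose in world coordinates for the given stabilization mode."""
    anchor = {"screen": world_T_head,        # follows the user's view (HMD pose)
              "body":   world_T_body,        # follows the user's body/hand
              "boat":   world_T_boat}[mode]  # fixed amidships, moves with the boat
    return anchor @ anchor_T_widget

# Example: a panel 1.5 m in front of its anchor, under three different anchors
panel = pose(t=(0.0, 0.0, -1.5))
head, body, boat = pose(t=(0, 1.7, 0)), pose(t=(0, 1.2, 0)), pose(t=(3, 0, 0))
for m in ("screen", "body", "boat"):
    print(m, widget_in_world(m, panel, head, body, boat)[:3, 3])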

6 Perspectives

In the future, we plan to use these proposed interfaces for the development of AR interface application systems. We want to use the work presented to test which information stabilization systems work best for the user, depending on the data to be represented and the navigation condition. The intention is to prototype the three interface hypotheses and place them in a virtual environment consisting of a realistic marine scenario and a pleasure sailing boat similar to one of the models on the market. In this way, we could simulate the movement of the sea and the boat, the atmospheric events, the wind and the changes of light in a controlled and always repeatable environment. The repeatability will give us the possibility of having scenes that are comparable to each other with the same variation of the elements.


The interfaces submitted to a sample of users will be followed by the User Experience Questionnaire (UEQ) [13] and the System Usability Scale (SUS) questionnaire [14], to collect data on the progress and evolution of the work. This methodology will allow us to compare the effectiveness of the proposals on a real sample of sailors and possible future users. These tests and experiments, partially completed in VR modality [15], will help us in the development of a system of AR interfaces that can be modulated on the needs of individual users. According to two systematic reviews [1, 2] specific to the use of AR interfaces for nautical navigation, research of this type has not yet been addressed in this field. Our work aims to bring AR technology to the nautical field because we believe it can be a tool that simplifies navigation and increases safety on board [12].
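For reference, a small sketch of the standard SUS scoring procedure that the planned tests would rely on; the answer vector below is hypothetical.

def sus_score(answers):
    """answers: ten responses on the 1-5 scale, item 1 first."""
    if len(answers) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)   # odd items positive, even negative
    return total * 2.5                                 # overall score on a 0-100 scale

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))       # -> 85.0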

7 Conclusions

Three novel proposals for sail-specific AR graphic interfaces are presented, one for each information stabilization system applied: Screen-stabilized, Body-stabilized, and Boat-stabilized. The choice of system depends on the role, position, or activity that the user carries out on board, and each is oriented to a different use. We propose a Screen-stabilized interface suitable for a skipper sailing single-handed or with a reduced crew in pleasure sailing; an exclusively graphic and intuitive Body-stabilized interface designed for guests on board or inexperienced users who would not be able to read and interpret numerical data; and a hybrid graphic and numerical Boat-stabilized interface with a high level of detail of specific information for expert users, useful both in racing and cruising. The interfaces are designed to be viewed on wearable HMD devices, such as the HoloLens 2, by one or more crew members, removing the constraint that everyone views the same data and giving each user the information they need based on their role or position.

References
1. Fiorentino, M., et al.: Sailing data visualization in augmented reality: systematic review, issues, and perspectives. Mar. Technol. Soc. J. 55(2), 64–80 (2021). https://doi.org/10.4031/MTSJ.55.2.2
2. Laera, F., et al.: Augmented reality for maritime navigation data visualisation: a systematic review, issues and perspectives. J. Navig., 1–18 (2021). https://doi.org/10.1017/s0373463321000412
3. Billinghurst, M., et al.: A wearable spatial conferencing space. In: International Symposium on Wearable Computers, Digest of Papers (1998). https://doi.org/10.1109/ISWC.1998.729532
4. Feiner, S., et al.: Windows on the world: 2D windows for 3D augmented reality. In: UIST 1993 - Proceedings of the 6th Annual ACM Symposium on User Interface Software and Technology (1993). https://doi.org/10.1145/168642.168657
5. EWOL: EWOL, World’s first Augmented Reality Sailing Navigation (2018). https://ewol.eu/worlds-first-augmented-reality-sailing-navigation/. Accessed 12 Jan 2021
6. Tactiqs®: The Tactiqs® Performance System (2019). https://tactiqs.io/. Accessed 12 Jan 2021


7. Miller, G.A.: The magical number seven, plus or minus two: some limits on our capacity for processing information. Psychol. Rev. (1956). https://doi.org/10.1037/h0043158
8. Uva, A.E., et al.: A user-centered framework for designing midair gesture interfaces. IEEE Trans. Human-Machine Syst. (2019). https://doi.org/10.1109/THMS.2019.2919719
9. Laera, F., et al.: Towards sailing supported by augmented reality: motivation, methodology and perspectives. In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 299–274 (2020). https://doi.org/10.1109/ismar-adjunct51615.2020.00076
10. Butkiewicz, T.: Designing augmented reality marine navigation aids using virtual reality. In: OCEANS 2017 - Anchorage (2017)
11. Wisernig, E., et al.: Augmented reality visualization for sailboats (ARVS). In: Proceedings - 2015 International Conference on Cyberworlds, CW 2015 (2016). https://doi.org/10.1109/CW.2015.74
12. Laera, F.: Augmented reality for easy sailing. In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct) (2020). https://doi.org/10.1109/ismar-adjunct51615.2020.00088
13. Sauro, J., et al.: Quantifying the User Experience: Practical Statistics for User Research, 2nd edn. (2016)
14. Bangor, A., et al.: An empirical evaluation of the system usability scale. Int. J. Hum. Comput. Interact. (2008). https://doi.org/10.1080/10447310802205776
15. Laera, F., et al.: Augmented reality interface for sailing navigation: a user study for wind representation. In: 2021 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 260–265 (2021). https://doi.org/10.1109/ismar-adjunct54149.2021.00060

Displaying Augmented Reality Manuals in the Design Phase of the Product Lifecycle

Enricoandrea Laviola, Michele Gattullo, and Alessandro Evangelista

Department of Mechanics, Mathematics, and Management, Polytechnic Institute of Bari, via Orabona, 4, 70125 Bari, Italy [email protected]

Abstract. Augmented Reality (AR) has proved effective in maintenance operations in the industrial field. In a concurrent engineering approach, the authoring of AR manuals, which convey instructions to operators through AR, must be carried out during the design phase of the product lifecycle. A reliable solution is therefore needed to speed up the development of AR manuals when the product is not physically available. We compared three solutions for displaying a demo version of an AR manual when the real product is not available, replacing it with its CAD model. Based on the user study results, the main features of each demo version were collected. The Augmented Reality (AR) solution shows the physically unavailable product at real scale, with the drawback of requiring a physical printed marker. The Desktop Virtual Reality (DVR) prototype overcomes the problem of managing different devices and real-life locations, but offers neither real scale nor natural interaction. The Augmented Desktop Virtual Reality (ADVR) prototype makes it easier to distinguish the virtual elements of the true AR from those simulating the real product, but its interaction is less natural due to the use of a secondary screen. As a case study, we chose a compressor that a local company is going to produce together with its AR manual. Although users overall preferred the AR demo version, the company chose the ADVR solution because it gives a better perception of what the result of the true AR application would be.

Keywords: Augmented Reality · Task simulation · Work instructions · Prototyping

1 Introduction and Motivation

Augmented Reality (AR) is an emerging technology that has become increasingly important in Industry 4.0 over the last decade. The literature offers many demonstrations of AR in fields such as industry, education, retail, advertising, commerce, cultural heritage, and medicine [1–3]. In particular, AR technology offers great added value in manufacturing [4–6] thanks to its capability to show, in real time, additional virtual information referenced in a real context, as defined by Azuma et al. [7]. AR has been proved effective in maintenance operations by many studies in the literature [8–10]. The authoring of AR manuals consists of defining how to convey


instructions to operators exploiting AR. In a concurrent engineering approach, their design must take place during the design phase of the product lifecycle, in parallel with the product design. In fact, the AR manual project also has implications for the product design, e.g., regarding solutions to improve tracking. There may therefore be situations where developers do not have the product physically available, so a reliable solution is needed to speed up the AR manual development process. In this work, we compared three solutions for displaying a demo version of an AR manual when the real product is not available, replacing it with its CAD model.

Traditionally, companies treat maintenance as part of a sequential engineering process that aims to restore any functionality of a product within its lifecycle [11]. Product manuals are usually produced after the design process, writing all the work instructions in the form of text and drawings. The authoring of AR manuals is not a simple process because AR depends on many factors that must be considered at the same time, such as the environment and lighting. Moreover, it offers the possibility to display instructions in various ways through visual assets, i.e., visual user interface elements associated with a real scene, such as auxiliary models, product models, and drawings [12]. AR manuals therefore need a more complex design than traditional manuals. From the user needs analysis, it is possible to design the AR manual and specify its system requirements. Based on these requirements, in an industrial setting, a demo version of the AR manual is generally developed to test the soundness of the design; it is shown to company managers, who discuss it and propose revisions. For existing products, the AR demo manual can be shown directly on the real product. For new products, the AR demo manual can be displayed on a real prototype only after some prototypes have been produced, slowing down the development of the final application. Most works in the literature show how to design AR manuals for existing products, usually converting old printed manuals into AR manuals [13, 14]. The research question of this work is therefore: “how to display a demo version of an AR manual in the design phase of a new product?” To answer it, we compared three different solutions, using as a case study an innovative semi-hermetic compressor that has not yet been produced.

2 Case Study

The case study of our work refers to the development of an AR manual commissioned by a local company. The product is an innovative semi-hermetic compressor still under development, and thus not physically available. The application must help workers carry out operations concerning the installation, dismantling, and maintenance of the compressor. Moreover, the instructions shall also be used to identify and localize all the components of the product. Therefore, an in-depth study of the choice of the correct visual assets is required to fully exploit the potential of AR. In a concurrent engineering approach, the prototype of the AR manual must be developed in parallel with the product design. The main issue is thus how to show the compressor and attach AR instructions to it. Therefore, three different demo versions of the manual were developed with the same purpose:


identify the best technical solution in order to perform an adequate analysis for choosing the visual assets to be used in the final AR prototype once the real compressor becomes available. For each proposal, we developed an application for a handheld device using the Unity 3D engine, importing the same 3D CAD model of the compressor provided by the company. It is important to highlight that all three demo versions show the same content but propose different interactions between users and the 3D CAD model for viewing the AR information referenced to the virtual compressor. When users start the application, an initial virtual menu appears with buttons that guide them to each section according to the structure of the digital manual provided by the company. In the virtual scenes that do not require AR instructions, users access the information primarily through textual content, but also through screen-fixed graphics and illustrative drawings. In the AR interface, instead, users interact with visual assets (e.g., auxiliary models, product models, drawings, labels) through buttons arranged along the edges of the device screen, so as not to occlude the virtual elements displayed in AR while remaining comfortable to press when holding the device with both hands.

2.1 Demo Versions of the AR Manual

The three developed demo versions of the AR manual were named using the following terminology:

1. Augmented Reality (AR) prototype
2. Desktop Virtual Reality (DVR) prototype
3. Augmented Desktop Virtual Reality (ADVR) prototype

In the AR approach, Vuforia natural feature tracking was used to attach the CAD model of the compressor to a 2D image, namely a 1:1 scale printed drawing of the top view of the CAD model (see Fig. 1). In this way, during the AR manual demonstration, the 3D model of the compressor is displayed on the user’s device in place of the real compressor, and all the AR instructions are attached to it.

Fig. 1. User interaction example of the AR demo version: the handheld device frames the natural feature, and in this way, the AR content, consisting of the virtual compressor and the proposed visual assets, appears on the user’s device.


As a second solution, the DVR prototype was developed by creating an entirely virtual environment with both the product and the instructions displayed on the handheld device (see Fig. 2). Similarly to what was proposed in [15], users interact with the 3D model of the virtual compressor using the touch screen of the device. They can pan, rotate, and zoom the 3D model of the entire product as in any 3D CAD software. Users can then use GUI buttons to visualize the instructions as virtual elements that would normally be seen in AR if the product were physically available.

Fig. 2. User interaction example of the DVR demo version: the handheld device shows the 3D model of the compressor and the AR content, consisting of the proposed visual assets, in an entirely virtual environment.

In the last alternative, the ADVR demo version was proposed. A support application was first used to show the CAD model of the compressor in a typical 3D CAD software environment on a desktop computer. As in the DVR solution, the user can pan, rotate, and zoom the 3D model of the entire product, but using a keyboard and mouse on the desktop computer instead of the touch screen on the handheld device. For the AR manual, the Vuforia model target (360° option) was used on the handheld device to track the CAD model of the compressor as if it were the real compressor (see Fig. 3). During the demonstration, the CAD model of the product is therefore displayed on a large fixed screen, and the instructions are attached to it in the AR manual.

Fig. 3. User interaction example of the ADVR demo version: the handheld device shows the AR content, consisting of the proposed visual assets, referenced to the 3D CAD model of the compressor displayed on a secondary screen.


2.2 Design of the Experiment

The entire experiment carried out with each user was divided into two parts: an initial session to explain the three prototypes of the AR manual and a subjective questionnaire. In the first phase, the experimenters explained to participants how each of the proposed approaches works, also focusing on the type of user interaction. At the end of the demonstration, users were asked to fill out a subjective questionnaire. It measures subjective satisfaction based on different features associated with user satisfaction, such as the content and the interaction with the content. The proposed questionnaire is based on statements evaluating the user experience of mobile augmented reality services [16] and on the Technology Acceptance Model (TAM) criteria [17]. Some items from the table “Examples of formative subjective statements with regard to the value and overall goodness of the service in terms of the UX category in question” and from the standard TAM questionnaire were picked and adapted to our scenario. The proposed questionnaire consists of 10 items for each demo version on a seven-point Likert scale (1–7), covering eight categories of evaluation criteria: empowerment, efficiency, meaningfulness, intuitiveness, captivation, motivation, perceived ease of use, and intention to use (see Table 1). At the end of the questionnaire, users were asked to provide comments on the advantages and disadvantages of each demo version.

Table 1. Questionnaire used for the user study.

Category | ID | Item
Empowerment | Q1 | I think the application allows to pursue goals that are not supported by the other proposed technologies
Efficiency | Q2 | I think the application can help me to choose visual assets for the final AR manual in an efficient way
Meaningfulness | Q3 | I think with the application I can access information in the most appropriate place and time
Meaningfulness | Q4 | I think the content of the application makes sense in the context I use it
Intuitiveness | Q5 | I think the application allows a natural way to interact with digital information
Captivation | Q6 | I think I have a good conception of what is real and what is augmented in the AR application
Motivation | Q7 | I think the application encourages me to produce and share information with other users in choosing visual assets
Perceived ease of use | Q8 | I think learning how to use the application is easy for me
Perceived ease of use | Q9 | I think the application simulates well the use of the final interface of the AR manual
Intention to use | Q10 | Assuming I have access to the application, I intend to use it for choosing visual assets in an AR manual


2.3 Participants

The survey was administered to 15 unpaid participants (2 females, 23 to 33 years old, mean = 25.2, SD = 2.71). They were all master’s degree students in Mechanical Engineering, except for a Ph.D. student and a post-doc research fellow in Mechanical Engineering. Familiarity with AR, rated 5.2 on average (SD = 1.33, Median = 5, Min = 2, Max = 7), and with VR, rated 5.6 on average (SD = 0.88, Median = 6, Min = 4, Max = 7), was collected on a 7-point Likert item (1: Not at all familiar – 7: Extremely familiar) before conducting the experiment.

2.4 Results

A within-subject experiment was designed since data were obtained from each participant under all three experimental conditions (AR, DVR, ADVR demo versions). The Shapiro-Wilk normality test (AS R94 algorithm) revealed that the original data did not follow a normal distribution, so non-parametric tests were performed. The Friedman two-way ANOVA was used to compare the three samples for each item. The Wilcoxon signed-rank test, with the Bonferroni correction resulting in a significance level of p < 0.017, was used as the post-hoc test of the Friedman two-way ANOVA. Figure 4 summarizes the data collected from the statistical analysis.


Fig. 4. Plot of the questionnaire median score for each item (*: significant difference).
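To make the statistical procedure described above concrete, the following Python sketch runs the same non-parametric pipeline (Shapiro-Wilk normality check, Friedman test, post-hoc Wilcoxon signed-rank tests with Bonferroni correction) using SciPy. It is an illustration added for this write-up, not the scripts used in the study: the randomly generated scores are placeholders for the real questionnaire responses.

```python
# Illustrative sketch (not the original analysis scripts): non-parametric pipeline
# of Sect. 2.4, run on hypothetical 7-point Likert scores.
import numpy as np
from scipy import stats

ITEMS = [f"Q{i}" for i in range(1, 11)]
CONDITIONS = ["AR", "DVR", "ADVR"]

# scores[condition] -> array of shape (participants, items); placeholder data only.
rng = np.random.default_rng(0)
scores = {c: rng.integers(1, 8, size=(15, 10)) for c in CONDITIONS}

alpha = 0.05
bonferroni_alpha = alpha / 3          # three pairwise comparisons -> p < 0.017

for j, item in enumerate(ITEMS):
    samples = [scores[c][:, j] for c in CONDITIONS]

    # Normality check per condition (Shapiro-Wilk); any rejection -> non-parametric tests.
    normal = all(stats.shapiro(s).pvalue > alpha for s in samples)

    # Friedman two-way ANOVA by ranks across the three related samples.
    chi2, p = stats.friedmanchisquare(*samples)
    print(f"{item}: normal={normal}, chi2={chi2:.3f}, p={p:.4f}")

    if p < alpha:
        # Post-hoc pairwise Wilcoxon signed-rank tests with Bonferroni correction.
        for a, b in [("AR", "DVR"), ("AR", "ADVR"), ("DVR", "ADVR")]:
            w, p_pair = stats.wilcoxon(scores[a][:, j], scores[b][:, j])
            flag = "*" if p_pair < bonferroni_alpha else ""
            print(f"  {a} vs {b}: p={p_pair:.4f} {flag}")
```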

For all the three demo versions, it was observed that there was no statistically significant difference for the items Q1, Q4, Q6, Q7, and Q10. There was a statistically significant difference in the perceived efficiency (item Q2), χ2(2) = 18.250, p < 0.001. Post hoc analysis with Wilcoxon signed-rank tests was conducted. Median perceived efficiency for AR, DVR, and ADVR demo versions were 6 (6 to 6), 6 (5 to 6), 5 (3 to 6), respectively. There were no significant differences between the AR prototype and the DVR one (Z = −1.265, p = 0.206) or between DVR and ADVR (Z = −2.254, p = 0.024). However, there was a statistically significant reduction in perceived efficiency in the ADVR demo version vs AR one (Z = −2.446, p = 0.014).


There was a statistically significant difference in the perceived meaningfulness regarding information access in the most appropriate place and time (item Q3), χ2(2) = 18.250, p < 0.001. Post hoc analysis with Wilcoxon signed-rank tests was conducted. Median perceived meaningfulness for the AR, DVR, and ADVR demo versions was 6 (3 to 6), 7 (5 to 7), and 3 (2 to 4), respectively. There was no significant difference between the AR prototype and the DVR one (Z = −1.977, p = 0.048). However, there were statistically significant reductions in perceived meaningfulness in the ADVR demo version vs both the AR one (Z = −3.166, p = 0.002) and the DVR one (Z = −3.141, p = 0.002).

There was a statistically significant difference in the perceived intuitiveness (item Q5), χ2(2) = 14.941, p < 0.001. Post hoc analysis with Wilcoxon signed-rank tests was conducted. Median perceived intuitiveness for the AR, DVR, and ADVR demo versions was 7 (6 to 7), 6 (5 to 6), and 5 (2 to 5), respectively. There was no significant difference between the AR prototype and the DVR one (Z = −1.833, p = 0.067). However, there were statistically significant reductions in perceived intuitiveness in the ADVR demo version vs both the AR one (Z = −3.146, p = 0.002) and the DVR one (Z = −2.523, p = 0.012).

There was a statistically significant difference in the perceived ease of learning (item Q8), χ2(2) = 17.882, p < 0.001. Post hoc analysis with Wilcoxon signed-rank tests was conducted. Median perceived ease of learning for the AR, DVR, and ADVR demo versions was 7 (6 to 7), 6 (6 to 7), and 5 (4 to 6), respectively. There was no significant difference between the AR prototype and the DVR one (Z = −1.000, p = 0.317). However, there were statistically significant reductions in perceived ease of learning in the ADVR demo version vs both the AR one (Z = −2.820, p = 0.005) and the DVR one (Z = −2.831, p = 0.005).

There was a statistically significant difference in the perceived ease of use regarding the similarity to end-use with the real compressor (item Q9), χ2(2) = 8.851, p = 0.012. Post hoc analysis with Wilcoxon signed-rank tests was conducted. Median perceived ease of use for the AR, DVR, and ADVR demo versions was 6 (6 to 7), 6 (5 to 7), and 4 (4 to 6), respectively. There were no significant differences between the AR prototype and the DVR one (Z = −1.026, p = 0.305) or between DVR and ADVR (Z = −1.588, p = 0.112). However, there was a statistically significant reduction in perceived ease of use in the ADVR demo version vs the AR one (Z = −2.914, p = 0.004).

As regards participants’ feedback on the AR demo version, 11 of them mentioned the naturalness and immediacy of the interaction, and 6 the advantage of displaying virtual information referenced to a real-scale CAD model. However, users also pointed out the weakness of using a natural feature, namely that the tracking is far from what the final result will be when the compressor is physically available. In particular, 2 of them stated that it is difficult to distinguish the visual assets from the virtual representation of the real object.

As for participants’ comments on the DVR prototype, 10 of them mentioned the ease of access, since no tracking is required. Five users noted the naturalness of the interaction, even though 8 participants pointed out the complete detachment from the real context, with great difficulty in recognizing the true AR from the virtual 3D model replacing the real product. Moreover, 2 of them said that not using a 1:1 scale could be a disadvantage while testing the design soundness of the visual assets.


Lastly, the ADVR solution was found more demanding in terms of interaction by 12 participants, due to the need to use a secondary screen. Two users also pointed out the potential criticality of the tracking, since the model target is applied to a 3D model displayed on a 2D screen, which may compromise the final result. However, 4 participants mentioned great ease in distinguishing the visual assets from the virtual representation of the real object, and 5 users stated a possible increase in information sharing between colleagues. The participants’ final choices (see Fig. 5) show that the AR demo version is overall preferred (67%) compared to the DVR (20%) and ADVR (13%) prototypes.

Fig. 5. Plot of the participants’ final choices of the best demo version display in the design phase of a new product.

3 Discussion

The user evaluation of the proposed AR, DVR, and ADVR demo versions provided an initial answer to our research question “how to display a demo version of an AR manual in the design phase of a new product?”. The results obtained with the questionnaire are confirmed by the technical comments provided by the participants, which highlight the advantages and disadvantages of the prototypes under study. The main drawback of the AR solution is the need for a physical printed image as a natural feature to attach the AR instructions to the compressor, even though it is a CAD model instead of a real one. However, this limitation can be overcome using markerless tracking methods. Furthermore, this approach shows the physically unavailable product at real scale, aiming to provide a realistic perception of the effective size of the compressor during the demonstration. The DVR proposal is characterized by a technique commonly called simulated AR [18], indirect AR [19], or immersive virtual AR [20]. Previous works [19, 20] revealed that using a simulated AR system ensures results comparable to true AR for


user evaluation, but with lower costs and without the problem of managing different devices and real-life locations, as is the case for a product that is not physically available. In fact, this approach does not need a printed marker during the demonstration but only the handheld device. This result is also confirmed by the answers to item Q3, for which the DVR prototype obtained the highest score compared to the other demo versions. However, the compressor 3D model is resized in a fully virtual environment. The interaction cannot be considered natural, since the user has to use the touch screen to change the view of the product, even though our case study did not find a significant difference for item Q5 in the interaction between the DVR and AR demo versions. This outcome is probably a consequence of the comparison with the ADVR solution, which showed a significant reduction in the naturalness of interaction (item Q5) and the perceived ease of learning (item Q8). The ADVR prototype, in fact, requires that the CAD model of the compressor is shown on a device different from the handheld device. Therefore, two users are needed for a comfortable interaction: one runs the application containing only the AR instructions, while the other moves the 3D model in the virtual environment. However, both during the demonstration and after the questionnaire, it was found that this solution guarantees that the user can distinguish more easily the virtual elements of the true AR from the simulated AR, unlike the previous approaches. Furthermore, an unexpected result emerged from the participants’ comments concerning the stimulation of information sharing between users with this last solution. Probably the need to use a large secondary screen to interact with the 3D model of the compressor encourages users to use more handheld devices at the same time, sharing opinions on the choice of visual assets.

After the user study, the three solutions were evaluated together with the company. Although the user study participants overall preferred the AR prototype, the company chose the ADVR solution. The main motivation could be that the users interviewed in the case study expressed a fairly high level of familiarity with AR and VR technologies, so no significant difference was found in the conception of what is real and what is augmented among the three prototypes (item Q6). Conversely, in an industrial setting, the company also needs to consider managers who are not familiar with these innovative technologies and for whom it may be more difficult to distinguish the visual assets from the virtual representation of the real object. Furthermore, they are less able to perceive the small tracking inaccuracies that users detected in the ADVR solution. Taking this into account, for the company the ADVR demo version allows a better perception of what the final result of the true AR application would be, regardless of the type of tracking that will be chosen when the product is physically available, and encourages sharing ideas on choosing the best visual assets for the AR manual.

4 Conclusion and Future Works

In this work, we proposed a case study concerning the development of an AR manual, i.e., the definition of how to convey instructions to operators exploiting AR. We compared three solutions (AR, DVR, and ADVR prototypes) for displaying a demo version of the AR manual when the real product is not available during the design phase of its


lifecycle, opting to replace it with its CAD model. We considered the results of a user experience questionnaire, the participants’ feedback, and the company’s choice. The AR solution shows the physically unavailable product at real scale, with the drawback of needing a physical printed image as a natural feature. The DVR prototype overcomes the problem of managing different devices and real-life locations, but offers neither real scale nor natural interaction. The ADVR prototype makes it easier to distinguish the virtual elements of the true AR from the simulated AR, but its interaction is less natural due to the use of a secondary screen. Although the user study participants overall preferred the AR demo version, the company chose the ADVR solution because it gives a better perception of what the final result of the true AR application would be, regardless of the type of tracking that will be chosen when the product is physically available. In future works, we plan to perform a second user study to compare the user experience of the three proposed demo versions with the true AR application on the real product, in order to confirm the best solution for future products that are not physically available.

References
1. Nee, A.Y.C., Ong, S.K., Chryssolouris, G., Mourtzis, D.: Augmented reality applications in design and manufacturing. CIRP Ann. Manuf. Technol. 61, 657–679 (2012). https://doi.org/10.1016/j.cirp.2012.05.010
2. Carmigniani, J., Furht, B., Anisetti, M., Ceravolo, P., Damiani, E., Ivkovic, M.: Augmented reality technologies, systems and applications. Multimed. Tools Appl. 51, 341–377 (2011). https://doi.org/10.1007/s11042-010-0660-6
3. Inversini, A., Schegg, R. (eds.): Information and Communication Technologies in Tourism 2016. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-28231-2
4. de Souza Cardoso, L.F., Mariano, F.C.M.Q., Zorzal, E.R.: A survey of industrial augmented reality. Comput. Ind. Eng. 139, 106159 (2020). https://doi.org/10.1016/j.cie.2019.106159
5. Ong, S.K., Yuan, M.L., Nee, A.Y.C.: Augmented reality applications in manufacturing: a survey. Int. J. Prod. Res. 46, 2707–2742 (2008). https://doi.org/10.1080/00207540601064773
6. Agati, S.S., Bauer, R.D., Hounsell, M.D.S., Paterno, A.S.: Augmented reality for manual assembly in Industry 4.0: gathering guidelines. In: Proceedings - 2020 22nd Symposium on Virtual and Augmented Reality, SVR 2020, pp. 179–188 (2020). https://doi.org/10.1109/SVR51698.2020.00039
7. Azuma, R.T.: A survey of augmented reality. Presence 6, 355–385 (1997). https://doi.org/10.1162/pres.1997.6.4.355
8. Palmarini, R., Erkoyuncu, J.A., Roy, R., Torabmostaedi, H.: A systematic review of augmented reality applications in maintenance. Robot. Comput. Integr. Manuf. 49, 215–228 (2018). https://doi.org/10.1016/j.rcim.2017.06.002
9. Scurati, G.W., Gattullo, M., Fiorentino, M., Ferrise, F., Bordegoni, M., Uva, A.E.: Converting maintenance actions into standard symbols for Augmented Reality applications in Industry 4.0. Comput. Ind. 98, 68–79 (2018). https://doi.org/10.1016/j.compind.2018.02.001
10. Ceruti, A., Marzocca, P., Liverani, A., Bil, C.: Maintenance in aeronautics in an Industry 4.0 context: the role of Augmented Reality and Additive Manufacturing. J. Comput. Des. Eng. 6, 516–526 (2019). https://doi.org/10.1016/j.jcde.2019.02.001
11. Komonen, K.: A cost model of industrial maintenance for profitability analysis and benchmarking. Int. J. Prod. Econ. 79, 15–31 (2002). https://doi.org/10.1016/S0925-5273(00)00187-0


12. Gattullo, M., Evangelista, A., Uva, A.E., Fiorentino, M., Gabbard, J.L.: What, how, and why are visual assets used in industrial augmented reality? A systematic review and classification in maintenance, assembly, and training (from 1997 to 2019). IEEE Trans. Vis. Comput. Graph. 28, 1443–1456 (2020). https://doi.org/10.1109/TVCG.2020.3014614
13. De Amicis, R., Ceruti, A., Francia, D., Frizziero, L., Simões, B.: Augmented Reality for virtual user manual. Int. J. Interact. Des. Manuf. (IJIDeM) 12(2), 689–697 (2017). https://doi.org/10.1007/s12008-017-0451-7
14. Lampen, E., Teuber, J., Gaisbauer, F., Bär, T., Pfeiffer, T., Wachsmuth, S.: Combining simulation and augmented reality methods for enhanced worker assistance in manual assembly. Procedia CIRP 81, 588–659 (2019). https://doi.org/10.1016/j.procir.2019.03.160
15. Lee, E.A.L., Wong, K.W.: Learning with desktop virtual reality: low spatial ability learners are more positively affected. Comput. Educ. 79, 49–58 (2014). https://doi.org/10.1016/j.compedu.2014.07.010
16. Olsson, T.: Human Factors in Augmented Reality Environments, pp. 203–232 (2013). https://doi.org/10.1007/978-1-4614-4205-9
17. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manage. Sci. 35, 982–1003 (1989). https://doi.org/10.1287/mnsc.35.8.982
18. Jeffri, N.F.S., Rambli, D.R.A.: Problems with physical simulation in a virtual lego-based assembly task using Unity3D engine. In: Proceedings - 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality, AIVR 2020, pp. 298–300 (2020). https://doi.org/10.1109/AIVR50618.2020.00060
19. Wither, J., Tsai, Y.T., Azuma, R.: Indirect augmented reality. Comput. Graph. 35, 810–822 (2011). https://doi.org/10.1016/j.cag.2011.04.010
20. Alce, G., Hermodsson, K., Wallergård, M., Thern, L., Hadzovic, T.: A prototyping method to simulate wearable augmented reality interaction in a virtual environment - a pilot study. Int. J. Virtual World Hum. Comput. Interact. (2015). https://doi.org/10.11159/vwhci.2015.003

From Therbligs to Visual Assets: A Technique to Convey Work Instructions in Augmented Reality Technical Documentation

Michele Gattullo, Enricoandrea Laviola, and Antonio Emmanuele Uva

Department of Mechanics, Mathematics, and Management, Polytechnic Institute of Bari, via Orabona, 4, 70125 Bari, Italy [email protected]

Abstract. In this work, we present a technique to simplify the authoring of Augmented Reality Technical Documentation, allowing technical writers with limited knowledge of Augmented Reality (AR) to produce this new type of documentation in their companies. In contrast to Traditional Technical Documentation, AR offers the opportunity to provide exactly the amount of information needed through a careful design of the AR interface. However, in the literature there are no established techniques to break work instructions down into elemental pieces of information and to define how to convey this information through visual assets in AR. In this work, we propose identifying the information contained in the work instructions of technical documentation by extending the method of “Therbligs,” already used in the literature to describe assembly tasks. First, we defined six classes of information types: identity, location, order, way-to, notification, and orientation. Then, we showed how to use these information types to break down the work instructions of an assembly manual used as a case study. We found that the six information types were enough to analyze the complete manual. The second contribution of this work is the proposal of the most suitable visual asset and its properties for each information type. This goal was accomplished through a technical discussion in a focus group with ten experts in the design of AR technical documentation.

Keywords: Augmented reality · Technical documentation · Work instructions · Authoring · Information presentation

1 Introduction

Over the last twenty years, numerous studies have confirmed that Augmented Reality (AR) is a powerful tool to assist industrial operators, helping them in assembly tasks, maintenance applications, context-aware assistance, quality control, data visualization, and interaction. Therefore, industrial companies are increasingly asking to convert their technical documentation from a traditional version, usually delivered in the form of printed manuals, into a new AR version. Traditional technical documentation (TTD), either in the form of printed manuals or of Electronic Work Instructions [1], is a simple collection of work instructions, and much information is often conveyed through


large texts or complex images that are hard to relate to the working products. In AR technical documentation (ARTD), instead, work instructions can be placed in precise positions on real products, and the information can be chunked and conveyed through tailored visual assets [2] (e.g., CAD models, drawings, videos). On the one hand, this new kind of technical documentation leads to potential improvements in operator performance; on the other hand, new issues appear in the authoring of technical documentation. They are mostly due to the interconnection with the real environment and the larger variety of usable visual assets. Consequently, the design of instructions for ARTD requires highly qualified developers [1]. Nevertheless, there is still a lack of expert knowledge of AR in companies, and this is one of the factors limiting the adoption of AR in industry [3]. Therefore, unlike for TTD, research studies are needed to provide clear guidelines and case studies that allow users not skilled in AR to author ARTD.

In the literature, there is no agreement on the parameters to be used to analyze work instructions and then choose the corresponding visual assets for elemental information. Some studies propose visual assets with different levels of information detail based on the operator’s experience or preference [4, 5]. In other studies [6, 7], the assembly task is decomposed into subtasks or basic operations without a unique and standard method. In some cases, a distinction is also made based on task complexity [8, 9]. However, the complexity of a task is determined by various factors (e.g., the number and shapes of the pieces to assemble) and is therefore difficult to define [6].

In this work, we proposed a technique to help convert a TTD into an ARTD, aiming to answer the following research questions: 1) “What is the elemental information contained in work instructions?” 2) “Which virtual content is recommended for each information type?”. We answered the first research question by proposing an information model to analyze work instructions based on separating the information they contain. We started from the method of “Therbligs” [10] that, according to Oyekan et al. [11], can be used to describe any task. Then, we extended this previous work by clustering the information needed to perform a Therblig motion (e.g., find, grasp, assemble) into six information types: identity, location, order, way-to, notification, and orientation. Finally, based on these information types, we proposed a suitable visual asset for each of them to answer our second research question. This goal was achieved through a focus group with experts in the design of ARTD, aiming to transfer their knowledge to technical writers not skilled in AR.

2 The Information Model for the Analysis of Work Instructions

2.1 From the Method of Therbligs to the Information Types

The Therblig method is a way to describe the motions involved in performing a task [10]. The method was invented and refined by Frank and Lillian Gilbreth in the early 1900s with the aim of identifying unnecessary or fatigue-producing motions in each type of activity. In [10], it is possible to find a detailed list of the eighteen Therbligs that can be used to describe any task: search, find, select, grasp, hold, transport loaded, transport empty, position, assemble, use, disassemble, inspect, pre-position, release load, unavoidable delay, avoidable delay, plan, and rest.


Oyekan et al. applied the Therblig method to describe fitting instructions for assembling a ‘flat-pack’ table [11]. They further extended the method by presenting the model of a Therblig: humans examine the environment and, based on “information to start” and “information to end,” can perform the action described by the Therblig. They listed all the “information to start” and “information to end” for each Therblig. For example, for the Therblig “search,” the operators receive the information about the “identity of the searched part” and then start the action of “looking around.” When they receive the information about the “location of the searched part,” the action ends. The model proposed in [11] could be a valid method to analyze all the work instructions in an assembly manual. Consequently, in an ARTD, each resulting piece of information (to start and to end) could be conveyed through a visual asset. However, we observed that many of the pieces of information listed in [11] are similar even if they belong to different Therbligs; thus, the visual asset could be the same. For example, the Therblig “search” has as information to start the “identity of the searched part.” Similarly, the Therblig “use” has as information to start the “identity of the object to use,” but for both Therbligs the information can be generalized as the “identity of an object” and can be conveyed through the same type of visual asset. Therefore, in this work we further extended the model proposed by Oyekan et al. [11] by clustering the information to start and to end associated with the different Therbligs. As a result, we obtained six mutually exclusive and jointly exhaustive classes that we called “Information Types”: identity, location, way-to, notification, order, and orientation. The result of the clustering is presented in Table 1. We then propose to analyze work instructions by performing the following steps: 1) decomposing each work instruction into single small tasks; 2) for each task, determining the set of information needed, using Table 1; 3) associating each piece of information with one of the six information types.

2.2 Case Study

We show how to apply the proposed information model to the case study of the assembly instructions of a “model pick-up truck.” The entire production cycle of the pick-up truck is carried out in the C-Factory, a laboratory of the University of Applied Sciences Würzburg-Schweinfurt, where it is possible to simulate in miniature an entire production cycle in a smart factory. The assembly instructions of the pick-up truck contain all six information types. The original manual is composed of nine instructions, which are presented to the operator in the C-Factory through text and drawings in a Microsoft PowerPoint document. Each instruction is composed of one elemental task. We analyzed each of these instructions, extracting the information set needed to accomplish the task using the six information types proposed in this work. In the analysis presented in Table 2, we did not consider the information for which a visual asset is not needed, such as the notification that an object has been assembled. This information is directly acquired by the operator while performing the task.


Table 1. Result of the clustering of the Therblig and relative information (information to end has a gray background to distinguish from information to start) into information types IDENTITY Identity of the searched part ASSEMBL. Recognizable characteristic of the FIND FIND searched part TR. LOADED Identity of the part to move DISASSEM. SELECT Criteria of selection INSPECT GRASP Identity of the object to grasp PREPOS. HOLD Identity of the object to hold INSPECT USE Identity of the object to use REL. LOAD POSITION Identity of the part to position LOCATION TR. LOADED Location of the part TR. LOADED Location of the part that matches the TR. EMPTY SELECT criteria SEARCH WAY-TO GRASP The way to grasp it USE ASSEMBL. The way to assemble them DISASSEM. NOTIFICATION Confirmation that the object is GRASP REL. LOAD grasped Confirmation that the part has TR. LOADED INSPECT reached its destination Confirmation that the hand has TR. EMPTY POSITION reached its destination UN. DELAY Problem occurring: shortage, delay AV. DELAY Confirmation that the parts are asASSEMBL. UN. DELAY sembled Confirmation that the use motion is USE AV. DELAY finished Confirmation that the parts are disDISASSEM. PREPOSIT. assembled REST Need for the worker to rest REST ORDER HOLD Order to hold PLAN HOLD Order to do the next motion ORIENTATION POSITION Initial orientation of the part PREPOSIT. POSITION Desired orientation PREPOSIT. SEARCH

Identities of the Parts to assemble Matched characteristic of a part in the search area Identities of the Parts to disassemble Identity of the object Identity of the part to position predetermined standard object to release

Destination of the part Destination of the hand Location of the searched part The way to use it The way to disassemble them Confirmation that the object has been released Result of the comparison between the object and the standard Confirmation that the part is oriented as desired Problem occurring Information that the problem has been solved Information that the problem has been solved Confirmation that the part is oriented as desired End of the resting period Structured sequence defining how things are going to happen Initial orientation of the part Desired orientation


Table 2. Result of the analysis of the work instructions contained in the case study of the pick-up truck manual.

Step 1 – Work instruction: “To start the assembly you should have the following components:” Task: “check to have the components”.
Information types: IDENTITY of the ladder frame; IDENTITY of the truck bed; IDENTITY of the 3 wheel axles; IDENTITY of the cab; IDENTITY of the 6 axle carriers; IDENTITY of the 12 screws.

Step 2 – Work instruction: “Please fit the assembly device to your pick-up size. Adjust the platform support to 17.5 mm by turning the lower knurled nut 1. Clamp the position by turning the upper knurled nut 2.” Task: “fit the assembly device to the pick-up size”.
Information types: IDENTITY of the lower knurled nut 1; LOCATION of the lower knurled nut 1; WAY TO adjust the platform support (by turning the lower knurled nut 1); IDENTITY of the height of 17.5 mm (predetermined standard); NOTIFICATION that the height is reached; IDENTITY of the upper knurled nut 2; LOCATION of the upper knurled nut 2; WAY TO clamp the position (by turning the upper knurled nut 2); NOTIFICATION that the assembly device is clamped.

Step 3 – Work instruction: “Move the slide into the displayed position.” Task: “move the slide”.
Information types: IDENTITY of the slide; LOCATION of the slide (initial position); WAY TO move the slide (translation); LOCATION of the slide (final position); NOTIFICATION that the final position is reached.

Step 4 – Work instruction: “Insert your truck bed into the assembly device as shown.” Task: “insert the truck bed”.
Information types: IDENTITY of the truck bed; LOCATION of the truck bed on the assembly device; ORIENTATION of the truck bed; WAY TO insert the truck bed.

Step 5 – Work instruction: “Place the cab into the displayed position.” Task: “place the cab”.
Information types: IDENTITY of the cab; LOCATION of the cab on the assembly device; ORIENTATION of the cab; WAY TO place the cab.

Step 6 – Work instruction: “Place the ladder frame onto the cab and truck bed with the inscription facing upwards. Make sure to align bores.” Task: “place the ladder frame”.
Information types: IDENTITY of the ladder frame; LOCATION of the ladder frame on the cab and truck bed; ORIENTATION of the ladder frame (inscription upwards); WAY TO place the ladder frame; NOTIFICATION: inspect that bores are aligned.

Step 7 – Work instruction: “Add the axles by inserting the tires into the wheel-arches provided.” Task: “add the axles”.
Information types: IDENTITY of the axles; LOCATION of the (three) wheel-arches; WAY TO add the axles.

Step 8 – Work instruction: “Position the axle carriers on the axles. Tighten all screws on the truck, starting from rear to front.” Task: “position the axle carriers”.
Information types: IDENTITY of the screws; LOCATION of the (twelve) screws on the axle carrier; WAY TO tighten the screws; ORDER of tightening (from rear to front).

Step 9 – Work instruction: “You completed the assembly of your pick-up truck. Please remove your truck from the assembly device.” Task: “remove the truck from the assembly device”.
Information types: NOTIFICATION: the assembly is completed; IDENTITY of the truck; WAY TO remove the truck.
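To show how the three-step analysis of Sect. 2.1 could be captured in software, the sketch below models a work instruction as a task with a set of typed information items, using step 3 of Table 2 as an example. The class and field names are illustrative assumptions added for this write-up, not part of the published method.

```python
# Illustrative sketch of the information model: a work instruction is decomposed
# into a task and a set of elemental information items, each tagged with one of
# the six information types proposed in the paper.
from dataclasses import dataclass
from enum import Enum, auto


class InfoType(Enum):
    IDENTITY = auto()
    LOCATION = auto()
    ORDER = auto()
    WAY_TO = auto()
    NOTIFICATION = auto()
    ORIENTATION = auto()


@dataclass
class InfoItem:
    info_type: InfoType
    description: str


@dataclass
class WorkInstruction:
    step: int
    text: str
    task: str
    info_items: list[InfoItem]


# Step 3 of the pick-up truck manual, encoded with the proposed information types.
step3 = WorkInstruction(
    step=3,
    text="Move the slide into the displayed position.",
    task="move the slide",
    info_items=[
        InfoItem(InfoType.IDENTITY, "the slide"),
        InfoItem(InfoType.LOCATION, "slide (initial position)"),
        InfoItem(InfoType.WAY_TO, "move the slide (translation)"),
        InfoItem(InfoType.LOCATION, "slide (final position)"),
        InfoItem(InfoType.NOTIFICATION, "the final position is reached"),
    ],
)

for item in step3.info_items:
    print(f"{item.info_type.name}: {item.description}")
```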

3 Recommendation for the Choice of the Visual Assets

We set up a focus group of ten people (2 females, 24 to 48 years old, mean = 35.7, SD = 9.42), selected from among experts in the design of ARTD. First, we presented the six information types through examples of instructions taken from real manuals, including the case study. Then, we presented a list of eight types of visual assets that can be used to convey information in industrial AR interfaces: text, signs, photographs, videos, drawings, technical drawings, product models, and auxiliary models. Each type of visual asset can provide different information according to its design properties, such as the frame of reference (screen/world fixed), color coding, and animations. The list


of the visual assets and their properties derives from a literature survey on industrial AR interfaces [12], where more details can be found. This taxonomy was presented to the users in the focus group to avoid misunderstandings among them. Based on their previous experience in authoring AR interfaces, they were asked to discuss the visual assets for every information type. After the discussion, each member of the focus group was asked to propose the most suitable visual asset and its properties. Users were asked to justify their choice and not to reveal it to the other users before everyone had chosen. Then, the two (or more, in case of a tie) most preferred visual assets were revealed, and each user was asked to choose one of them. In this way, the most voted visual asset, for each information type, was defined as the one proposed by the focus group. We report below the main points covered in the discussion and the visual asset proposed for every information type. In the examples provided in the figures, we highlighted the proposed visual asset with a yellow sketch to distinguish it from the CAD model of the assembly device, used to simulate the real object that was unavailable due to COVID-19.

3.1 Identity

According to the focus group, the most suited visual asset for the identity information type is a drawing, screen-fixed, static, with the colors of the real object to be identified (Fig. 1). Users said that there is no need to use a world-fixed visual asset, and for this reason they excluded the product model, which additionally requires the availability of the CAD model. An alternative to the drawing could be the photograph. However, users considered that it might not be possible to acquire pictures of the real objects (e.g., because of safety or privacy policies). Furthermore, the drawing also makes it possible to convey the identity of a measure (e.g., a tightening torque) by adding text to it. As to the video, users did not find the dynamic content offered by a video an exploitable advantage for this information type. Text is the visual asset used in TTD for this information type, but it requires a higher cognitive load than a drawing for operators to understand a component. Finally, the users in the focus group agreed that some visual

Fig. 1. Example of displaying identity information through a drawing (highlighted in yellow) screen-fixed, static, with the colors of the real truck bed.


assets could not convey the identity information type. For example, signs and technical drawings can express only a limited set of information, whereas auxiliary models only make sense when coupled with a real object.

3.2 Location

The most suited visual asset for the location information type turned out to be the auxiliary model, world-fixed, without color coding and static (Fig. 2). Users agreed that color coding and animation do not contribute significantly to conveying the information. On the contrary, using a world-fixed visual asset is crucial to highlight the location in the real world. For this reason, they excluded all the visual assets that may occlude a large portion of the real world: photograph, drawing, technical drawing, and video. As to the product model, users pointed out that it could convey more information than just the location, which may cause information overload. For text and signs, users made considerations similar to those expressed for the identity information type.

Fig. 2. Example of displaying location information through an auxiliary model (highlighted in yellow) world-fixed, without color coding and static.

3.3 Way-to

For the way-to information type, the focus group chose a product model, world-fixed, without color coding (Fig. 3). The use of animations provides a powerful preview of the operation to accomplish: operators watch the animation and then only have to replicate what they have watched. For this reason, all the visual assets that could not produce this result were rejected: drawing, technical drawing, photograph, sign, and text. The operation preview could also be achieved through a video, but CAD models benefit from being registered to the real components to be handled. Finally, an animated auxiliary model could not convey this type of information for complex tasks where the reproduction of the real components is needed.


Fig. 3. Example of displaying way-to information through the animation of a product model (highlighted in yellow) world-fixed, without color coding; on the left, it is displayed the initial position of the animation while on the right it is displayed the final position.

3.4 Notification

The notification information type is the one that required the greatest effort to reach agreement among users in the focus group. They argued that in many cases a visual asset is not needed to convey this information, because the operator can autonomously perceive whether an operation was carried out well. For example, for step 2 of the proposed case study, the notification that “the assembly device is clamped” does not require a visual asset, but is left to the operator’s senses. However, there are some cases where the notification requires a visual asset, for example the instruction “make sure to align bores” in step 6 of the proposed case study. For this and similar cases, the focus group decided that a screen-fixed 2D text, without color coding and not animated, is enough (Fig. 4).

Fig. 4. Example of displaying notification information through a text (highlighted in yellow) screen-fixed, without a color coding and static.

3.5 Order

To specify the order information type, i.e., an assembly sequence, the focus group proposed the use of a text (2D or 3D), world-fixed, without color coding, and animated so that it always faces the device camera even when the camera moves with respect to the real world (Fig. 5).


Fig. 5. Example of displaying order information through a text (highlighted in yellow) world-fixed, without color coding and animated.

However, users said that this information is usually coupled with the way-to information in work instructions. In these cases, the animation of the product models may implicitly convey the order information too, and the text would then be redundant. The second most preferred visual asset was the auxiliary model, in the form of animated arrows pointing from one real component to the next in the sequence. However, the focus group considered that if two components are far apart, the information conveyed by the auxiliary model may be too hard to understand. All the other visual assets were not considered.

3.6 Orientation

To convey the orientation information type, the focus group argued that a product model, world-fixed, without color coding and static (Fig. 6), was the best solution. It provides a preview of the orientation that a component should have in the assembly. Users in the focus group made considerations similar to those about the way-to information type.

Fig. 6. Example of displaying orientation information through a product model (highlighted in yellow) world-fixed, without a color coding and static.


However, the two information types could not be merged because the way-to is richer than the orientation, including the way to grasp or use a particular tool. For this reason, it required the animation of the product model, according to the focus group.

4 Conclusion and Future Work

We presented a technique to convert work instructions in TTD to visual assets for ARTD. We aimed at answering the following research questions: 1) "What is the elemental information contained in work instructions?" 2) "Which virtual content is recommended for each information type?". We answered our first research question by proposing an information model based on the decomposition of a work instruction into six information types: identity, location, order, way-to, notification, and orientation. This information model, extending the method of "Therbligs" presented by Ferguson [10] and already used by Oyekan et al. [11], was successfully applied to the case study of an assembly manual for the model of a pick-up truck. Interestingly, the six information types were enough to analyze all the instructions contained in the TTD to be converted. The answer to the second research question derived from the discussion and evaluation provided by a focus group of experts in the design of ARTD. Their technical considerations allowed us to restrict the possible choices of virtual content. We asked them to propose a candidate visual asset for each of the six information types, and the results are reported in Table 3.

Table 3. Visual assets proposed by the focus group for the six information types

Information type   Visual asset      Frame of reference   Color coding   Animation
Identity           Drawing           Screen-fixed         Yes            No
Location           Auxiliary model   World-fixed          No             No
Way-to             Product model     World-fixed          No             Yes
Notification       Text              Screen-fixed         No             No
Order              Text              World-fixed          No             Yes
Orientation        Product model     World-fixed          No             Yes

This work revealed that the use of product models in ARTD cannot be generalized to all instructions. Even if previous works showed that AR interfaces based on product models are the most common [12], the most engaging [13], and reduce error rates [9], the literature on this topic also revealed some disadvantages of their use [14, 15]. Furthermore, in some cases, a product model may convey a higher quantity of information than needed to accomplish a task correctly [16]. The technical considerations that we collected from the experts identified some information types for which it is advisable to use other visual assets.
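As an illustration, the mapping summarized in Table 3 could be encoded as a simple lookup in an ARTD authoring tool. The following C# sketch uses hypothetical type and member names that are not part of the paper:

using System;
using System.Collections.Generic;

// Hypothetical encoding of the Table 3 mapping; names are illustrative only.
enum InformationType { Identity, Location, WayTo, Notification, Order, Orientation }
enum VisualAsset { Drawing, AuxiliaryModel, ProductModel, Text }
enum FrameOfReference { ScreenFixed, WorldFixed }

sealed record AssetRecommendation(
    VisualAsset Asset, FrameOfReference Frame, bool ColorCoding, bool Animated);

static class VisualAssetCatalog
{
    // One entry per row of Table 3.
    static readonly Dictionary<InformationType, AssetRecommendation> Map = new()
    {
        [InformationType.Identity]     = new(VisualAsset.Drawing,        FrameOfReference.ScreenFixed, true,  false),
        [InformationType.Location]     = new(VisualAsset.AuxiliaryModel, FrameOfReference.WorldFixed,  false, false),
        [InformationType.WayTo]        = new(VisualAsset.ProductModel,   FrameOfReference.WorldFixed,  false, true),
        [InformationType.Notification] = new(VisualAsset.Text,           FrameOfReference.ScreenFixed, false, false),
        [InformationType.Order]        = new(VisualAsset.Text,           FrameOfReference.WorldFixed,  false, true),
        [InformationType.Orientation]  = new(VisualAsset.ProductModel,   FrameOfReference.WorldFixed,  false, true),
    };

    public static AssetRecommendation For(InformationType type) => Map[type];
}

Such a lookup would let an authoring tool suggest a default visual asset for each elemental piece of information extracted from a TTD instruction.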


In this work, we did not consider combining the visual assets into a complete work instruction. As we can see from Table 2, an instruction is usually composed of more than one piece of information. Therefore, according to the proposed technique, in the ARTD it would be converted through many visual assets. However, most of them may be redundant. For example, an instruction may contain both the information about the way-to assemble an object and the identity of that object. Using the product model to convey the way-to information makes the identity information, conveyed by the drawing, redundant. Furthermore, there could be some cases where the information does not require the use of a visual asset and can be obtained through object affordance or operator experience, as pointed out in [16]. In future work, we will take these considerations into account, showing how to apply the technique proposed in this work to a complete ARTD.

References
1. Geng, J., et al.: A systematic design method of adaptive augmented reality work instruction for complex industrial operations. Comput. Ind. 119, 103229 (2020). https://doi.org/10.1016/j.compind.2020.103229
2. Lechner, M.: ARML 2.0 in the context of existing AR data formats. In: 2013 6th Workshop on Software Engineering and Architectures for Realtime Interactive Systems, SEARIS 2013; Co-located with the 2013 Virtual Reality Conference - Proceedings (2013). https://doi.org/10.1109/SEARIS.2013.6798107
3. Davila Delgado, J.M., Oyedele, L., Beach, T., Demian, P.: Augmented and virtual reality in construction: drivers and limitations for industry adoption. J. Constr. Eng. Manag. (2020). https://doi.org/10.1061/(asce)co.1943-7862.0001844
4. Tainaka, K., et al.: Guideline and tool for designing an assembly task support system using augmented reality. In: Proceedings - 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020, pp. 486–497 (2020). https://doi.org/10.1109/ISMAR50242.2020.00077
5. Chu, C.H., Liao, C.J., Lin, S.C.: Comparing augmented reality-assisted assembly functions - a case study on Dougong structure. Appl. Sci. 10 (2020). https://doi.org/10.3390/APP10103383
6. Stork, S., Schubö, A.: Human cognition in manual assembly: theories and applications. Adv. Eng. Informat. 24, 320–328 (2010). https://doi.org/10.1016/j.aei.2010.05.010
7. Macallister, A., et al.: Comparing visual assembly aids for augmented reality work instructions. In: Interservice/Industry Train, pp. 1–14 (2017)
8. Radkowski, R.: Investigation of visual features for augmented reality assembly assistance. In: Shumaker, R., Lackey, S. (eds.) VAMR 2015. LNCS, vol. 9179, pp. 488–498. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-21067-4_50
9. Jasche, F., Hofmann, S., Ludwig, T.: Comparison of different types of augmented reality visualizations for instructions. In: Conference on Human Factors in Computing Systems - Proceedings (2021). https://doi.org/10.1145/3411764.3445724
10. Ferguson, D.: Therbligs: the keys to simplifying work. The Gilbreth Network (2000). https://gilbrethnetwork.tripod.com/therbligs.html. Accessed 25 Jan 2022
11. Oyekan, J., Hutabarat, W., Turner, C., Arnoult, C., Tiwari, A.: Using Therbligs to embed intelligence in workpieces for digital assistive assembly. J. Ambient. Intell. Humaniz. Comput. 11(6), 2489–2503 (2019). https://doi.org/10.1007/s12652-019-01294-2


12. Gattullo, M., Evangelista, A., Uva, A.E., Fiorentino, M., Gabbard, J.: What, how, and why are visual assets used in industrial augmented reality? A systematic review and classification in maintenance, assembly, and training (from 1997 to 2019). IEEE Trans. Vis. Comput. Graph. (2020). https://doi.org/10.1109/TVCG.2020.3014614
13. Gattullo, M., et al.: Design preferences on Industrial Augmented Reality: a survey with potential technical writers. In: Adjunct Proceedings of the 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR-Adjunct 2020, pp. 172–177 (2020). https://doi.org/10.1109/ISMAR-Adjunct51615.2020.00054
14. Radkowski, R., Herrema, J., Oliver, J.: Augmented reality-based manual assembly support with visual features for different degrees of difficulty. Int. J. Hum. Comput. Interact. 31, 337–349 (2015). https://doi.org/10.1080/10447318.2014.994194
15. Gattullo, M., Scurati, G.W., Evangelista, A., Ferrise, F., Fiorentino, M., Uva, A.E.: Informing the use of visual assets in industrial augmented reality. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 106–117. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4_10
16. Laviola, E., Gattullo, M., Manghisi, V.M., Fiorentino, M., Uva, A.E.: Minimal AR: visual asset optimization for the authoring of augmented reality work instructions in manufacturing. Int. J. Adv. Manuf. Technol. 119, 1769–1784 (2021). https://doi.org/10.1007/s00170-021-08449-6

Immersive Virtual Reality as a Training Tool for Safety Working Procedure in Confined Spaces

Vito Modesto Manghisi1(B), Alessandro Evangelista1, Veronica Rossano2, Claudia Giliberti3, Raffaele Mariconte3, Maurizio Diano4, Valerio Galasso5, and Antonio Emmanuele Uva1

1 Department of Mechanics, Mathematics, and Management, Polytechnic University of Bari, Bari, Italy
[email protected]
2 Department of Computer Science, University of Bari, Bari, Italy
3 Dipartimento Innovazioni Tecnologiche e Sicurezza degli Impianti, Prodotti ed Insediamenti Antropici, Inail, Rome, Italy
4 Direzione Regionale Calabria, Unità Operativa Territoriale Inail Catanzaro, Catanzaro, Italy
5 Direzione Regionale Puglia, Unità Operativa Territoriale Inail Taranto, Taranto, Italy

Abstract. Confined spaces cause fatal and serious injuries that tragically recur with similar dynamics every year. A survey carried out on the Italian territory by the National Institute for Insurance against Accidents at Work, covering the years from 2001 to 2019, shows that a total of 184 accidents occurred in confined spaces, mostly because workers were poorly informed and trained, the risk assessment was lacking, and the provisions of the law were not respected. Fatal events often affect both those involved in the primary incident and those who intervene in an attempt to assist. Consequently, it is necessary to apply a proper training process and extend it to all those involved in operations both inside and next to confined spaces. Traditionally, the training approach consists of classroom lectures and simulations in real environments. However, real simulation scenarios are costly and time-consuming, as they require expensive purpose-built physical simulators. To overcome these limitations, we designed and implemented an Immersive Virtual Reality-based platform to support and reinforce the traditional confined space safety training approach. Finally, we defined an experimental validation procedure.

Keywords: Immersive Virtual Reality · Safety training · Confined spaces · Work safety · Usability

1 Introduction

The Occupational Safety and Health Administration (OSHA) defines a confined space as an area with a limited opening that is large enough for workers' entry and exit and that


is not designed for continuous occupancy. Working in confined spaces may expose workers to hazards that could exist even before work begins. For example, a confined space could have an oxygen-deficient and/or a flammable/explosive atmosphere due to the lack of natural ventilation. In addition, other risks could arise during work execution, such as falls, contact with inadequately protected sources of matter and energy, and gases and vapors released during the work activity. Most of these risks are related to the lack or improper use of Personal Protective Equipment (PPE) and to communication and rescue issues. Although the identification and classification of confined space hazards resulted in high regulatory and safety standards, fatal accidents involving confined spaces continue to occur. A recent survey carried out on the Italian territory by the National Institute for Insurance against Accidents at Work (INAIL), covering the years from 2001 to 2019, provides useful information on the scale and circumstances of the phenomenon. This survey reveals that a total of 184 accidents occurred in confined spaces. Furthermore, 103 accidents out of 184 were fatal and caused the death of 151 workers [1, 2]. Most of these accidents occurred inside cisterns or storage tanks, sometimes with the release of poisonous gases. Based on these statistics and the most recent events, on average five confined space accidents cause the death of eight workers in Italy every year. Fatal events often involve people who do not work directly in the confined space but, from the outside, attempt to rescue the injured worker. The use of incorrect and improvised rescue procedures causes secondary accidents. Unfortunately, confined space accidents follow the same patterns in every highly industrialized nation. According to a fact sheet developed by the US Bureau of Labor Statistics [3], from 2011 to 2018, 1,030 workers died in the US due to occupational injuries involving a confined space. Among these deaths, 156 were caused by falls to a lower level, 126 by inhalation of a harmful substance, 39 by depletion of oxygen, 29 by drowning, 98 by engulfment in collapsing materials, 56 by fire and explosions, and 56 by being caught in running machinery. Taking all necessary precautions, such as preparing the work area and providing all appropriate PPE for that type of confined space activity, is not enough to avoid all risks, especially the unpredictable ones. For this reason, the responsiveness of workers and their ability to quickly recognize and analyze the magnitude of the hazard is crucial for making appropriate real-time decisions when an incident occurs [4]. Analysis of confined space accidents, which tragically recur with similar dynamics, shows that workers are poorly informed and trained, that risk assessment is lacking, and that standard safety procedures are not fulfilled. In order to reduce the frequency of confined space accidents, the workers involved must be properly trained and informed about the hazards associated with the specific activity. Furthermore, they must be familiar with the emergency procedures to be implemented in imminent danger. Therefore, the training process should cover both the workers who enter confined spaces and the assistants who monitor the worker from the outside and intervene in rescue. Traditionally, the education and training process consists of face-to-face lectures in classrooms or of training on maintenance and emergency procedures using simulation in real environments [5]. In the latter case, it requires real resources (people and equipment)


that may not be immediately available. Moreover, even when people and equipment are available, the number of exercises is usually limited by the time required to organize and execute a practical training session as well as by the high hourly cost of people and equipment. Nowadays, thanks to the technological development of Immersive Virtual Reality (IVR), educational platforms can be custom-designed, allowing the user to learn training procedures while experiencing different dangerous scenarios with the possibility of risk-free failure. Several studies show that virtual training environments improve knowledge transfer and retention outcomes [6–8]. However, results in the literature are conflicting, highlighting the need for thorough validation of the effectiveness of these technologies in the field of occupational safety [9]. Many works in the literature investigated IVR as a tool to train workers to deal with emergencies that occur in different industrial scenarios. For example, Colombo et al. [10] studied the effectiveness of IVR as an education and training tool by simulating a chemical plant accident. The results of this preliminary work show the benefits of using IVR to make operators more responsive to emergencies by improving and speeding up the decision-making process after the incident. Similarly, Nazir and Manca [11] exploited IVR to combine a process and an incident simulator to dynamically replicate both normal and incident scenarios in an industrial plant. The authors highlighted the potential of IVR for training, spatial learning, knowledge enhancement, and performance evaluation. Recently, IVR training solutions were developed specifically for confined space scenarios. Di Donato et al. [12, 13] designed a joint system based on IVR and a physical simulator to train operators in confined spaces. Based on a serious game approach, Lu et al. [14] developed an IVR confined space rescue training scenario, showing good acceptance by the workers and significantly improving their knowledge about confined space rescue. Although IVR technology makes it possible to overcome the limitations of traditional safety training practices for operations in confined environments, the analysis of the literature reveals the need to evaluate the acceptability and effectiveness of this innovative technology as a tool supporting the traditional training approach. To this end, we designed and implemented the SPIRiT platform (Secure Personalized Immersive Reality innovative Trainer). SPIRiT aims to overcome the limitations of the traditional training model by enabling virtual training sessions in synthetic environments that accurately simulate real-world confined space interventions. This training approach has two main advantages. First, educational platforms can be specifically developed, allowing the user to learn training procedures by simulating dangerous situations with risk-free failure. Second, it can be widely adopted, as it runs on cost-effective room-scale VR systems. In addition, we designed a specific validation procedure to evaluate the effectiveness of such a platform as an educational and training tool. In the remaining sections of this paper, we present our approach to the platform design, the implementation choices, and the key features. We also describe the experiment design for the planned validation.


2 The SPIRiT Design

The SPIRiT platform is a research project funded by the INAIL BRIC 2019 grant. This project aims to provide and validate an alternative tool to support training on safety procedures in confined spaces that goes beyond the traditional method based on face-to-face lessons. Once the system is validated, it can be scaled up to allow widespread use by training institutions. The structure of the training path was defined in collaboration with the staff of the Service for Prevention and Safety in the Workplace (SPESAL) of the Apulia Region in order to identify the most effective training methods and scenarios. The IVR training process has three main phases. After a preliminary introduction to safety measures in confined spaces, the first phase presents the Personal Protective Equipment (PPEs) and other equipment for risk exposure reduction. The virtual scene of the presentation consists of a custom industrial container where PPEs are positioned around the user and described by a guiding voice that explains the PPEs' functionalities and use (Fig. 1).

Fig. 1. The virtual scene for the PPEs description.

The second phase involves training in three different immersive virtual scenarios. The scenarios were chosen to provide an example for each of the risk categories provided by the DHHS (NIOSH) Publication Number 80-106 – “Criteria for a Recommended Standard: Working in Confined Spaces”. In these scenarios, a guiding voice provides instructions on the operational procedure to be performed. The operator must perform those instructions, whereas the system automatically checks the correct execution order and notifies errors. The third phase is a test phase used to evaluate the skills acquired by the users and aims at validating the effectiveness of the proposed training platform. In this phase, the user must perform the operations foreseen in the intervention operating procedure without the support of a guiding voice. The system automatically notifies the operator errors and records them together with the execution times.
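As an illustration of this check-and-notify behaviour, a procedure step checker could be sketched in plain C# as follows; class and member names are illustrative and do not reflect the actual SPIRiT implementation.

using System;
using System.Collections.Generic;

// Illustrative sketch of an execution-order checker for the test phase.
public class ProcedureChecker
{
    private readonly List<string> _expectedSteps;   // ordered operating procedure
    private int _nextIndex;                         // next step the trainee must perform
    public int ErrorCount { get; private set; }

    public ProcedureChecker(IEnumerable<string> expectedSteps)
    {
        _expectedSteps = new List<string>(expectedSteps);
    }

    // Called whenever the trainee performs an operation in the virtual scenario.
    // Returns true if the operation is the expected next step.
    public bool ReportOperation(string operationId)
    {
        if (_nextIndex < _expectedSteps.Count && operationId == _expectedSteps[_nextIndex])
        {
            _nextIndex++;
            return true;
        }
        ErrorCount++;                 // wrong step or wrong order
        NotifyError(operationId);     // e.g., an audio cue or on-screen message in the scenario
        return false;
    }

    public bool Completed => _nextIndex == _expectedSteps.Count;

    private void NotifyError(string operationId) =>
        Console.WriteLine($"Error: '{operationId}' performed out of sequence.");
}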


2.1 Scenarios and User Tasks Identification Criteria

The three virtual scenarios were identified and developed to cover the three types of confined space identified in the document "Criteria for a Recommended Standard: Working in Confined Spaces", DHHS (NIOSH) Publication Number 80-106 [15]. The categories of confined spaces and their respective virtual scenarios are:

1. A Class A environment: a confined space that presents a high and immediate risk to the health and life of the worker. The risks include lack of oxygen, flammable or explosive atmospheres, and high concentrations of toxic materials. The chosen virtual scenario corresponds to an underground technical compartment where it is necessary to carry out operations to close a post valve on a steam pipeline. In addition, the spray painting of a pipe inside the technical compartment is required.
2. A Class B environment: a confined space that can lead to injury and illness if preventive measures are not taken but is not immediately dangerous to life and health. The chosen virtual scenario corresponds to a silo for the fermentation of wine must, where processing residues must be removed by manual washing. The silo is accessed through a trapdoor from the top, and the material is removed with a water jet.
3. A Class C environment: a confined space where the risk is negligible, does not affect the normal course of work, and is not expected to worsen. The chosen virtual scenario corresponds to an edible oil storage silo. The silo must be cleaned by manual washing and inspected for its subsequent revamping. The space is accessed from the floor level and is cleaned with a high-pressure water jet.

User tasks implemented within the virtual scenarios were defined according to the best practices provided by INAIL and SPESAL. For each of the scenarios listed above, we created storyboards that included the following critical tasks:

• Hazard recognition for a specific type of confined space;
• Recognition and correct use of PPE;
• Use of gas detector systems;
• Verification of availability of external assistance and communication systems;
• Reaction to hazard warnings;
• Self-rescue and non-entry rescue procedures.

2.2 VR Technologies and Scenario Implementation

The IVR-based training platform was developed to be experienced using a VR Head Mounted Display (HMD), and the interaction system was designed to exploit the features of its handheld controllers. In particular, the IVR system used for the training activity is the HTC Vive Pro Eye (https://www.vive.com/us/product/vive-pro-eye/specs/), a VR room-scale system. The IVR training software was developed using the Unity game engine (https://unity.com/) and the XR Interaction Toolkit framework. The interactive functionalities as well as the user performance logging system (times and errors) were implemented in C#.
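For instance, the logging of per-step times and errors mentioned above could take a shape similar to the following plain C# sketch; the CSV layout and member names are assumptions, not the project's actual code.

using System;
using System.Diagnostics;
using System.IO;

// Illustrative session logger: one CSV row per completed step (duration and error count).
public class SessionLogger : IDisposable
{
    private readonly StreamWriter _writer;
    private readonly Stopwatch _stepTimer = new Stopwatch();

    public SessionLogger(string csvPath)
    {
        _writer = new StreamWriter(csvPath);
        _writer.WriteLine("step;duration_s;errors");
    }

    public void BeginStep() => _stepTimer.Restart();

    public void EndStep(string stepName, int errors)
    {
        _stepTimer.Stop();
        _writer.WriteLine($"{stepName};{_stepTimer.Elapsed.TotalSeconds:F2};{errors}");
    }

    public void Dispose() => _writer.Dispose();
}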


The virtual scenarios were implemented using 3D assets that accurately reproduce the previously identified confined spaces. Some of the 3D assets were modeled from technical drawings provided by the manufacturing companies (e.g., the silo for wine production in Fig. 2). The virtual scenarios also include rigged 3D humanoid avatars that interact with the user. Specific sets of avatar animations were created using a Kinect V2-based motion capture system [16]. Finally, special care was taken to reproduce the audio of real confined spaces in IVR; the recordings were collected and included in the virtual scenarios using spatialized 3D audio simulation.

Fig. 2. The 3D model of the silo for wine production in the virtual scenario for the site pre-intervention arrangement.
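As an example of the spatialized 3D audio mentioned above, a confined-space ambience source could be configured in Unity roughly as follows; the component name, clip, and distance values are placeholders rather than the project's actual code.

using UnityEngine;

// Minimal sketch: plays a looping ambient clip as a fully spatialized (3D) source,
// so its perceived direction and attenuation follow the listener inside the scenario.
public class ConfinedSpaceAmbience : MonoBehaviour
{
    public AudioClip ambienceClip;      // e.g., a recording collected in the real site
    public float minDistance = 1f;      // full volume within this radius
    public float maxDistance = 15f;     // strongly attenuated beyond this radius

    void Start()
    {
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = ambienceClip;
        source.loop = true;
        source.spatialBlend = 1f;                          // 1 = fully 3D
        source.rolloffMode = AudioRolloffMode.Logarithmic; // distance attenuation
        source.minDistance = minDistance;
        source.maxDistance = maxDistance;
        source.Play();
    }
}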

2.3 Locomotion and User Interaction

Since navigation in the virtual environment is one of the most important interaction tasks in IVR, we implemented natural and effective navigation metaphors: walking [17] and teleporting [18]. The most basic and natural form of locomotion is bipedal walking. Compared to other locomotion techniques, bipedal walking engages multiple human senses during interaction: in addition to sight, it involves the proprioceptive and vestibular senses. Therefore, when other locomotion techniques are used in IVR, such as moving through a controller-based system with two-dimensional input, the user receives conflicting motion stimuli from proprioception, the vestibular sense, and sight. According to sensory conflict theory, although the eyes perceive movement in the virtual environment, the vestibular system, which is sensitive to acceleration and orientation, does not perceive the same information, generating a sensation of physical discomfort (such as dizziness and nausea) called cybersickness. Therefore, to avoid cybersickness and preserve the sense of presence in the virtual scenario, we implemented locomotion based on bipedal walking by leveraging the tracking system of the HTC Vive Pro Eye.


However, locomotion by bipedal walking is limited to the available tracking space, about 10 × 10 m of walkable area. Therefore, a walk-only locomotion system is insufficient to satisfy the locomotion needs of environments larger than 10 × 10 m. In order to overcome this limitation, an additional locomotion system based on the point-and-teleport technique was implemented. In particular, Point & Teleport (Fig. 3) is a locomotion technique that overcomes the hardware problem of limited tracking areas by allowing users to teleport to selected target locations using a controller.

Fig. 3. Point & Teleport: on the left, the pointing action; on the right, the user is teleported to the destination point.
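As an illustration of the Point & Teleport behaviour shown in Fig. 3, a bare-bones Unity sketch could look like the following; the input binding, object names, and flat-floor assumption are ours, and a production implementation (e.g., with the XR Interaction Toolkit) would also draw the pointing arc and validate destinations.

using UnityEngine;

// Minimal sketch: a ray from the controller selects a destination on the floor;
// on button press the whole VR rig is moved there.
public class PointAndTeleport : MonoBehaviour
{
    public Transform controller;     // tracked controller pose
    public Transform rig;            // root of the VR rig (camera + controllers)
    public LayerMask floorMask;      // layers considered valid teleport targets
    public float maxDistance = 20f;

    void Update()
    {
        if (!Physics.Raycast(controller.position, controller.forward,
                             out RaycastHit hit, maxDistance, floorMask))
            return;

        // "Fire1" is a placeholder binding; a real project maps the controller trigger.
        if (Input.GetButtonDown("Fire1"))
        {
            Vector3 destination = hit.point;
            destination.y = rig.position.y;  // keep the rig height, assuming a flat floor
            rig.position = destination;
        }
    }
}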

Similar to locomotion, interaction and object manipulation are key tasks in any interactive IVR system. Therefore, we implemented two standard metaphors: virtual-hands [19] and Raycasting-based techniques [20]. The virtual-hands technique provides an isomorphic mapping between the user's real and virtual hands, allowing for natural interactions such as grabbing. The virtual hands (Fig. 4) consist of 3D models that accurately reproduce human hands in size and appearance and provide opening and closing finger animations to visually simulate grasping and releasing objects in the virtual scene. The user employs two handheld controllers (one for each hand) for positioning, orienting, and grasping operations.

Fig. 4. Virtual Hands interaction.

Although the virtual-hands metaphor allows for a large number of interactions, unfortunately, interaction with distant or non-naturally graspable objects (such as large objects)


is not very effective. For this reason, we have developed a laser-pointer metaphor based on the Raycasting technique (Fig. 5).

Fig. 5. Raycasting Laser Pointer interaction.

Raycasting (not to be confused with the rendering technique of the same name) is a group of interaction techniques used in IVR for distant target selection. We implemented a Raycasting-based virtual pointer as a laser beam whose origin and orientation are defined by the input device (controller) with 6 degrees of freedom. When the laser beam intersects a target object, it can be selected. If the laser beam intersects multiple targets, the one closest to the user is selected.
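The closest-target rule described above could be implemented along these lines in Unity; the class and field names are illustrative, and selectable objects are assumed to carry colliders on a dedicated layer.

using UnityEngine;

// Minimal sketch of laser-pointer selection: cast a ray from the controller and,
// if several selectable objects are hit, pick the one closest to the user.
public class LaserPointerSelector : MonoBehaviour
{
    public Transform controller;      // 6-DOF input device pose
    public LayerMask selectableMask;  // layer of selectable targets
    public float maxDistance = 30f;

    public GameObject FindTarget()
    {
        Ray ray = new Ray(controller.position, controller.forward);
        RaycastHit[] hits = Physics.RaycastAll(ray, maxDistance, selectableMask);

        GameObject closest = null;
        float closestDistance = float.MaxValue;
        foreach (RaycastHit hit in hits)
        {
            if (hit.distance < closestDistance)
            {
                closestDistance = hit.distance;
                closest = hit.collider.gameObject;
            }
        }
        return closest;  // null if the beam intersects no target
    }
}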

3 The Design of Experiment

To evaluate the effectiveness of the IVR-based training platform as a support tool for education and training, we designed a specific validation procedure based on the Kirkpatrick model [21]. Kirkpatrick designed a specific approach to evaluate the evidence supporting any particular training program. He presented a 4-level evaluation model that comprises a reaction level, a learning level, a behavior level, and a results level. The first level of assessment –reaction– aims to collect and evaluate the impressions of learners completing a post-course evaluation of the training program. This evaluation measures participants' interest, motivation, and attention levels. The second level –learning– involves measuring what participants have learned in terms of knowledge and/or skills. The third level –behavior or performance– involves assessing the trainee's ability to use the learned knowledge or skills in the workplace. The fourth level –results– measures the overall impact of the training program, including financial or morale impacts. According to this approach, we designed our experimental procedure to evaluate the first three levels of the model. In particular, we aim to assess the effectiveness of the SPIRiT platform as a tool supporting the traditional training approach. To this end, we planned a between-subject experimental design enrolling 20 volunteer participants


who already attended a traditional training program dealing with safety procedures in confined spaces. We planned to randomly split participants into two groups, a test group and a control group, with a homogeneous distribution in terms of knowledge in the specific application field. The test group will undergo the full IVR training program, while the control group will undergo only its third phase (i.e., the test phase).

4 Metrics and Procedure

We will apply a differential assessment between the two groups at each of the three Kirkpatrick model levels to gain evidence supporting the IVR approach. We planned to use the following measurement tools.

• For Kirkpatrick model level one: the System Usability Scale (SUS) [22], a widely used ten-item standardized questionnaire to assess perceived usability; the User Experience Questionnaire (UEQ) [23], which measures both pragmatic and hedonic quality aspects of the user experience and allows gathering further feedback on system acceptability; and the Intrinsic Motivation Inventory questionnaire (IMI) [24], which determines the subjects' interest/enjoyment, perceived competence, effort, value/usefulness, felt pressure, and perceived choice while performing a given activity.
• For Kirkpatrick model level two: a custom questionnaire (CQ) with multiple-choice questions assessing the participants' knowledge of confined space safety procedures. Comparing the results between the two groups will allow evaluating what participants have learned in terms of knowledge and/or skills.
• For Kirkpatrick model level three: execution errors and task completion times during the test phase of the IVR experience. Comparing execution errors and execution times between the two groups will allow assessing the trainee's ability to use the learned knowledge or skills in the workplace.

In addition, we will administer the Simulator Sickness Questionnaire (SSQ) [25] to assess the possible occurrence of cybersickness side effects. We will collect data during the education and training experience according to the following workflow (Fig. 6):

1. At starting time T0, all participants will be administered the CQ questionnaire to split them into two groups homogeneously distributed with respect to it.
2. Subsequently, at time T1, before beginning the IVR experience, test group participants will complete the SSQ and will then experience the first two phases of the education and training experience (i.e., safety measures introduction, PPEs presentation, and guided training). Upon completion of these two phases, at time T2, test group and control group participants will complete the SSQ.
3. After this, participants from both groups will experience the test phase in IVR, where they will have to perform the operational procedures without the help of the guiding voice.


4. At the end of this phase, at time T3, we will collect data on execution errors and task accomplishment times, and we will administer the SSQ, SUS, UEQ, IMI, and CQ questionnaires to both groups.

Fig. 6. Data collection workflow
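For reference, the standard SUS scoring rule used for the level-one assessment can be computed as in the following plain C# sketch, assuming the ten responses have already been collected on the usual 1–5 scale.

using System;

static class SusScore
{
    // Standard SUS scoring: odd items contribute (response - 1), even items (5 - response);
    // the sum over the ten items is multiplied by 2.5, giving a score from 0 to 100.
    public static double Compute(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS requires exactly 10 responses (1-5 each).");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            bool oddItem = (i % 2) == 0;                  // items 1, 3, 5, 7, 9
            sum += oddItem ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;
    }
}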

5 Discussion and Conclusion

This paper presents the study and development of an IVR platform for education and training on safety procedures in confined spaces. While the use of these technologies makes it possible to overcome some limitations of the traditional approach to training, it also presents some challenges. For example, the training requires the implementation of proper safety practices within the virtual scenarios through the detailed definition of specific storyboards. This requirement calls for a multidisciplinary approach with the direct involvement of both VR developers and occupational safety experts. Another challenge consists in finding an effective trade-off between the requirement of recreating a realistic confined space virtual scenario and the constraints imposed by the IVR technology. From this point of view, it is very important to implement interaction metaphors that are as natural and intuitive as possible; despite the technological readiness of IVR, the available ones need further development in order to guarantee optimal usability and user experience. Despite such challenges, we successfully implemented the training platform prototype, and we defined the experiment design that will allow us to conduct a quantitative study to evaluate the effectiveness of the proposed training approach.

Acknowledgements. The SPIRiT project was supported by the BRIC 2019 grant funded by the Italian National Institute for Insurance against Accidents at Work (INAIL).


References
1. Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro (INAIL): Ambienti confinati e/o sospetti di inquinamento e assimilabili - Prodotti di ricerca dell'Istituto. https://www.inail.it/cs/internet/comunicazione/pubblicazioni/catalogo-generale/pubbl-ambienti-confinati-aspetti-legislativi-caratterizzazione.html. Accessed 25 Jan 2022
2. Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro (INAIL): Infor.MO, Gli ambienti confinati - INAIL. https://www.inail.it/cs/internet/comunicazione/pubblicazioni/catalogo-generale/pubbl-informo-gli-ambienti-confinati.html. Accessed 25 Jan 2022
3. US Bureau of Labor Statistics: Fatal occupational injuries involving confined spaces (2018). https://www.bls.gov/iif/oshwc/cfoi/confined-spaces-2011-18.htm. Accessed 25 Jan 2022
4. Zhao, D., Lucas, J.: Virtual reality simulation for construction safety promotion. Int. J. Inj. Contr. Saf. Promot. 22(1), 57–67 (2015). https://doi.org/10.1080/17457300.2013.861853
5. Wilkinson, T., Burns, K., Simpson, A., Walker, K., Hunter, M.: Improving the control of confined-space entry through the implementation of an operational standard and competence-based training. In: SPE/APPEA International Conference on Health, Safety and Environment in Oil and Gas Exploration and Production 2012: Protecting People and the Environment - Evolving Challenges, vol. 3, pp. 2401–2408. Society of Petroleum Engineers (2012). https://doi.org/10.2118/157551-ms
6. Li, H., Chan, G., Skitmore, M.: Visualizing safety assessment by integrating the use of game technology. Autom. Constr. 22, 498–505 (2012). https://doi.org/10.1016/j.autcon.2011.11.009
7. Pedro, A., Le, Q.T., Park, C.S.: Framework for integrating safety into construction methods education through interactive virtual reality. J. Prof. Issues Eng. Educ. Pract. 142(2), 04015011 (2016). https://doi.org/10.1061/(ASCE)EI.1943-5541.0000261
8. Sacks, R., Perlman, A., Barak, R.: Construction safety training using immersive virtual reality. Constr. Manag. Econ. 31(9), 1005–1017 (2013). https://doi.org/10.1080/01446193.2013.828844
9. Makransky, G., Borre-Gude, S., Mayer, R.E.: Motivational and cognitive benefits of training in immersive virtual reality based on multiple assessments. J. Comput. Assist. Learn. 35(6), 691–707 (2019). https://doi.org/10.1111/jcal.12375
10. Colombo, S., Nazir, S., Manca, D.: Immersive virtual reality for training and decision making: preliminary results of experiments performed with a plant simulator. SPE Econ. Manage. 6(4), 165–172 (2014). https://doi.org/10.2118/164993-PA
11. Nazir, S., Manca, D.: How a plant simulator can improve industrial safety. Process Saf. Prog. 34(3), 237–243 (2015). https://doi.org/10.1002/prs.11714
12. Di Donato, L., Longo, F., Ferraro, A., Pirozzi, M.: An advanced solutions for operators' training working in confined and/or pollution suspected space. Procedia Manuf. 42, 254–258 (2020). https://doi.org/10.1016/j.promfg.2020.02.080
13. Longo, F., et al.: Integrating physical and virtual game-based simulation for operators' training to enhance learning effectiveness: an application in hazardous industrial spaces. Int. J. Simul. Process Model. 16(2), 130–146 (2021). https://doi.org/10.1504/IJSPM.2021.115866
14. Lu, S., Xu, W., Wang, F., Li, X., Yang, J.: Serious game: confined space rescue based on virtual reality technology. In: PervasiveHealth: Pervasive Computing Technologies for Healthcare, pp. 66–73 (2020). https://doi.org/10.1145/3442705.3442716
15. NIOSH-Issued Publications: Working in Confined Spaces (80-106) | NIOSH | CDC (1979). https://www.cdc.gov/niosh/docs/80-106/default.html. Accessed 25 Jan 2022
16. Manghisi, V.M., Uva, A.E., Fiorentino, M., Gattullo, M., Boccaccio, A., Evangelista, A.: Automatic ergonomic postural risk monitoring on the factory shopfloor - the Ergosentinel tool. Procedia Manuf. 42, 97–103 (2020). https://doi.org/10.1016/j.promfg.2020.02.091


17. Ward, M., Azuma, R., Bennett, R., Gottschalk, S., Fuchs, H.: A demonstrated optical tracker with scalable work area for head-mounted display systems. In: Proceedings of the Symposium on Interactive 3D Graphics, vol. Part F1296, pp. 43–52 (1992). https://doi.org/10.1145/147156.147162
18. Bozgeyikli, E., Raij, A., Katkoori, S., Dubey, R.: Point & Teleport locomotion technique for virtual reality. In: Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play, CHI PLAY 2016, pp. 205–216 (2016). https://doi.org/10.1145/2967934.2968105
19. Argelaguet, F., Hoyet, L., Trico, M., Lécuyer, A.: The role of interaction in virtual embodiment: effects of the virtual hand representation. In: Proceedings of the IEEE Virtual Reality, July 2016, pp. 3–10 (2016). https://doi.org/10.1109/VR.2016.7504682
20. Lee, S., Seo, J., Kim, G.J., Park, C.-M.: Evaluation of pointing techniques for ray casting selection in virtual environments. In: 3rd International Conference on Virtual Reality and Its Application in Industry, vol. 4756, pp. 38–44 (2003). https://doi.org/10.1117/12.497665
21. Kirkpatrick, D.L.: The four levels of evaluation. In: Brown, S.M., Seidner, C.J. (eds.) Evaluating Corporate Training: Models and Issues, pp. 95–112. Springer, Dordrecht (1998). https://doi.org/10.1007/978-94-011-4850-4_5
22. Brooke, J.: SUS—a quick and dirty usability scale. In: Usability Evaluation in Industry, vol. 189, no. 194, pp. 4–7 (1996)
23. Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (ed.) USAB 2008. LNCS, vol. 5298, pp. 63–76. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89350-9_6
24. Manghisi, V.M., Uva, A.E., Fiorentino, M., Gattullo, M., Boccaccio, A., Monno, G.: Enhancing user engagement through the user centric design of a mid-air gesture-based interface for the navigation of virtual-tours in cultural heritage expositions. J. Cult. Herit. 32, 186–197 (2018). https://doi.org/10.1016/j.culher.2018.02.014
25. Kennedy, R.S., Lane, N.E., Berbaum, K.S., Lilienthal, M.G.: Simulator sickness questionnaire: an enhanced method for quantifying simulator sickness. Int. J. Aviat. Psychol. 3(3), 203–220 (1993). https://doi.org/10.1207/s15327108ijap0303_3

Tintoretto Unveiled: Interactive Virtual Experience for Artworks

Beatrice Aruanno1(B), Sandro Barone1, Francesco Tamburrino1, and Mario Covarrubias Rodriguez2

1 Department of Civil and Industrial Engineering, University of Pisa, Largo Lucio Lazzarino 1, 56122 Pisa, PI, Italy
[email protected]
2 Department of Mechanical Engineering, Polo Territoriale di Lecco, Politecnico di Milano, Via G. Previati, 1/c, 23900 Lecco, Italy

Abstract. Conventionally, visitors of exhibitions cannot directly interact with artworks but remain mainly passive. This work presents a new way to discover paintings through an extended reality application, with the aim of enhancing visitors' engagement. The workflow consists of digitally recreating a painting. All its constitutive elements are contoured and then removed from the artwork. These elements are unveiled by the visitors with gesture interaction. The discovery of each element is followed by additional information on the subject. The application also proposes an alternative experience where the user paints the components of the artwork with gestures. The hands' position is tracked by the Leap Motion Controller, an optical sensor by Ultraleap. The process has been applied to one of Tintoretto's pieces of art: "L'Annunciazione del Doge Grimani". The extended reality application has been experienced by over twenty thousand visitors at the dedicated exhibition. The numerous positive comments received from both visitors and guides are encouraging.

Keywords: Extended reality · Cultural heritage · Exhibition · User interaction

1 Introduction

In the majority of exhibitions, visitors cannot interact directly with artworks and remain passive. Traditional museum experiences are based on showing artifacts and displaying information to the visitors. Gradually, museum spaces are extending to incorporate educational, entertainment, and gaming areas [1]. All over the world, more and more museums are adopting information and communication technologies (ICTs) to personalize and enrich cultural visits. Audio guides, since their introduction in the 1950s, and interactive displays have been among the first technological solutions used by tourists [2]. More recently, mobile phone tours and QR codes have become widespread in the cultural heritage environment. Nowadays, museums and art galleries are continually exploring new ways to attract visitors and enrich their experience [3]. Making content appealing and relevant through the adoption of these new technologies can be difficult. However, state-of-the-art technologies could enhance the museums' experiences


by transforming them into high-tech cultural heritage spaces and providing high-quality interactive content to the visitors [4]. In 1994 Paul Milgram introduced the reality-virtuality continuum [5]. It defines the transition between the real physical world and a completely digital environment generated by the computer. The umbrella term XR (eXtended Reality) aims to include all the technologies utilized in the reality-virtuality continuum. It includes Augmented Reality (AR), Augmented Virtuality (AV), Virtual Reality (VR), and Mixed Reality (MR). AR expands the real world with virtual elements. For instance, a person can use mobile phones, tablets, or smart glasses to frame a real spot, and virtual elements are overlaid on the physical world viewed through the device's camera. AV, on the other hand, adds to a virtual environment some real elements captured in real-time from the physical world. A common example of AV is the addition of the user's hands to the virtual world to improve the embodiment of the experience. VR includes computer-generated virtual environments that the user can explore through fully immersive head-mounted displays. MR has been introduced more recently. It blends VR and AR with the difference that virtual and physical elements can interact with each other in real-time. For instance, an MR application can recognize physical surfaces, and virtual objects can hit or be occluded by them. Museums are taking advantage of ICTs to support physical artifacts with digital content using XR solutions, creating an outstanding blend of novel experiences for the audience [6, 7]. As evidence of the visitors' need to feel more involved in the cultural experience, in the last decades there has been a considerable increase of XR applications in the cultural heritage field. An example of an AR application that combines real life with history can be seen in the "Streetmuseum 2.0". Thanks to the vast collection of historical photographs of the Museum of London, the user can select a location on a map and visualize the ancient image of the place. The AR application also recognizes real sites and overlays the historic photograph over the current view [8]. The "Chicago 00 Project" is another example of bringing art outside the museum walls. The project is divided into multiple episodes, each describing one site of Chicago during a historical event or period (e.g., the Great Chicago Fire - 1871, A Century of Progress, or the World's Columbian Exposition - 1893). For each episode, a team of the Chicago History Museum digitized images of the site, researched the exact spot and viewpoint of the photographer when these images were taken, and matched them with present streets and perspectives to describe the place with AR from the historical point of view [9]. However, the majority of XR solutions can be found in museums and art galleries. At the Van Gogh Museum (Amsterdam), the ARLab developed an interactive experience where the user could examine the multi-spectral captures of Van Gogh's painting "The bedroom" (infrared, ultra-violet, and x-ray pictures) through an interactive display using a digital spray can [10].


The Cleveland Museum of Art transformed the visitors' mobile phones into a pocket guide with the "ArtLens" application. It allows the users to select their favorite artworks to define a personalized tour of the museum. In addition, the app can scan and recognize some pieces of art and provide additional multimedia content [11]. Similar approaches, which exploit XR to navigate and enrich the visits with additional and interactive content, can also be experienced with the "MRsive" app designed for the Art Gallery of Ontario [7], the "MuseumEye" application for the Egyptian Museum in Cairo [1], and the "Imaging Novecento" mobile application for the Museo Novecento (Florence, Italy) [13]. The National Museum of Natural History in Paris launched a new immersive exhibition named "Revivre: les animaux disparus". By using the Microsoft HoloLens 2 MR headset, visitors can interact with numerous species of extinct animals [14]. AR can also be used to produce intuitive and natural ways to discover heritage. This is the solution adopted by the Almonia archeological museum in Valencia, which combined historical reconstruction with theatrical techniques to create narrative storytelling that accompanies the visitor through the exhibition [15]. Some solutions use immersive VR in exhibitions. According to the literature, VR solutions present some drawbacks: they can isolate the visitors, distract them from the real pieces of art, and require trained staff to assist inexperienced users. However, they can also offer several advantages to museums and art galleries, including the enhancement of the visitors' engagement and experience, the capability to attract new visitors and younger audiences, and the possibility to offer customized experiences [16]. One example of the usage of VR technology for cultural heritage can be found in the British Museum of London, where visitors can explore the reconstruction of numerous 3D-scanned pieces of the museum's Bronze Age collection [17]. Also, the Tate Modern of London created a VR experience, focused on the artist Amedeo Clemente Modigliani, where the visitor can explore a virtual reconstruction of Modigliani's Parisian studio using a head-mounted display. Another aspect to consider is that adding the "fun factor" with gamification in cultural heritage could generate engagement and new motivational factors to learn efficiently [2]. An example of the usage of gamification in exhibitions can be seen at the Bovington's Tank Museum in the United Kingdom. Due to the impossibility of obtaining the real World War II German tank Sturmtiger, the museum used XR technology to show it to the visitors. Moreover, the users can disassemble the tank, explore the inside, or even shoot its rockets [18]. As can be seen from the literature review, museums and art galleries around the world are investigating XR technologies and looking for new approaches to improve the visitors' experience. The usage of gamification and XR technology to involve the user in interactive experiences is still an open challenge in the literature. Gamification consists of a playful experience that helps the user find engagement and personal motivation with serious content, thus reinforcing the engagement of the experience [19]. The work developed in this paper presents an interactive XR application to explore artworks innovatively and enhance the visitors' engagement. The application was purposely designed to be simple according to the museum's directives. The aim is to allow


the user to immediately discover the piece of art and the details of the painting interactively, also using gamification, in a more natural, appealing, and active way compared to the traditional approach used in the majority of museums and art galleries.

2 Digital Painting Workflow

The preliminary step of the workflow consists in digitally recreating a painting. It starts from a high-resolution image of the actual painting. Then, all its constitutive elements are manually contoured using an open-source image editor (GIMP), saved in separate files, and then removed from the initial image. The background of the painting is reconstructed based on the actual textures and colors present in the areas that remain visible. Unity3D has been used to develop and build the XR application. It is a game-engine development tool that gives the creator the ability to produce multiplatform experiences and games. To let multiple visitors experience or observe the application at the same time, it was decided to use a widescreen monitor as the output device for playing the two activities. The application proposes two different activities to the visitor. The first one starts by showing the empty background of the painting without all the constitutive subjects. The user is then required to unveil one detail at a time, sequentially. The detail appears in the scene, followed by further information about that subject related to other pieces of art (i.e., the role of that element in a set of other famous paintings in which it is present). The second activity starts with a black background. Instead of revealing details one at a time, the user is required to color each character of the painting. Similar to the first activity, in this task further information regarding the colored characters also appears on the screen, thus deepening the user's knowledge. In this activity, the subject is explained and described (e.g., its pose, the colors used by the artist, or its relevance to the painting). Art history experts provided feedback and suggestions during the development of the application. In particular, their advice focused on the digital recreation of the paintings, the process of identifying their main parts, the design of the activities' structure, the selection of the additional material, and the descriptions of each element of the piece of art. The Ultraleap Leap Motion Controller (LMC) was used to interact with the artworks naturally. The LMC is an optical non-contact tracking input device that captures the position and movements of the hands. It illuminates the hand with three infrared LEDs and captures the images with two CCD cameras. Combining these images with a virtual model of the human hand, it can reconstruct a virtual skeleton of the upper limb's joints and reproduce the movements of the user in the XR application. Even if the sensor presents some limitations, such as sensitivity to bright light conditions, there is also evidence in the literature that applications designed for the LMC achieve good usability ratings [20]. This way, if a visitor moves the hand over the sensor, a virtual hand that mimics their movements can interact with the elements in the virtual experience. In particular, if the virtual hand waves over the area where an element is depicted in the painting, the user reveals (or colors) the specific subject. The LMC can be integrated with the Unity 3D development software through its dedicated SDK and plugins.


To unveil (or color) each element, the user must cover a given percentage of its image with the movements of the virtual hand. Once that percentage is reached, the experience proceeds with the new content. Considering that the majority of the users would never have used the LMC before, each of the two activities provides two levels of difficulty. The default value is 50% of the image, but there is also a simplified version with a threshold of 20%.
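A possible way to track the covered percentage is to rasterize each element's region into a coarse grid and mark the cells swept by the virtual hand, as in the following sketch; the class name, grid resolution, and coordinate convention are illustrative assumptions, not the exhibition's actual code.

using UnityEngine;

// Minimal sketch: the element's rectangular region (in painting/screen coordinates)
// is divided into a grid; cells touched by the virtual hand are marked, and the
// element is unveiled once the covered fraction exceeds the chosen threshold.
public class CoverageTracker
{
    private readonly Rect _region;        // bounding rectangle of the element
    private readonly bool[,] _covered;    // visited cells
    private readonly int _cols, _rows;
    private int _coveredCells;

    public CoverageTracker(Rect region, int cols = 20, int rows = 20)
    {
        _region = region;
        _cols = cols;
        _rows = rows;
        _covered = new bool[cols, rows];
    }

    // Called every frame with the current position of the virtual hand.
    public void ReportHandPosition(Vector2 position)
    {
        if (!_region.Contains(position)) return;

        int cx = Mathf.Min(_cols - 1, (int)((position.x - _region.xMin) / _region.width * _cols));
        int cy = Mathf.Min(_rows - 1, (int)((position.y - _region.yMin) / _region.height * _rows));
        if (!_covered[cx, cy])
        {
            _covered[cx, cy] = true;
            _coveredCells++;
        }
    }

    public float CoveredFraction => (float)_coveredCells / (_cols * _rows);

    // threshold = 0.5f (default difficulty) or 0.2f (simplified version)
    public bool IsUnveiled(float threshold) => CoveredFraction >= threshold;
}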

3 Case Study

3.1 The Painting

A special occasion gave the possibility to implement the method in an actual exhibition named "Il Tintoretto rivelato" [21]. The event focused on the Renaissance painter Jacopo Robusti, better known as Tintoretto, and it was dedicated to one of his less-known paintings of the XVI century: "L'Annunciazione del Doge Grimani". The event was held in Lecco, Italy, at the "Palazzo delle Paure". The opportunity to admire the real artwork and combine it with XR technologies inspired the "Tintoretto unveiled" application. Unlike traditional exhibitions, "Il Tintoretto rivelato" wanted to be innovative and actively involve the visitor in discovering art. The painting "L'Annunciazione del Doge Grimani" (Fig. 1) depicts the moment when the Archangel Gabriel appears to Mary to give her the annunciation. The scene takes place inside a large and elegant room, in Renaissance style, with a glass window from which mountains in the distance can be glimpsed. Mary is kneeling, intent on praying on a sumptuous kneeler with an open "Libro d'Ore". The scene presents itself to the visitor, who sees the basket with the clothes in the foreground and the chair with the embroidery cushion. The Archangel Gabriel, who appears with a white dove surrounded by a halo of dazzling light, a symbol of the Holy Spirit, holds a white lily in his left hand, representing purity.

Fig. 1. “L’Annunciazione del Doge Grimani” by Tintoretto [21]


The painting was first digitally recreated. Based on the suggestions of an art history expert, ten main items in the room were identified (Fig. 2). Each of them was manually contoured using the image-editor software GIMP, saved as a distinct image, and then removed from the global picture. The image of the remaining empty room, with only the floor and the walls, was reconstructed based on its textures.

Fig. 2. The ten constitutive elements of the painting: the basket (1), the chair (2), the bed (3), the kneeler (4), the Libro d’Ore (5), Mary (6), the open window (7), the archangel Gabriel (8), the Holy Spirit (9), and the white lily (10).

3.2 Activity A: Unveil

The application was developed once all the elements were contoured. It is divided into two activities: "Unveil" and "Color". Each of them can be played with one of the two levels of difficulty (the hand has to move over the specific subject and cover 50% or 20% of its image). Unveil starts with the background of the painting: the empty room with dark walls and a tiled floor. Then, the visitor has to unveil the ten removed elements one at a time, following the order illustrated in Fig. 2. To unveil an element, the visitor has to put their hand over the LMC and move the virtual hand on the monitor. Once the virtual hand finds the position of the element in the painting, the element begins to reveal itself. The LMC has a limited interaction volume that can be represented as a hemisphere with a field of view of 140° × 120°, extending in height from 10 cm to 60 cm over the sensor [22]. The LMC was placed below a widescreen, facing upward and slightly tilted towards the user, as shown in the virtual representation of Fig. 3a. The application requires that the virtual hand generated by the LMC be able to easily reach each point of the image on the monitor. However, reaching the corners was not always simple with the default scale of the virtual hand. For this reason, the 3D model has been scaled up. However, the resulting size would have covered a relevant part of the painting. To minimize its impact, as can be seen in Fig. 3b, the design is minimal and white-colored. The shape of the virtual hand controlled by the input device has


been decided in collaboration with the person in charge of the exhibition "Il Tintoretto rivelato". The palm is a flat ring and the fingers are schematized with round sticks. Once the user unveils the required percentage of the image of the element, it appears entirely in the room. Then, new pieces of information about the subject are displayed. For instance, as shown in Fig. 3b, when the visitor reveals the chair, the application gives the possibility to learn more about two other paintings by Tintoretto in which the chair is present: "Annunciazione" (The Annunciation, 1582–88) in Fig. 3c, and "Ultima cena" (The Last Supper, 1561–62) in Fig. 3d.


Fig. 3. The setup of the system, obtained with the Ultraleap [22] camera position visualizer (a), and screenshots from activity A: Unveil. The virtual hand reveals the chair (b). The application displays a first illustration: the painting “Annunciazione” with an explanation of the role of the chair in the paintings (c). When the user proceeds by clicking the white button, the application shows the second one: “Ultima cena” (d).

A picture of each painting is shown, and a short description of the role of the chair in those other art pieces is given. An interactive white button closes the illustrations, when clicked with the virtual hand, and lets the user proceed with the activity and with the discovery of the next element to reveal.

3.3 Activity B: Color

The second activity starts with a black background and one element of the painting displayed in black and white. The aim is to move the hand over the image to color the subject.


Once the visitor reaches the required percentage fixed by the level of difficulty, the original colors used by Tintoretto will appear. A description of the subject complements the illustration. Figure 4 shows an example of this activity. Initially, the visitor colors the Archangel Gabriel with the virtual hand controlled by the LMC (Fig. 4a). Once 50% of the image has been colored, the application completes the remaining percentage and a side text describes the Archangel’s pose and the reasons behind Tintoretto’s choice (Fig. 4b).


Fig. 4. Screenshots from activity B: Color. By moving the virtual hand over the Archangel Gabriel, the user colors it (a). Once the required percentage is reached, a description of the subject appears (b).

3.4 The Experience at the Exhibition

The actual setup of the interactive experience was built in a dedicated room of the “Palazzo delle Paure”, where the exhibition took place, just before the room with the actual painting. It included a personal computer running the XR application, connected to a 60-inch widescreen monitor and an LMC. The LMC was placed below the widescreen, at its base. So, by moving a hand in front of the screen, the system captured the movement and the visitor could interact in a more intuitive and natural way. During the exhibition, the application was used following a double approach. The first was as a support for the guides. During guided tours, before moving to the real painting, the guides used the application to describe the piece of art and the importance of each element. The second approach consisted in letting the visitors discover the painting autonomously by using the two activities. A trained guide was always present next to the system to explain or assist the visitor in case of curiosity or necessity. In this second case, the trained guide first showed the visitor how to use the LMC. During the experience, the visitor was encouraged to think aloud to express their impressions about the activities. Then, after the experience, the guide administered a brief oral questionnaire to the user and took note of the responses. Questions regarded the overall evaluation of the application, the usability of the system, and any observations or suggestions about the experience. It was decided to use this oral approach during the activities to avoid any kind of restriction on the subject of the visitor’s observations.


This way, each voluntary participant who wanted to try the application was able to use it as much as they desired before moving to the real painting. On the other hand, it allowed us to obtain preliminary feedback on the system from all the users who experienced the application. During the two-month exhibition “Il Tintoretto rivelato”, over 22,000 visitors attended the exposition, and the system was experienced by the majority of them.

4 Discussion

4.1 Guides’ Experience

Analyzing the outcomes of the exhibition, it is possible to distinguish between the guides’ use of the application and the visitors’ experience. Guides highlighted that the application assisted them during their exposition, especially the Unveil activity. They also noticed that, by unveiling one part of the painting at a time, visitors could immediately locate it in the painting and stay more focused on the explanation of the specific element. Some people asked whether they could try the application or help the guides by revealing the parts with the LMC. Guides noticed a higher interest in the exhibition due to the innovative interaction between people and artworks, manifested also in questions related to the numerous pieces of information proposed by the application. Furthermore, guides observed that the gamification of the painting attracted numerous people, who spent, on average, more time with the painting than in traditional exhibitions. This aspect was also relevant for children.

4.2 Visitors’ Experience

From the visitors’ point of view, the majority of them were curious about the application. With the oral questionnaire, guides asked visitors for their opinions on the application and invited them to think aloud during the experience. During autonomous or guided group visits, even though not all of the visitors directly used the LMC, the use of a widescreen monitor was fundamental to allow all members of the group to follow the activities and read the further details related to the piece of art. Some visitors appreciated the ease of use of the application and the additional pieces of information associated with each interactive element of the painting. During the experience, the trained guides, who assisted the visitors with the system, invited them to express their impressions about the application. Guides also asked whether visitors had already tried the input device, and almost all of them said they had never heard of it. Nevertheless, they generally provided positive feedback, especially on the ease of use of the system and the engagement generated by the application. Almost all the visitors who decided to use the LMC were able to play with the activities after a brief explanation of the technology and a few tries to understand how to use it. Some visitors stated that the possibility to reduce the level of difficulty, moving the percentage to cover with the hand from 50% to 20%, was important to allow people who had more difficulties with the technology to complete the experience. Many visitors emphasized that, playing with the application, they spotted a higher number of details (e.g. the white lily) in the painting compared to traditional exhibitions. Some visitors stated that, when observing the original painting, they looked for the details that the application had revealed.


4.3 Future Works

The “Tintoretto Unveiled” project represents an exploratory case study that highlighted the potential of XR technologies to improve the visitors’ experience in the cultural heritage field. The same approach can be adapted to different pieces of art. Visitors appreciated the ease of use of the application and the new interaction method. However, even though the majority of them were able to use the application without any difficulty, some needed a few attempts. According to the visitors’ feedback, they appreciated the option to reduce the percentage of the image to unveil or color. Another suggestion, which will be evaluated for future developments, was to add a digital or physical reference in the real world indicating where the user should place the hand over the sensor. Furthermore, increasing the gamification aspect could also be an interesting path to follow, if compatible with the museum’s regulations. Finally, the exhibition “Il Tintoretto rivelato” was a particular occasion to apply XR technology to an actual painting in a real exhibition framework. It allowed us to obtain abundant feedback from visitors and guides and to observe their behavior during the experience.

5 Conclusions

This work presents a new interactive approach to discovering artworks. The scientific literature describes the use of some XR solutions to provide more engaging experiences in the cultural heritage field. However, in the majority of exhibitions, visitors mainly remain passive in the learning process. The application “Tintoretto unveiled” exploits XR technology and gamification to let the visitor interactively explore a painting and its details. Using the LMC, the user can interact with a digital replica of the piece of art and reveal or color its constitutive elements. Each component is supplemented by additional explanations and information. The workflow has been applied to Tintoretto’s painting “L’Annunciazione del Doge Grimani” at the dedicated exhibition. Numerous visitors attended and used the application. Both guides and visitors expressed positive feedback about the experience. Interesting observations emerged, especially about enhanced engagement and interaction. It is relevant to emphasize that the presented interactive approach can also be extended to numerous pieces of art in the field of cultural heritage.

Acknowledgments. This paper and the work behind it would not have been possible without the financial and technical support of the Prime-VR2 H2020 project [grant number 856998] and its members. The authors would also like to thank Laura Polo d’Ambrosio for making this experience feasible within the exhibition and for her advice about art history.

References 1. Hammady, R., Ma, M., Strathern, C., Mohamad, M.: Design and development of a spatial mixed reality touring guide to the Egyptian museum. Multimedia Tools Appl. 79(5–6), 3465– 3494 (2019). https://doi.org/10.1007/s11042-019-08026-w


2. tom Dieck, M.C., Jung, T.H., tom Dieck, D.: Enhancing art gallery visitors’ learning experience using wearable augmented reality: generic learning outcomes perspective. Current Issues Tourism 21(17), 2014–2034 (2018). https://doi.org/10.1080/13683500.2016.1224818 3. Ding, M.: Augmented Reality in Museums, Pittsburgh, PA, USA, May 2017 4. Ioannides, M., Davies, R.: ViMM - Virtual Multimodal Museum: A Manifesto and Roadmap for Europe’s Digital Cultural Heritage (2018). https://doi.org/10.1109/IS.2018.8710556 5. Milgram, P., Takemura, H., Utsumi, A., Kishino, F.: Augmented reality: a class of displays on the reality-virtuality continuum. In: Telemanipulator and Telepresence Technologies, vol. 2351, December 1994. https://doi.org/10.1117/12.197321 6. Margetis, G., Apostolakis, K.C., Ntoa, S., Papagiannakis, G., Stephanidis, C.: X-reality museums: unifying the virtual and realworld towards realistic virtual museums. Appl. Sci. (Switz.) 11(1), 338 (2021). https://doi.org/10.3390/app11010338 7. Caggianese, G., de Pietro, G., Esposito, M., Gallo, L., Minutolo, A., Neroni, P.: Discovering Leonardo with artificial intelligence and holograms: a user study. Pattern Recogn. Lett. 131, 361–367 (2020). https://doi.org/10.1016/j.patrec.2020.01.006 8. Zolfagharifard, E.: Streets of London now... and then: stand still and picture yourself in history with app that creates hybrid images of present and past. Daily Mail, London, 27 February 2014. https://www.dailymail.co.uk/sciencetech/article-2567739/Streetmuseum-app-createshybrid-images-London.html. Accessed 18 Jan 2022 9. Rhodes, G.A., Russick, J.: The Chicago 00 Project. Chicago History Museum. https://www. chicago00.org/. Accessed 20 Jan 2022 10. van Eck, W., Kolstee, Y.: The augmented painting: playful interaction with multi-spectral images. In: 11th IEEE International Symposium on Mixed and Augmented Reality 2012 Arts, Media, and Humanities Papers, ISMAR-AMH 2012, pp. 65–69 (2012).https://doi.org/ 10.1109/ISMAR-AMH.2012.6483990 11. ArtLens App. The Cleveland Museum of Art. https://www.clevelandart.org/artlens-gallery/ artlens-app. Accessed 20 Jan 2022 12. Al Rabbaa, J., Morris, A., Somanath, S.: MRsive: an augmented reality tool for enhancing wayfinding and engagement with art in museums. In: Stephanidis, C. (ed.) HCII 2019. CCIS, vol. 1034, pp. 535–542. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-235253_73 13. Becattini, F., Ferracani, A., Landucci, L., Pezzatini, D., Uricchio, T., Del Bimbo, A.: Imaging novecento. A mobile app for automatic recognition of artworks and transfer of artistic styles. In: Ioannides, M., et al. (eds.) EuroMed 2016. LNCS, vol. 10058, pp. 781–791. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48496-9_62 14. Saola: Revivre (2022). https://www.saolastudio.com/revivre. Accessed 22 Mar 2022 15. Martí-Testón, A., Adolfo Muñoz, J., Solanes, E., Gracia, L., Tornero, J.: A methodology to produce augmented-reality guided tours in museums for mixed-reality headsets. Electronics 10(23), 2956 (2021). https://doi.org/10.3390/electronics10232956 16. Shehade, M., Stylianou-Lambert, T.: Virtual reality in museums: exploring the experiences of museum professionals. Appl. Sci. (Switz.) 10(11), 4031 (2020). https://doi.org/10.3390/ app10114031 17. Rae, J., Edwards, L.: Virtual reality at the British Museum: what is the value of virtual reality environments for learning by children and young people, schools, and families? In: Museums and the Web, MW2016, Los Angeles, CA, USA, 28 January 2016. 
https:// mw2016.museumsandtheweb.com/paper/virtual-reality-at-the-british-museum-what-is-thevalue-of-virtual-reality-environments-for-learning-by-children-and-young-people-schoolsand-families/. Accessed 20 Jan 2022 18. Singletary, C.: Bovington Tank Museum is Using VR/AR for an Exhibit. UploadVR, 18 July 2017. https://uploadvr.com/bovington-museum-ar-vr-tank-exhibit/. Accessed 20 Jan 2022


19. Reiners, T., Wood, L.C. (eds.): Gamification in Education and Business. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-10208-5 20. Aruanno, B., Covarrubias, M.: Handy novel hand exoskeleton for personalized rehabilitation. Comput. Aided Des. Appl. 19(3), 405–425 (2022). https://doi.org/10.14733/CADAPS.2022. 405-425 21. Il mistero dell’arte Tintoretto rivelato (2019). https://tintorettorivelato.it/. Accessed 21 Jan 2022 22. Ultraleap for developers. https://docs.ultraleap.com/. Accessed 28 Mar 2022

X-Reality for Preliminary Evaluation and Training of 3D Laser Scanning Process of Large Products

Alessandro Greco(B), Antimo Cardilicchio, and Salvatore Gerbino

University of Campania “L. Vanvitelli”, Via Roma 29, 81031 Aversa, CE, Italy
[email protected]

Abstract. Over the years, high attention has been paid to Reverse Engineering (RE) as a methodological approach for reconstructing 3D models of existing objects, to carry out analyses aimed at accurately estimating product performance and at conducting accurate inspections and quality controls. Although these techniques are widely used, RE has limitations related to both the user’s operations and the range of high-quality acquisition. Consequently, large objects make it difficult to preserve scanning accuracy unless alternative solutions are found, and they require high repeatability, making well-trained personnel necessary. For these purposes, Extended Reality (XR) could pave the way for the development of advanced training systems and the implementation of optimized interfaces for testing the adopted design solutions. This research deals with the development of an XR application for simulating the 3D scanning process of the vertical stabilizer torque box of an airplane and providing both an effective virtual training tool for the operators and a supporting tool for testing design solutions.

Keywords: Extended Reality · 3D scanning · Virtual training

1 Introduction

Manufacturing industries are continuously searching for methodological solutions to improve and optimize production lines, reducing risks, times and costs, and preserving the quality and the efficiency of their products. Over the years, high attention has been paid to Reverse Engineering (RE) as a methodological approach for reconstructing 3D models of existing objects, to carry out analyses aimed at estimating product performance ever more accurately. Among the technologies enabling RE, laser scanning has also found great application in standalone use, in order to perform accurate inspection and quality control. The literature reports research on the industrial use of 3D laser scanning. Guo et al. [1] presented an automated geometric quality inspection technique to estimate the geometric properties of prefabricated modules using 3D laser scanning. In [2], the authors developed an online geometry estimation methodology for continuous monitoring during the Additive Manufacturing (AM) process by using a laser line scanner.


Helle and Lemu [3] used 3D laser scanning to evaluate dimensional errors of AM products. Regarding the aeronautical field, in [4] the authors combined laser-based Reverse Engineering with ultrasonic tests for assessing the quality of composite components. Laser scanning is also used for reconstructing large and complex buildings, such as metro stations, and for detecting design errors [5]. RE techniques are also used for performing accurate analyses of product performance, creating high-fidelity 3D models of components. Che et al. [6] carried out a Finite Element simulation on a laser-scanned 3D model of a milling tool for analyzing its working performance. Liu et al. [7] used laser scanning and Finite Element Analysis (FEA) for evaluating deformations in spot-welded panels. Reverse Engineering is also used for estimating mechanical properties of yarns [8] or for predicting the gap volume for aircraft wing assembly [9], similar to the case study of this article, described in the next sections. However, laser scanning has limitations related to both the user’s operations and the range of high-quality acquisition. In fact, high-performance scanners, especially those based on tracking systems, require great precision from the operator in order to guarantee high acquisition accuracy. In addition, ideal operating conditions should ensure that the device operates within its optimal range of distance from the tracker, beyond which performance drops drastically. Consequently, large objects seem to exclude the possibility of preserving scanning accuracy, as they can prevent laser scanning from working under optimal conditions throughout the whole session, requiring multiple acquisitions that need a subsequent registration phase. Modern laser scanner systems tend to simplify 3D scanning in the case of large objects, but there is still a need for well-trained staff to use the device in the best possible way, especially in applications that strongly stress the device. Consequently, the testing of design solutions and the training of operators are highly time-consuming processes, which could be accelerated through the use of Virtual Reality (VR), a great support tool. In the literature, much research deals with the proposal of VR as a supporting tool for working environment design and workers’ training. Simonetto et al. [10] considered a combined use of VR and motion capture for designing assembly workplaces. Pérez et al. [11] proposed a fully immersive VR environment for training and simulation of robot control, by using commercial gaming technologies. Also concerning the construction field, in [12] the authors implemented VR technology to train safety protocols in the prestressed/precast concrete industry. Similarly, Gazzotti et al. [13] used immersive VR and AR for studying the design and ergonomics of fusion engineering workspaces. In some studies, such as [14], VR has been coupled with 3D laser scanning for improving the communication of construction projects. Considering the current state of the art, the potential of Virtual Reality (in its various forms) applied to new industries stands out. It can pave the way for the development of advanced training systems and the implementation of optimized interfaces for testing the adopted solutions. For industrial applications, XR not only provides new tools for learning and deepening one’s knowledge:


it enables an unparalleled immersive experience in which the operator can tangibly perceive dangers that they would otherwise only know to exist, without being aware of how they interact with them. XR environments also offer a huge advantage in the field of prototyping, a process that can be extremely costly and time-consuming, even if it is necessary to achieve the quality standards required by the customer. Through XR it is possible to experiment with and test very different ways of achieving the same result without having to physically manufacture the final product. To effectively manage the limitations of the 3D scanning process for large objects, this paper deals with the development of an XR application for simulating the 3D scanning process of the vertical stabilizer torque box of an airplane, providing an effective tool both for training the operators and for testing multiple design scenarios, which could help overcome the scanning accuracy issues described above. The application has been realized using the Unity® graphics engine and deployed by means of an HTC® Vive Pro Head Mounted Display (HMD). Experimental tests have been carried out in order to evaluate both an alternative workstation layout solution and the effectiveness of XR as a training tool.

2 Test Case

The research focuses on studying the 3D scanning process of a vertical stabilizer torque box of an airplane (Fig. 1A) to improve the quality and speed of the measurement task through the use of XR, an enabling technology of Industry 4.0. The torque box is the main component of the vertical stabilizer and consists, in particular, of the assembly of the right and left skins with the main central structure (made of spars and ribs). Laser scanning is adopted to capture the actual position of mating surfaces, where part-to-part gaps often occur and need to be filled with customized shims (Fig. 1B), necessary to guarantee the structural performance of the stabilizer.

Fig. 1. A) Frame of the vertical stabilizer torque box of an airplane; B) Rib-skin coupling and shim inserted in the gap.

Figure 2 shows the whole frame-skin assembly process. Firstly (Parts Positioning), components are positioned on the station according to the layout of Fig. 3, which represents only one side of the workstation to make the description clearer. Once positioned, the coupling areas of the framework and skins are scanned (Parts Scan)


by means of the Leica® Absolute Tracker AT960 [15] scanner bundle, whose acquired data are used for numerically evaluating the exact shim shapes (Virtual Shim Simulator), which are subsequently 3D printed (Shim Fabrication) and then positioned (Shim Positioning). Finally, the skin is assembled onto the frame (Frame-Skin Assembly).

Fig. 2. Frame-skin assembly process.

Fig. 3. Layout of the workstation.

This study focuses on Parts Scan, the second step of the assembly process of Fig. 2. In the current layout (Fig. 3), the Leica Absolute Tracker (LAT) is located at one side of the workstation. Due to the workpiece size, a movable lateral platform allows the worker to scan the top area of the structures. The Leica Absolute Tracker is combined with the Leica Absolute Scanner (LAS) (Fig. 4), providing a high-performance metrology system. The receiving station (LAT) is capable of rotating its head on two axes to directly communicate with the LAS and track the scan coordinates. The communication takes place via antennas (Fig. 4A) located on the LAS surface, whose visual contact with the LAT must not be interrupted. According to technical specifications, the scanning process is interrupted if fewer than 4 antennas are visible. Both the structures to be scanned and the operator’s body contribute to obstructing the scan. Another aspect that contributes to decreasing the quality of the acquisition is the distance between the laser scanner (LAS) and the receiving station (LAT): when the distance exceeds 7 m, the accuracy of the acquisition is poor. Moreover, the Leica Absolute Scanner is characterized by a beam with a 180 mm stand-off and a working range equal to ±40 mm (Fig. 4B). All these aspects require highly trained scanning staff, able to handle all the described issues and reduce the scanning process times.


Fig. 4. A) Leica Absolute Tracker AT960 Scanner Bundle (modified from [16]); B) Leica Absolute Scanner working range.

At present, about 20 h are necessary to complete the workpiece scanning process. Indeed, to reduce the number of communication interruptions and to overcome the issues related to low accuracy, caused by the distance between the laser scanner and the receiving station exceeding 7 m, the worker needs to change the LAT position several times. The need to perform multiple scans also requires their alignment (registration phase), leading to an accumulation of errors even if the scans use common reference elements. This results in a time-consuming process, which justifies the number of hours needed to complete the actual scanning process. To this end, defining the best layout of the workstation, ensuring the minimum number of tracker repositionings or eliminating them altogether, becomes fundamental, as does the development of an effective training tool. In this context, XR can be extremely advantageous.

3 XR App Development

This section focuses on the development of an XR application with a dual function: a tool for testing alternative layout solutions and a tool for training the involved personnel.

3.1 Objectives

From the analysis of the issues described in the previous section, the need emerges for an XR application with two objectives. From a design point of view, the virtual scenario could allow the analysis of multiple design solutions, supporting the designer in the decision-making process. At present, as depicted in Fig. 3, the Leica Absolute Tracker is located on the floor on one side of the workstation. A possible alternative solution could consider the LAT held by a robotic arm at the top center of the assembly station, where it would always be in visual contact with the Leica Absolute Scanner (LAS), at a fairly uniform distance from it. As anticipated, the working tasks require well-trained personnel to obtain a good level of accuracy in the scanning process. So, the XR app could represent an effective training tool for workers.


3.2 App Development

The whole app has been developed with the Unity® graphics engine. The whole scenario has been modeled by importing the 3D models of the workpiece, the assembly jig and the Leica Absolute Tracker scanner bundle (Fig. 4). In order to reproduce the operating behavior, the kinematics of the LAT have been modeled according to the actual device. The LAS is linked to the LAT, which follows the laser scanner position by rotating up/down and right/left (Fig. 5A). Moreover, as depicted in Fig. 5B, a visual connection, not visible to the user, between the tracker (LAT) and the scanner (LAS) antennas has been modeled. In this way, it is possible to alert the user in case of connection interruption (fewer than 4 visible antennas).

Fig. 5. A) Kinematic elements of the Leica Absolute Tracker scanner bundle; B) visual connection between the tracker and the scanner antennas.

Fig. 6. Laser beam reproduction and visual alerts about the distance between the LAS and the scanned surface.

The Leica Absolute Scanner (LAS) scans objects through a laser beam. The beam has been reproduced to give visual feedback about which area of the workpiece is being detected, taking into account possible occlusion areas. Figure 6 shows the visual feedback related to the distance between the laser scanner and the scanned surface, alerting the operator when the distance is outside (too close or too far) the best working range.
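A minimal sketch of this alert logic is given below. It only uses the thresholds stated in the paper (180 mm stand-off with a ±40 mm working range, 7 m maximum LAT-LAS distance, at least 4 visible antennas); the data structure and function names are our illustration, not the actual Unity implementation of the app.

```python
from dataclasses import dataclass

# Thresholds taken from the paper.
STAND_OFF_MM = 180.0
WORKING_RANGE_MM = 40.0
MAX_LAT_LAS_DISTANCE_M = 7.0
MIN_VISIBLE_ANTENNAS = 4

@dataclass
class ScanSample:
    surface_distance_mm: float   # laser scanner to scanned surface
    lat_las_distance_m: float    # laser scanner to receiving station
    visible_antennas: int        # antennas currently seen by the tracker

def evaluate_sample(s: ScanSample) -> list:
    """Return the list of alerts raised by one logged sample."""
    alerts = []
    if s.surface_distance_mm < STAND_OFF_MM - WORKING_RANGE_MM:
        alerts.append("too close to the surface")
    elif s.surface_distance_mm > STAND_OFF_MM + WORKING_RANGE_MM:
        alerts.append("too far from the surface")
    if s.lat_las_distance_m > MAX_LAT_LAS_DISTANCE_M:
        alerts.append("low-resolution acquisition (LAT-LAS distance > 7 m)")
    if s.visible_antennas < MIN_VISIBLE_ANTENNAS:
        alerts.append("connection lost (fewer than 4 visible antennas)")
    return alerts

print(evaluate_sample(ScanSample(235.0, 7.4, 3)))
# -> all three alerts are raised for this (hypothetical) sample
```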


The whole workstation (Fig. 3) has dimensions exceeding the gaming area available at the laboratory (4 m × 2.5 m). So, to allow users to move easily in the scene, a teleport action (Fig. 7A) has been implemented, applying an instantaneous position change within the scene. This action also allows the user to move onto the movable bilateral platform for scanning the top area of the workpiece (Fig. 7B), otherwise not accessible.

Fig. 7. A) Teleport action; B) task execution on the top of the movable platform (in red).

Fig. 8. Set-up of the wearable tracking device.

Finally, a full-body dummy, visible in Fig. 7, has been modeled. It was considered necessary since the human body parts could obstruct the communication between the scanner (LAS) and the tracker (LAT). The dummy is linked to the user via wearable trackers, which allow the user’s movements to be reproduced, realizing a so-called Digital Twin. The kinematics of the dummy have been coded in order to obtain the rotations of certain joints (elbows, knees) by inverse kinematics. Six trackers have been used for tracking the human movements: the HTC Vive Pro Head Mounted Display (head tracker), the two HTC hand controllers (hand trackers), and three HTC Vive trackers located on the pelvis and on the two feet, according to the setup shown in Fig. 8. From the simulation it is possible to obtain the following outputs concerning the scanning-process performance (a minimal sketch of how they can be computed from the logged data follows the list):
– Time needed to complete the task;
– Distance between the laser scanner (LAS) and the tracker (LAT) over time;
– Number of times the distance between LAS and LAT exceeds 7 m, affecting the measurement accuracy;
– Average number of LAS antennas captured by the tracker (LAT);
– Number of times more than 5 antennas are obstructed, interrupting data acquisition;
– Number of times the laser scanner (LAS) is out of the best working range.
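The following sketch, assuming a list of per-sample records like the ScanSample structure introduced earlier and the 4 Hz acquisition rate stated in the paper, shows one way such outputs could be aggregated. For simplicity it counts samples beyond each threshold rather than distinct events, and it is our illustration rather than the application’s actual code.

```python
import statistics

SAMPLE_RATE_HZ = 4  # data acquired at 4 Hz, as stated in the paper

def summarize_log(samples):
    """Aggregate logged ScanSample records into the outputs listed above."""
    n = len(samples)
    return {
        "task_time_min": n / SAMPLE_RATE_HZ / 60.0,
        "avg_lat_las_distance_m": statistics.mean(s.lat_las_distance_m for s in samples),
        "low_resolution_alerts": sum(s.lat_las_distance_m > 7.0 for s in samples),
        "avg_visible_antennas": statistics.mean(s.visible_antennas for s in samples),
        "lost_connections": sum(s.visible_antennas < 4 for s in samples),
        "out_of_working_range": sum(
            abs(s.surface_distance_mm - 180.0) > 40.0 for s in samples
        ),
    }
```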

4 Simulations and Results

Two experimental test sessions have been carried out in the laboratory to achieve the objectives described in Sect. 3.1. During the first, simulations of the whole working task have been carried out in two different scenarios: the current one and the alternative one, in which the tracker (LAT) is held by a robotic arm at the top center of the workpiece. Subsequently, in a second session, a participant experiment demonstrated the effectiveness of the XR app as a training tool.

4.1 XR App for Scenarios Comparison

Figure 9 shows the two tested scenarios: the current one (Fig. 9A) and the alternative one (Fig. 9B), in which the LAT could be held by a robotic arm, as in this case, or positioned on an overhead crane.

Fig. 9. Alternative scenarios: A) current layout (LAT at the side of the workstation); B) alternative layout (LAT at the top center of the workstation).

One of the app developers tested the two scenarios by completing the scan process of the whole workpiece. Table 1 summarizes the results, in terms of output data about the working performance, from the simulation of the whole scanning process carried out in the two alternative scenarios. Data have been acquired at 4 Hz. Even though the working times are not comparable with the real scanning process (about 20 h), since no scan feedback was available, the table shows how the solution with the LAT at the top of the workstation overcomes every limitation of the scanning procedure. The average distance between devices is reduced by about 20%, so it was no longer necessary to move the LAT each time, resulting firstly in a scanning time reduction of more than 30%.


Table 1. Comparison between working performance in two alternative workstation layouts.

                                                   Current layout   Alternative layout   Difference [%]
Time [min]                                                  12.36                 8.57              −31
Average distance between LAT and LAS [m]                     4.55                 3.62              −21
Low resolution alerts (distance exceeding 7 m)                 11                    1              −91
Average number of visible antennas                              7                    9              +28
Lost connections (less than 5 visible antennas)                22                    3              −87

Thanks to the new position of the tracker, the number of disconnections is reduced by almost 90%, the average number of visible antennas increases by 28%, and the number of low-resolution alerts drops to nearly zero.

4.2 XR App for Training

To prove the effectiveness of the XR app as a training tool, a participant experiment has been conducted in the laboratory, aimed at demonstrating how consecutive repetitions of the working task make it possible to reduce the number of errors in its execution. The investigated error concerns the number of times the laser scanner (LAS) operated outside the best working range (see Fig. 4B and Fig. 6). Tests have been conducted at the Department of Engineering of the University of Campania Luigi Vanvitelli and involved 25 volunteers among students and researchers. A protocol has been adopted to make all the test sessions as uniform as possible, according to the following steps:
• Information and consent: the staff gives information about the aim of the tests, as well as an introduction to Extended Reality and the employed equipment. Before asking for consent, participants are informed about safety measures in case of discomfort during the experiment and the possibility of stopping the experiment at any time;
• Preparation of the test session: the staff helps the participant put on the XR equipment;
• Test execution: the participant is asked to simulate the scanning process of a selected area of the workpiece, following the trajectory shown in Fig. 10, for 10 consecutive repetitions with slow and accurate movements. Between consecutive trials, a resting time of 30 s is provided, during which the application is restarted and the participant returns to the starting position. The virtual laser scanner (LAS) is held in the hand via an HTC Vive Controller and the participant has continuous visual feedback about the quality of the scanning, according to Fig. 6;
• Test ending: the staff helps the participant remove the XR equipment;
• Equipment sanitation: before proceeding with the next participant, the equipment is properly sanitized to comply with anti-COVID-19 provisions.


Fig. 10. Trajectory for laser scanning simulation.

All 25 experiments were completed without any kind of problem (e.g. application crash, participant withdrawal, participant’s psycho-physical disorders). The data of each participant, acquired at 4 Hz, have been stored and then analyzed using Microsoft Excel. Figure 11 shows the trend of the average number of errors (i.e. the LAS operating outside the best working range) across the sample of participants for each repetition. It can be observed that most errors relate to the laser scanner beam being too far away from the surface to be scanned (curve in red). However, the tests show that by repeating the simulation several times the number of errors is drastically reduced, tending towards the possibility of defining a real learning curve.

Fig. 11. Trend of the average number of times the LAS operates outside the optimal working range: laser scanner beam too close to the surface (blue); laser scanner beam too far from the surface (red); laser scanner beam outside the optimal working range (yellow).

For the sake of completeness, Fig. 12 depicts the average number of times the LAS operates outside the optimal working range and the standard deviation for each repetition. Standard deviation values do not exceed 1.88, obtained for the third repetition, even though they do not seem to decrease with the number of repetitions. This is probably due to the limited size of the participant sample, which may bring out the different attitudes of each participant towards immersive VR technology.


However, it is possible to deduce that a sample of 25 participants, although not large and not experienced in the scanning tasks, may represent a good basis for defining a future learning protocol.

Fig. 12. Trend of the average and standard deviation of the number of times the LAS operates outside the optimal working range.
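The per-repetition aggregation underlying Fig. 11 and Fig. 12 is straightforward; the sketch below shows one possible way to compute it, using randomly generated placeholder data instead of the actual experimental measurements (which are not published in the paper).

```python
import numpy as np

# errors[p][r] = number of out-of-range events of participant p at repetition r
# (25 participants x 10 repetitions in the experiment described above);
# Poisson placeholder data with a decreasing mean, for illustration only.
rng = np.random.default_rng(0)
errors = rng.poisson(lam=np.linspace(6, 2, 10), size=(25, 10))

mean_per_repetition = errors.mean(axis=0)            # learning curve (Fig. 11 style)
std_per_repetition = errors.std(axis=0, ddof=1)      # spread per repetition (Fig. 12 style)

for r, (m, s) in enumerate(zip(mean_per_repetition, std_per_repetition), start=1):
    print(f"repetition {r}: mean errors = {m:.2f}, std = {s:.2f}")
```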

5 Conclusions

This study proposed the development of an immersive Extended Reality application aimed at simulating the 3D scanning process for large products, providing an effective tool both for training operators and for testing different design scenarios. A series of features have been implemented, such as the kinematics and operating behavior of the laser scanner, teleporting, and the full-body virtual dummy linked to the human via wearable trackers. Although at present the procedure is limited to some activities and does not include feedback on the real scanning conditions that the user can experience, in terms of acquisition time and actions to perform, the executed tests have shown the potential of the developed app to become an effective tool, able to support engineers in experimenting with different design solutions and workers in training for the working tasks. Future developments will address the inclusion of visual feedback on a successful scanning process. Finally, an experimental campaign is planned, to be carried out involving industrial personnel working with laser scanners, to test whether an XR training tool is really effective in the scanning procedure of the real piece for expert users.

References 1. Guo, J., Wang, Q., Park, J.H.: Geometric quality inspection of prefabricated MEP modules with 3D laser scanning. Autom. Constr. 111, 103053 (2020) 2. Binega, E., Yang, L., Sohn, H., Cheng, J.C.P.: Online geometry monitoring during directed energy deposition additive manufacturing using laser line scanning. Precis. Eng. 73, 104–114 (2022) 3. Helle, R.H., Lemu, H.G.: A case study on use of 3D scanning for reverse engineering and quality control. Mater. Today Proc. 45(6), 5255–5262 (2021)


4. Author, F., Author, S.: Title of a proceedings paper. In: Editor, F., Editor, S. (eds.) CONFERENCE 2016, LNCS, vol. 9999, pp. 1–13. Springer, Heidelberg (2016) 5. Romanovich, M., Kuzmenkova, M., Breskich, V., Kulakoc, K.: Using the laser scanning method in the reconstruction of metro stations. Transp. Res. Procedia 54, 819–826 (2021) 6. Che, J., et al.: A novel method for analyzing working performance of milling tools based on reverse engineering. J. Petrol. Sci. Eng. 197, 107987 (2021) 7. Liu, X., Wei, Y., Wu, H., Zhang, T.: Factor analysis of deformation in resistance spot welding of complex steel sheets based on reverse engineering technology and direct finite element analysis. J. Manuf. Process. 57, 79–90 (2020) 8. Yuan, Z., Zeng, H., Xu, W., Qiu, J., Xu, Y., Chen, X.: Reverse engineering for estimation of shear modulus for yarn models in finite element modelling for ballistic impact. Compos. Struct. 274, 114371 (2021) 9. Yang, Y., Jin, Y., Price, M., Abdelal, G., Maropoulos, P., Higgins, C.: Gap volume prediction for aircraft wing assembly. Procedia Manuf. 54, 227–232 (2021) 10. Simonetto, M., Arena, S., Peron, M.: A methodological framework to integrate motion capture system and virtual reality for assembly system 4.0 workplace design. Saf. Sci. 146, 105561 (2022) 11. Pérez, L., Diez, E., Usamentiaga, R., García, D.F.: Industrial robot control and operator training using virtual reality interfaces. Comput. Ind. 109, 114–120 (2019) 12. Joshi, S., et al.: Implementing Virtual Reality technology for safety training in the precast/prestressed concrete industry. Appl. Ergon. 90, 103286 (2021) 13. Gazzotti, S., et al.: Virtual and augmented reality use cases for fusion design engineering. Fus. Eng. Des. 172, 112780 (2021) 14. Dinis, F.M., Sanhudo, L., Martins, J.P., Ramos, N.N.M.: Improving project communication in the architecture, engineering and construction industry: coupling virtual reality and laser scanning. J. Build. Eng. 30, 101287 (2020) 15. Leica Absolute Tracker AT960. https://www.hexagonmi.com/products/laser-tracker-systems/ leica-absolute-tracker-at960. Accessed 25 Jan 2022 16. Leica Absolute Tracker AT960 and Leica Absolute Scanner LAS. https://www.hexagonmi. com/products/laser-tracker-systems/leica-absolute-tracker-at960-scanner-bundle. Aaccessed 25 Jan 2022

From Virtual to Augmented Reality Training System: An IC.IDO-Based Approach

Sara Buonocore1(B), Pietro Giovannoli2, and Giuseppe Di Gironimo1

1 Department of Industrial Engineering (DII), University of Naples Federico II, 80125 Napoli, Italy
[email protected]
2 ESI Italia Srl, 40126 Bologna, Italy

Abstract. This paper proposes a methodology to exploit virtual prototypes for the development of both VR- and AR-based Training systems with the same Software-Hardware architecture. This architecture provides a unique collaborative environment to enable the interaction between several users even if geographically distant, allowing them to switch easily and rapidly between VR, AR and mixed technologies. The proposed Training Systems concern a simple but structured workcycle about the assembly of some Lego bricks. The selected development platform is IC.IDO: a commercial software with an industrial footprint, produced by ESI Group. Although IC.IDO was originally designed for VR and not for AR applications, it proved fully compliant with the requirements of the proposed methodology. The paper first describes the path followed to convert a Training session from VR to AR within IC.IDO. Thanks to the adoption of colour-filtering software such as OBS Studio, the IC.IDO session easily becomes AR-compatible, using a single hardware architecture: the HTC Vive Pro HMD and a VIVE tracker. In this work, the Oculus Rift S has also been employed for the VR applications to demonstrate the system’s compatibility with other VR devices. For both VIVE and Oculus, the Virtual Desktop feature allowed the OBS desktop window to be displayed in the HMD. Furthermore, an innovative solution is proposed: thanks to IC.IDO’s Cooperative Environment, trainer and trainee can cooperate within the same scene using VR and AR simultaneously. Finally, the role of the trainer is discussed, to outline a process of progressive independence for the trainee.

Keywords: Virtual prototype · Virtual training system · Cooperative environment · Virtual reality · Augmented reality

1 Introduction

For over twenty years, VR (Virtual Reality) and AR (Augmented Reality) technologies have been gaining ground in the industrial field for several applications, breaking the barriers of geographical distance and maximizing communication between individuals and company departments [1, 2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1376–1388, 2023. https://doi.org/10.1007/978-3-031-15928-2_120

From Virtual to Augmented Reality Training System

1377

Among the industrial applications, strong interest is found in personnel training. The purpose is to develop a Virtual Training platform that can be properly updated and reduce the effort of imagination required of trainees to understand the content of paper manuals. In addition, the execution of a VR-based Training session is independent of the availability of real objects and work area, avoiding any risk to the users’ safety [3], while an AR-based Training can reduce the pressure on the user to remember the procedures correctly. Despite the advantages, the use of AR technology rather than VR is still far from widespread in industry, especially for training purposes, due to difficulties in both hardware and software aspects [4]. The management of several users appears much easier with VR technology, since AR currently requires more investment to simultaneously connect and track multiple devices that should interact with both the virtual and the physical objects [5, 6]. For this reason, a valid solution could be the use of a single device for both VR and AR applications: a very rare solution. In fact, nowadays, companies are still struggling to find a unique Software-Hardware system that offers the possibility of implementing solutions with both VR and AR technologies, without compromising on the industrial approach (i.e., use of robust software, well integrated with the companies’ PLM, Product Lifecycle Management) [7]. Therefore, two main strategies are frequently adopted: some companies decide to bear the cost of two distinct commercial software packages, one VR- and the other AR-based; others prefer to implement both systems within a unique platform compatible with VR and AR devices, even if not designed for the industrial purpose. Specifically, in the first case, companies experience countless difficulties in exporting the 3D models and the implemented session from one software to the other [8]. On the other hand, the employment of graphics engines (free with some limitations) such as Unity 3D (a cross-platform 3D engine adopted to create 3D games and applications for mobile, desktop, the web, and consoles) and Vuforia (a platform for AR application development, with support for the majority of phones, tablets, and eyewear), which are not designed for industrial purposes, causes a remarkable lengthening of the design phase and the need for personnel highly qualified in coding to adapt the software to the company’s needs.

2 Methodology

The proposed methodology is based on four crucial aspects:
• the reuse of the same 3D models for several Training configurations,
• the coexistence of these configurations within a unique platform,
• the definition of a path for a gradual increase of the trainee’s autonomy,
• the possibility of conducting an AR Training session on-site, with the remote support of the trainer immersed within the simulated virtual environment.

In this work, the software architecture is based on IC.IDO: a virtual immersive environment conceived for Cooperative Design and Design Review activities, offering interesting tools for ergonomics and safety evaluations, as well as the simulation of rigid-body dynamics [9–11]. The choice fell on it since each of the four main points of the proposed methodology found a response in an IC.IDO feature, as described below.

Picking Mode. This functionality enables the reuse of the same 3D models for different training sessions. In fact, the selected platform should allow the creation of copies of every 3D model imported within IC.IDO and the assignment of a different Picking mode to each of them, depending on the specific use of that virtual object. For instance, if the imported object is an “Assembly node” by default, its copy could be defined as a “Simulation node” if it is subjected to a simulation for kinematic or static analysis.

Assisted and Autonomous Modes. In the perspective of a gradual learning process, the “Assisted mode” is intended for a trainee’s first approach, with the constant presence and intervention of the trainer from the Desktop view, while the “Autonomous mode” is designed for a second training phase in which the trainer may be absent and the trainee is the only one able to interact with the session. The IC.IDO software provides two interaction modes with the virtual environment: Operator and Mirror. The first allows operating from both Desktop and Immersive modes (with a Head Mounted Display (HMD) and controllers) simultaneously, while the second excludes the interaction from Desktop, allowing an active experience only to the user who wears the HMD. These IC.IDO interaction modes were used to implement the “Assisted” and “Autonomous” training modes, respectively.

Cooperative Environment. The selected platform should allow cooperation between geographically distant sites, interacting within a common virtual session with the possibility of choosing between VR and AR. The IC.IDO software has a specific module for this purpose called “Cooperate”, based on a Server-Client relationship and proper Internet communication. The Server uploads the entire IC.IDO session to the common environment, while one or more Clients can download it at the beginning of the Cooperative session. To enable the Client connection, only two pieces of information are required: the IP address of the Server (mandatory) and a password (optional).

3 Case Study

The proposed workcycle consists of a brief sequence of assembly and disassembly operations with Lego bricks (Fig. 1). Although it is not directly related to the industrial field, this workcycle is a good representative of a precise procedure, allowing verification of whether the users obtain the required levels of knowledge, skills, and abilities after the training sessions. In fact, the cycle perfectly simulates all the types of actions that users can perform within a VR-AR Training System: move within the physical and/or virtual environment, interact with physical and/or virtual content and other users in the physical and/or virtual world, choose one object over others, grab and drag the object, and mount it in the correct position.


Fig. 1. Reproduction of the workcycle: a. Remove and place the two Grey Bricks; b. Pick and mount the two White Bricks, c. Mount the Special Grey Brick and then the Red Brick.

4 Experimental Setup

4.1 Hardware Architecture

The hardware architecture required for the VR and AR configurations is the following:

VR Configuration. A workstation with a high-performance graphics card (minimum NVIDIA GeForce line with 6 GB of dedicated memory) and an HMD for VR.

AR Configuration. Starting from the VR configuration requirements, two additional aspects must be outlined. First, the selected HMD must necessarily be an HTC Vive device. In fact, the implementation of the AR configuration is based on exploiting this HMD’s dual front-camera system, which has been employed for the real-time video capture of the real environment. In addition, at least one Vive Tracker must be introduced as a tool for quick system calibration at each new start (Sect. 5.1).

The specific hardware architectures employed for the present case are described below:

Workstation 1: VR. A laptop equipped with a dedicated NVIDIA RTX 3080 graphics card has been connected to the “Oculus Rift S”.

Workstation 2: AR. A workstation equipped with a dedicated NVIDIA Quadro M6000 graphics card has been connected to the “HTC Vive Pro”. As previously outlined, this AR configuration is currently not compatible with Oculus devices, since they do not allow their cameras to be used as a video capture tool. Finally, a second 19-in. monitor was added to the trainer’s workstation as a visualization support tool.

4.2 Software Architecture

From the software point of view, three fundamental instruments were employed:


• the IC.IDO software and an available license on both PCs;
• the Oculus and Steam VR apps, respectively on the PCs for the VR and AR configurations. These apps allow IC.IDO to be opened in Immersive mode and provide the Virtual Desktop feature (which allows all the windows of the PC desktop to be managed in immersive mode, displaying them in the HMD for a more engaging user experience) to display the OBS desktop window in the HMD;
• the OBS Studio program on the AR workstation. It is a free tool for online teaching and video editing. Figure 2 shows how OBS was exploited to superimpose IC.IDO’s digital content on the real-time video capture of the Vive cameras. To obtain the AR session, a “Chroma key” filter must be added to the IC.IDO window: only a part of the virtual content is visible, since OBS can filter the whole scale of one or more customizable colours. Therefore, the user, who is wearing the HMD, will see the OBS window, which is filtering the IC.IDO session.

Fig. 2. Left: OBS mixes the colour-filtered IC.IDO session with the HMD’s real-time video capture. Right: the Virtual Desktop feature allows the OBS desktop window to be seen inside the HMD.
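The colour-keying idea behind this setup can be sketched in a few lines. The following is a generic illustration of chroma-key compositing, not OBS’s actual filter code: pixels of the rendered frame close to the key colour are treated as transparent, so only the remaining virtual content is overlaid on the camera image.

```python
import numpy as np

def chroma_key_composite(render_rgb: np.ndarray, camera_rgb: np.ndarray,
                         key_color=(0, 0, 255), tolerance=60) -> np.ndarray:
    """Overlay a rendered frame onto a camera frame, treating pixels close to
    the key colour as transparent (simplified chroma-key filtering).

    render_rgb, camera_rgb: HxWx3 uint8 arrays of the same size.
    """
    diff = render_rgb.astype(int) - np.array(key_color, dtype=int)
    is_key = np.linalg.norm(diff, axis=-1) < tolerance   # True where the render is "background"
    out = camera_rgb.copy()
    out[~is_key] = render_rgb[~is_key]                   # keep only non-keyed virtual content
    return out

# Example with synthetic frames: a blue (keyed-out) render with a white square
camera = np.full((240, 320, 3), 90, dtype=np.uint8)      # stand-in for the Vive camera feed
render = np.zeros((240, 320, 3), dtype=np.uint8)
render[..., 2] = 255                                     # blue background to be filtered out
render[100:140, 140:180] = 255                           # virtual content to keep
composite = chroma_key_composite(render, camera)
```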

5 Implementation of the Training Sessions

Import the Models. The first action of the implementation workflow was the import of all the 3D models within IC.IDO: the work area elements, the items for the assembly procedure, and the digital content for the instructions (3D texts, buttons, and arrows).

Organise the Objects’ Hierarchy. The models have been organised into a hierarchical structure and the specific “picking modes” have been assigned. In fact, all the elements of the work area and the items have been defined as “Assembly nodes”, while the digital instructions and the buttons “Go back” and “Next” were set as “Group nodes”.

Define Objects’ Properties. Finally, aesthetic and geometric characteristics have been defined for all the objects: colours, visibility, size, and initial position.

Offline Animations. The following step was the implementation of the instruction sequence: first, the offline animations, such as the items’ movements and visibility changes, were created; then, they were divided into the respective folders.

Events Management. Every folder was associated with the related event: the selection of a button with a specific “picking mode” activates the corresponding animations.

Creation of the State. Finally, the implementation of the VR session has been completed with the creation of the respective State for its activation.

5.1 The Conversion to the AR Session

With reference to the implementation of the VR session previously described, only the objects’ properties and the events management have been modified for the AR one. All the models related to the virtual work area were made invisible, leaving the digital instructions, buttons, arrows and Lego bricks visible. For the events management, the animations employed for the VR system have been reused to activate the same instructions. The only difference consisted in the objects’ “picking mode”, which was set to “Assembly node” in this case. The main aspect of this configuration is the use of IC.IDO’s “Tracking Editor”, which allows the digital and real environments to be calibrated using the VIVE tracker (Fig. 3). The Tracking Editor is a node that includes all the objects that must be tracked: in this case, the Vive Tracker mentioned in Sect. 4.1 stayed in a fixed position to track a reference frame for the worktable and Lego bricks. At every start of IC.IDO, the Steam VR app detects the tracker and the virtual work area simulated within IC.IDO appears as a digital copy of the real one (Fig. 4). However, it should be noted that the tracker can also be attached to a moving object to detect its position and orientation. Finally, the respective “AR” State was created and ready to be tested.

5.2 The VR-AR Cooperative Session

Since the last Training system involves both VR and AR technologies, only the colours and visibility of the digital content have been modified. Starting from the VR session, the objects that must be invisible to the trainee were turned to a blue scale of colours (the colour is customizable). The blue colour-filtering process of the IC.IDO session is entrusted to the OBS software, simultaneously allowing a fully immersive experience for the trainer and the overlay of digital content on the real work area for the trainee. Finally, the “VR-AR” State was created; the activation of the Cooperative Environment by the “Server” user and the corresponding “Client” user connection must be executed directly at the start of the training session. Only for the Cooperative sessions, a collaboration platform for online meetings was necessary to share the OBS window with the other workstation. In this case, TeamViewer, a software for remote access and control of several devices, was used.


Fig. 3. a. The “Tracking Editor” node in IC.IDO associates the tracker to the objects (worktable and Lego Bricks) fixed position and orientation; b. Steam VR app recognizes the presence of the tracker

Fig. 4. a. The tracker is put in a fixed position and orientation within the real work area; b. The tracker’s digital copy is displayed in the same position and orientation within IC.IDO
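The calibration described in Sect. 5.1 essentially anchors the virtual work area to the pose reported for the fixed tracker. The sketch below illustrates this rigid-transform idea with placeholder values; it is our simplified illustration, not IC.IDO’s internal implementation.

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Fixed offset from the tracker to the worktable origin, measured once
# (values here are placeholders, not the actual setup measurements).
TRACKER_TO_TABLE = pose_matrix(np.eye(3), np.array([0.30, 0.0, -0.75]))

def worktable_world_pose(tracker_world_pose: np.ndarray) -> np.ndarray:
    """Given the tracker pose reported by the tracking system (world frame),
    return the pose at which the virtual worktable must be placed so that it
    overlaps the real one."""
    return tracker_world_pose @ TRACKER_TO_TABLE

# Example: tracker detected 1.2 m in front of the origin, no rotation
tracker_pose = pose_matrix(np.eye(3), np.array([0.0, 1.2, 0.0]))
print(worktable_world_pose(tracker_pose))
```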

6 The Proposed Training Systems

In this section, we discuss the implemented Training systems based on VR and AR.

6.1 Assisted Virtual Training

The first step of the trainee's learning process is the full immersion within the virtual environment, supported by the presence of the trainer from the Desktop view. In this case, as shown in Fig. 5, the trainee wears the HMD and starts familiarizing with the simulated environment and operations. The trainee conducts the tasks by interacting with the virtual items, while the trainer is the only one capable of managing the sequence of instructions by interacting with the virtual buttons "Go back" and "Next".


6.2 Autonomous Virtual Training

Once the trainee has become confident enough within the virtual environment, it is necessary to put into practice what he/she has learnt. Compared with the previous phase, the trainer's presence is no longer necessary, since the trainee should now be capable of conducting the tasks autonomously (Fig. 6). However, in case of doubt, the "Go back" option allows any instruction to be repeated.

6.3 Assisted Augmented Training

For the AR configuration, the virtual content is employed as additional information overlapped on the real environment. In fact, the digital content consists only of the buttons to manage the instructions and of visual signals such as arrows or translucent items that highlight a specific zone of the work area or step of the procedure. The trainer's duty is to verify the correct execution of the tasks, interacting with the IC.IDO session in Desktop view and monitoring the trainee's actions on the second monitor showing the AR view (Fig. 7). The trainee therefore wears the HMD only as an immersive visualization instrument, without using its controllers, and executes the tasks within the real world.

Fig. 5. Assisted Virtual Training: a. The trainee conducts the procedures in first person within the virtual environment; b. The trainee’s point of view; c. The trainer’s point of view in Desktop mode


Fig. 6. Autonomous Virtual Training: a. The trainee performs autonomously the tasks and manages the sequence of virtual instructions without external interferences; b. The trainee’s point of view

Fig. 7. Assisted Augmented Training: the trainer supervises the trainee’s execution of the tasks, using two monitors; a. The first monitor displays the IC.IDO session in Desktop mode; b. The second monitor shows the trainee’s point of view, displaying the work area video capture with the IC.IDO digital content overlapped on it; c. Overview of trainer and trainee physically present in the same site

6.4 Autonomous Augmented Training

The second step of the AR-based training process is characterized by the absence of the trainer. Compared with the previous AR configuration, the trainee now uses one VIVE controller to move personally to the next or previous instruction. Once the trainee has become confident enough with the procedures, he/she is capable of autonomously monitoring the ongoing real tasks and managing the sequence of digital instructions at the same time (Fig. 8). However, in case of doubt, the "Go back" option allows a specific instruction to be repeated.


Fig. 8. Autonomous Augmented Training: a. The trainee conducts the AR Training session autonomously; b. The trainee’s point of view, showing the use of VIVE controller in the right hand to interact with the digital buttons and manage the sequence of instructions

6.5 Assisted Cooperative Training

IC.IDO's Cooperative Environment is exploited for the realization of this mixed VR-AR Training configuration. Despite the geographical distance, trainer and trainee are able to interact within the same platform. The trainer, who is immersed in a fully virtual environment that simulates the real work area, can easily show the assembly procedure to the trainee by selecting and moving all the virtual items. On the other hand,

Fig. 9. Assisted Cooperative Training: the trainee is the only physically present in the work area. a. The trainer explains the procedures from a remote site, immersed within the simulated virtual environment; b. The trainee executes the tasks on-site, watching the digital content (digital items pointed and highlighted by the trainer) overlapped on the real work area


the trainee operates on the real environment supported by the view of the AR content, which consists of the trainer's same IC.IDO session with a visibility filter on the digital content, since only part of the 3D models are visible to the trainee (Fig. 9). The strength of this configuration is the realization of a cooperation between a VR experience (trainer) and an AR experience (trainee), obtained only by colour-filtering the same virtual platform and exploiting the Virtual Desktop feature of most of the VR-AR devices on the market.

6.6 Autonomous Cooperative Training

The trainee interacts with both the real content (environment and items) and the virtual content (digital instructions) via the VIVE controller. Thanks to the Cooperative Environment, it is moreover possible to connect one or more users from a remote site to assist, control, and intervene in the training session. The word "trainer" used in Fig. 10 is an example of a possible user remotely connected through the IC.IDO Cooperative Environment. In fact, this configuration also paves the way to the introduction of a novice trainee, instead of the trainer, who may follow the ongoing tasks executed by the expert trainee. In this way, the novice can visualize the IC.IDO session without interacting with it, starting the learning process with a first familiarization with the assembly procedures.

Fig. 10. Autonomous Cooperative Training: the trainer is not physically in the work area; a. The trainer visualizes IC.IDO session in Desktop mode and the trainee’s point of view on a second monitor; b. The trainee executes the tasks, with digital content (3D text and buttons) overlapped on real work area

In the light of the above, Table 1 gives a brief overview of the differences and commonalities of the implemented systems. The star symbol for the Oculus Rift S indicates that its use is currently limited to the VR applications, excluding the AR and Cooperative ones.


Table 1. Summary of the VR and AR configurations' main characteristics. For each configuration (Assisted VR, Autonomous VR, Assisted AR, Autonomous AR, Assisted Cooperative, Autonomous Cooperative), the table reports which role (trainer or trainee) activates the digital content, mounts the real items, manages the instructions, and checks the task execution, together with the hardware compatibility; the green and orange boxes in the original table indicate that the user interacts from Desktop and Immersive mode (HMD), respectively. The HTC Vive Pro is compatible with all the configurations, while the Oculus Rift S (*) is limited to the VR ones.

7 Conclusion and Future Works

The proposed methodology provides a valid approach to exploit the same 3D models for the implementation of VR and AR Training systems based on a unique Software-Hardware architecture. Furthermore, the possibility of creating an easy-to-use procedure even with software such as IC.IDO and a device such as the HTC VIVE Pro, which are not strictly designed for AR applications, has been demonstrated. Currently, the main ideas for future improvements of the proposed system are:
• the realization of tests with participants mixed in terms of gender and age on the long-term use fatigue of the described HW architecture in industrial contexts,
• the employment of devices to track the items' movements for AR applications,
• the evaluation of the compatibility of this system with other VR-AR devices.

References 1. Boud, A.C., Haniff, D.J., Baber, C., Steiner, S.J.: Virtual reality and augmented reality as a training tool for assembly tasks. In: 1999 IEEE International Conference on Information Visualization, pp. 32–36 (1999) 2. Di Gironimo, G., Lanzotti, A.: Designing in VR. Int. J. Interact. Des. Manuf. (IJIDeM) 3(2), 51–53 (2009)


3. Büttner, S., et al.: The design space of augmented and virtual reality applications for assistive environments in manufacturing: a visual approach. In: 10th International Conference on Pervasive Technologies Related to Assistive Environments, New York, NY, USA, pp. 433–440 (2017) 4. Makris, S., Pintzos, G., Rentzos, L., Chryssolouris, G.: Assembly support using AR technology based on automatic sequence generation. CIRP Ann. 62, 9–12 (2013) 5. Hořejší, P.: Augmented reality system for virtual training of parts assembly. Procedia Eng. 100, 699–706 (2015) 6. Gattullo, M., Evangelista, A., Uva, A.E., Fiorentino, M., Gabbard, J.L.: What, how, and why are visual assets used in industrial augmented reality? A systematic review and classification in maintenance, assembly, and training (from 1997 to 2019). IEEE Trans. Vis. Comput. Graph. 28(2), 1443–1456 (2022) 7. Di Gironimo, G., Buonocore, S., Fariello, A., Carpentiero, F., Lanza, M.R., Tarallo, A.: Systems engineering approach for the development of a virtual training platform: case study in the missile systems sector. In: Paolis, L.T.D., Arpaia, P., Bourdot, P. (eds.) AVR 2021. LNCS, vol. 12980, pp. 655–669. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87595-4_47 8. Lorenz, M., Spranger, M., Riedel, T., Pürzel, F., Wittstock, V., Klimant, P.: CAD to VR – a methodology for the automated conversion of kinematic CAD models to virtual reality. Procedia CIRP 41, 358–363 (2016) 9. Giannuzzi, M., Papadia, G., Pascarelli, C.: IC.IDO as a tool for displaying machining processes. The logic interface between computer-aided-manufacturing and virtual reality. Procedia CIRP 88, 145–150 (2020) 10. Rueckert, A., Niemann, M., Kam, E.: Human-centric assembly cell & line validation. In: 6th International Digital Human Modeling Symposium. Advances in Transdisciplinary Engineering, vol. 11, pp. 279–287 (2020) 11. Tůma, Z., Kotek, L., Kroupa, J., Blecha, P., Bradáč, F.: Possibility of automated ergonomic evaluations in virtual reality environments. In: 30th European Safety and Reliability Conference, ESREL 2020 and 15th Probabilistic Safety Assessment and Management Conference, PSAM, Venice, pp. 4980–4984 (2020)

An AR Tool to Support Warehouse Operations in the Context of Industry 4.0

Edoardo Casciotta1, Riccardo Karim Khamaisi1, Roberto Raffaeli2(B), and Margherita Peruzzini1

1 INTERMECH-DIEF, University of Modena and Reggio Emilia, 41125 Modena, Italy
2 INTERMECH-DISMI, University of Modena and Reggio Emilia, 41121 Reggio Emilia, Italy
[email protected]

Abstract. Augmented Reality (AR) is one of the leading technologies of the Industry 4.0 revolution, offering innovative interfaces to promote the diffusion of digital contents into industrial processes, thanks to flexible and robust solutions and cost-effective devices. In this context, this paper explores the adoption of AR in industrial logistics, where several open issues still discourage its effective use in everyday scenarios. After a review of objectives, approaches, and techniques of AR integration in logistics operations, the paper presents a framework to identify goods in a warehouse, retrieve data relative to each package, and display information to the user to drive operations. The approach aims at easing and speeding up the activity of the warehouseman in identifying goods, checking the relative information, and putting each good on the correct shelf. A prototypal application was developed within the Unity platform and integrated with the company ERP system to manage data on the products and retrieve images of the identification labels. A real use case involving a primary company producing agricultural tractors is proposed to test the usability of the prototype. Results showed that the developed application brings relevant benefits in terms of process effectiveness and error prevention, while reducing the operator's mental workload. Keywords: Augmented Reality · Logistics · Warehouse operations · Industry 4.0

1 Introduction

Since the introduction of the Industry 4.0 concept and the identification of its implications for the modern Operator 4.0 [1], Augmented Reality (AR) has been established as one of its nine pillars, concerned with the display of augmented contents in the real world through a wide range of devices, such as a mobile phone, a tablet, or special eyeglasses. Several AR applications have been proposed in the manufacturing industry [2–4] for different purposes: for training sessions, in order to speed up the internalization of working or safety protocols or to accelerate equipment recovery on the plant floor; for on-time process supervision, by displaying digital information to increase operator awareness of the process itself or to guide quality-check operations with product manufacturing information (PMI) derived from Computer-Aided Design (CAD) models [5]; in maintenance


tasks, for the identification of potential risks and the machine's service history or to preventatively schedule maintenance interventions; for assembly or disassembly; and in logistics operations, to progressively shift from manual checks of orders or shipments to an automated system, thus reducing human errors and saving time and resources. Among the several applications, and despite the broad possibilities offered by such technology, scarce attention is still given to the user perspective, raising several questions related to system adaptability, the interface's architecture, and the technology's capacity to be effectively integrated within the productive system in the least intrusive way possible for the operator from an ergonomic point of view [6]. Analogously, multiple boundary conditions must be considered when implementing AR solutions for an industrial environment: operator working conditions and available equipment, working-area information in relation to illumination and machinery, task requirements, and user technology acceptance level. It thus becomes clear that the development of smart assistive solutions is transversal to informatics and requires a deep understanding of the overall system to be augmented in order to accelerate the industrial shift to a technologically efficient and flexible system. In particular, this paper analyzes the impact of an AR application to simplify logistic operations (order picking and sorting) in collaboration with a primary company involved in the design and production of agricultural tractors, which provides the test case. The paper is organized as follows. Section 2 provides a general overview of AR as an Industry 4.0 enabling tool, tackling open issues in logistics (device ergonomics, scalability of the proposed solutions, and tracking instabilities). Section 3 describes the methodological approach to implement the final solution, considering prerequisites and boundary conditions. Section 4 deals with the specific use case, explaining the reason behind each design choice. Section 5 eventually proposes a brief description of the preliminary tests' results.

2 Background of AR Technology in Logistics

AR is a technology aiming to augment the surrounding world with digital information [7]: examples of augmentation are the provision of auditory information to give advice to the operator at the shop-floor level, or of "smart" haptic feedback. The first case suits quiet environments where there is no need to interfere with the field of view; on the contrary, the second case is related to highly noisy environments, where the attention of the user must be promptly caught while performing specific tasks. In framing AR, it is important to list three of its properties [8]:
1. combination of reality and virtuality,
2. on-time interaction,
3. three-dimensional registration.
In [9] the authors summarized the previous concepts, coining the term reality-virtuality continuum. By now, AR is attracting increasing interest in the digital factory to support several processes, such as assembly, maintenance, training, and inspection. Even if AR has not shown its full potential yet, this technology is already playing a decisive role throughout the supply chain, driving towards the "next big wave of change" [10], especially in logistics.


Logistics is fundamentally concerned with "managing the upstream and downstream relationship with exchange partners to deliver customer value at the least cost to the chain as a whole" [11]: a well-organized and competent logistic system is strategic for the company to achieve and maintain a competitive advantage. This has to be associated with the fact that logistics costs account for a significant portion of the overall company supply chain expenditure, often exceeding 10% of business. In [12] the necessity of a simplification of logistics activities is advocated, with the aim of reducing the error rate of object selection and the decision-making time, and of making operations more humane where humans are involved. Modern technologies can help improve those processes by tackling stressful situations and depressing warehouse worker routines. In [12] the authors stated that AR can play a consistent role in making the daily tasks of logistics pickers less stressful and simpler, suggesting functional solutions to the order-picking problems and supporting the process when new issues arise during the picker's route. For example, a combined use of AR and Radio Frequency Identification (RFID) for an outdoor logistics application is reported in [14], introducing the idea of enhancing AR capabilities with further available technologies if precise boundary conditions are set. Above all, in the last years the variety of products released on the market has grown exponentially. This has direct consequences on the attention that operators have to pay during the pick-up phase in the warehouse. On this side, AR can help workers keep up with the daily tasks by reaching the right product rapidly and efficiently: a decrease in the number of errors during the delivery phase could be achieved, affecting in the long term the economic performance of the company [15]. On the other side, system complexity should go hand in hand with ergonomics considerations, involving the user's physical and cognitive demand in tackling diverse tasks. During the pick-up phase, for example, currently developed AR systems are more focused on displaying on screen the maximum amount of information regarding the item's generalities in reference to the warehouse context, rather than on designing an efficient working interface to speed up the process. Furthermore, great attention is still paid to wearable devices like smart glasses (e.g., Microsoft HoloLens, Google Glass), which bring forward relevant ergonomics considerations in terms of a foreseen daily use by the user (e.g., the Microsoft HoloLens net weight is around 580 g). The market inaccessibility and the technological limitations of such devices have prevented companies from implementing a large-scale equipment strategy to provide employees with modern tools. On the other hand, smart glasses free the operator's hands compared to alternative solutions implemented on tablets or similar devices. The aim of the paper is to critically report and discuss the development of a mobile application conceived for everyday use with devices accessible to a wider public, such as smartphones or tablets. This choice was made in accordance with the greater opportunity offered in the development of a more complete interface that considers the operator's needs. Smart glasses' interfaces often limit the user's field of view when showing pick-up lists or guiding the operator along the process, thus overloading the user's mental effort and creating a sense of discomfort [16].
Difficulties usually arise with regard to the scalability of AR applications: diverse operating systems and hardware specifications hinder an equally functional augmented experience across devices. Moreover, software and hardware


prerequisites of current AR libraries (i.e., PTC Vuforia, ARCore, ARKit, Wikitude, etc.) represent another limit at the level of the hardware choice. Finally, each tracking algorithm presents boundary conditions (such as optimal illumination, high hardware specifications, or specific CAD format integration) which need to be satisfied for a successful fruition of the experience. The correct choice should hence pursue the technology which offers consistent matching and tracking according to the task's analysis and requirements.

3 Research Approach

The research approach adopted in the study follows a user-centered design cycle, starting from the context analysis and then the user analysis, in order to properly define the most suitable technological set-up, and finally concluding with user testing on a relevant use case to conduct a reliable UX assessment (Fig. 1).

Fig. 1. The methodological approach adopted to implement the proposed solution (Logistics Organization Analysis → User Research → Definition of Technological Setup → Use Case Testing → UX Assessment).

An initial analysis of the as-is process is conducted, involving work organization in its broader meaning: the current work distribution and the general system structure; the tasks' sequence, description, and requirements; and the personnel's equipment and involvement, investigated through team-building sessions and the analysis of work documentation, images, and videos.


On a more specific level, the task's impact on the user can then be determined through NASA-TLX questionnaires [17] and operator interviews, from which the strengths and weaknesses of the present approach are inferred. User suggestions and experience are considered relevant input from which to start implementing the proposed AR solution: the functionalities required by the user, the task's cognitive demand, and the actual limitations are thus explored. On this basis, a picture of the package's label will be presented in the following section, explaining each of its related features. This information is embedded in the final solution and graphically re-designed to be more interactive for the user. From the depicted scenario, a hardware and software review is performed in order to identify the best technological solution: devices are screened according to the foundational functionalities, and the tracking algorithm to adopt is selected relative to the chosen AR library. On this last point, several development toolkits are available on the market (e.g., Wikitude SDK, Vuforia, ARCore, ARKit, VisionLib), inevitably leading to a performance comparison in terms of tracking stability, software sustainability, and feature availability with respect to net cost, workflow scalability and flexibility, and finally development time. Once everything is set up, a decisive testing session is carried out and a conclusive UX assessment is worked out to double-check the application's effectiveness. Mental workload, tool ergonomics, interface usability, and the technology acceptance level [18] are the main variables investigated in this phase.

4 Use Case

The use case has been developed in collaboration with CNH Industrial [19], focusing on warehouse logistics. It was defined with the company to overcome the current problems in their logistics, analyzed through the observation of the main critical issues in the distribution phase within the company's warehouse. The research aims at demonstrating how AR can help the company organization and the efficiency of the logistics process in the context of Industry 4.0. This section presents the overall system architecture, as shown in Fig. 2. The AR application was built within the Unity platform, integrating the PTC Vuforia Engine software development kit (SDK) for the implementation of the AR environment. Vuforia offers several tracking algorithms conceived for different types of AR experience. The tracking stability and scalability of this software library were considered discriminant factors for the choice of the best library to adopt. Furthermore, the portability of the designed application was considered a must-have characteristic, since for a future industrial implementation the solution will be distributed through the usual app store platforms available on personal devices. According to the information gathered during the first stage of the presented methodology, the "Image Target" algorithm was chosen in order to recognize the package label.

Fig. 2. System architecture in relation to the company supply chain organization (Suppliers → Purchasing → Production → Distribution → Customers); the ERP provides package information, customer information, and item numbers, and package tracking information is exchanged with the AR application.

This choice was made to implement a Proof of Concept of the described solution, which will then be ported to more reliable tracking technologies, such as QR code detection or the design of proper Vuforia VuMarks. In fact, such methods would require changing the company's logistical procedures, switching from the actual package barcodes to VuMark ones, thus going beyond the aims of this research, which is focused on the benefits deriving from the conscious adoption of AR within the company warehouse. The operation consists in the detection of an image by the Vuforia Engine, comparing the natural features extracted from the camera image against the ones stored in a known target resource database. First, the label image of each package and all its related metadata, in the form of XML (eXtensible Markup Language) files, are retrieved from the Enterprise Resource Planning (ERP) software to be subsequently uploaded to the Vuforia database.

Fig. 3. AR application embedded functionalities: the Pick up phase (details: warehouse part description and picking location) and the Sort phase (details: warehouse part description, customer code, and amount, i.e. number of scans).

Then, once the Image Target is correctly detected, the AR application reads the corresponding .xml file to display the package's information and augment the content. XML is a metalanguage that allows the definition of customized markups,


simplifying the management and retrieval of specific data thanks to its easily accessible tree structure. According to Fig. 3, the use case is divided into two sections, the left one regarding the initial steps ("Pick up"), during which the operator needs to reach the package in the warehouse. The operator then has to scan the label with the AR application, obtaining a generic description of the part to pick and a detailed description of the current packing location. The picker reads an 11-digit code whose details concern (a minimal parsing sketch is given after this list):
1. the exact packing location within the warehouse (first 3 digits),
2. the number of the warehouse lane where the item is placed (digits 4–5),
3. the span, consisting of 3 packings (digits 6–7),
4. the level or shelf floor, from B (first ground floor) to L (top floor) (digit 8),
5. the packing position within the span (digits 9–10–11).
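The following C# fragment sketches how the 11-digit location code could be decomposed and how the package metadata could be read from the ERP-exported XML file. It is only an illustration under stated assumptions: the XML element names (Package, PartDescription, CustomerCode, Amount) are hypothetical, since the paper does not report the actual schema, while the digit positions follow the list above.

```csharp
using System.Xml.Linq;

// Hypothetical record for the fields extracted from an 11-digit picking code,
// e.g. "06A4113C001" -> warehouse "06A", lane "41", span "13", floor 'C', position "001".
public record PickingLocation(string Warehouse, string Lane, string Span, char Floor, string Position);

public static class LabelData
{
    public static PickingLocation ParseLocation(string code)
    {
        // Digit positions taken from the description in the paper.
        return new PickingLocation(
            Warehouse: code.Substring(0, 3),
            Lane:      code.Substring(3, 2),
            Span:      code.Substring(5, 2),
            Floor:     code[7],
            Position:  code.Substring(8, 3));
    }

    // Reads package metadata from the ERP-exported XML file.
    // Element names are assumptions, not the company's real schema.
    public static (string part, string customer, int amount) ReadPackage(string xmlPath)
    {
        XElement package = XDocument.Load(xmlPath).Element("Package");
        return (
            (string)package.Element("PartDescription"),
            (string)package.Element("CustomerCode"),
            (int)package.Element("Amount"));
    }
}
```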

The current paper label used for the picking operations is presented in Fig. 4. The study focused on removing secondary information to make the picking and shipping tasks clearer for the user, as shown in Fig. 5.

Fig. 4. Package label with the relative description of the main features (identification code for the packing location, package quantity, customer and part description, part number, and identification code for the picking location).


Fig. 5. Redesigned label for the AR environment: (a) Pick up interface, showing the warehouse part description (e.g., DISCO FRIZIONE) and the picking location code (e.g., 06A4113C001, decomposed into warehouse 06A, lane 41, span 13, floor C on the B-to-L scale, and box position 001 within the span); (b) Sort interface, showing the warehouse part description, the customer code (e.g., T.T.003), and the amount (if the amount is 5 or less, the item is scanned once per unit; if it is greater than 5, the item is scanned only once).

Fig. 6. Graphical user interface of the developed app: (a) Log-in page. (b) Activity selection panel (PICK UP / SORT).


Once the operator reaches the package, the shipping phase can start. This step is described in the right section of the scheme, called "Sort phase", during which the operator scans the label again with the AR application, obtaining additional information about where the package must be shipped thanks to the customer code (Fig. 6). Furthermore, it is still possible to read a brief description of the selected package component and the item amount within the package.

Fig. 7. Operator during the sorting phase.

Ultimately, the operator reports the successful package delivery through a specific application functionality and proceeds to the next label: the following label is then downloaded from the ERP system according to the on-time evolution of the daily work objectives (Fig. 7). In this way, the interoperability and interconnection of the single user with the entire supply chain are guaranteed, providing continuous tracking of the resources allocated in the warehouse.

5 Study Results

The preliminary testing campaign involved 14 users (8 male and 6 female operators), aged between 24 and 35. All operators already work in the logistics sector and have a solid knowledge of the specific task and previous experience with mobile devices. Prior to testing, a briefing session was planned to instruct users on how to operate the AR application, explaining each embedded functionality. A 10-minute warm-up session was organized for each user to freely navigate the proposed solution and familiarize with the tool. Each user performed a sequence of picking and sorting operations both supported by the AR application and in ordinary working conditions (each sequence represents a single session) on 5 different packages, for a total of 10 sessions per person. Each operator underwent the same sequence of operations on identical packages. Task times were recorded for each subject, while a UX expert observed the user and collected data on comments, number of errors committed, and requests for assistance. At the end of each session, a NASA-TLX questionnaire [17] was administered to the user to evaluate the subjective impression of the effort required; a final area of the questionnaire was reserved for user considerations.
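As a reminder of how a NASA-TLX score is typically aggregated (this is the standard weighted procedure from [17], sketched here for illustration; it is not code from the study, and the example numbers are invented), each of the six subscales is rated on a 0–100 scale and weighted by the number of times it is chosen in the 15 pairwise comparisons:

```csharp
using System;
using System.Linq;

public static class NasaTlx
{
    // ratings: six subscale ratings on a 0-100 scale
    // (Mental, Physical, Temporal demand, Performance, Effort, Frustration).
    // tallies: how many of the 15 pairwise comparisons each subscale won.
    public static double WeightedScore(double[] ratings, int[] tallies)
    {
        if (ratings.Length != 6 || tallies.Length != 6 || tallies.Sum() != 15)
            throw new ArgumentException("Six subscales and 15 pairwise comparisons are expected.");

        // Overall workload = sum(rating_i * weight_i) / 15.
        return ratings.Zip(tallies, (r, w) => r * w).Sum() / 15.0;
    }
}

// Example (invented numbers): ratings {55, 20, 60, 30, 50, 25} with tallies {4, 1, 4, 2, 3, 1}
// give an overall workload of (55*4 + 20*1 + 60*4 + 30*2 + 50*3 + 25*1) / 15 ≈ 47.7.
```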



As emerged from the analysis of the results, a widespread consensus was reached on the effectiveness of the AR solution in supporting logistical tasks where the human operator plays a fundamental role. The usability and organization of the proposed AR interfaces and the clear understanding of the task requirements were highly appreciated by all the users, as highlighted by the NASA-TLX outcome. Several comments suggest specific improvements to the graphical user interface (GUI), strictly connected to the implementation of other useful functionalities, such as a time record of all the packages processed by the operator during the day. Tracking stability issues in extreme environmental conditions were reported too, due to the high variability of light exposure at the shop-floor level. Other comments address application bugs in loading the target information or application slowdowns when tracking occurs, due to different hardware specifications.

Fig. 8. Operator (Op1–Op14) execution times in minutes for the picking and sorting tasks with and without the support of the proposed solution (Picking, Sorting, Picking-AR assisted, Sorting-AR assisted).

Fig. 9. NASA-TLX questionnaire results referring to picking and sorting operations completed with standard procedures.


Fig. 10. NASA-TLX questionnaire results referring to picking and sorting operations completed with the aid of the AR support.

As a demonstration of the achieved results, Fig. 8 reports the tasks' average completion times computed for the 5 sessions performed with the AR support and for the ones performed in ordinary conditions, differentiating the computation between the sorting and picking tasks. Furthermore, the NASA-TLX questionnaire results concerning a full session are shown in Figs. 9 and 10, highlighting the differences in the user-perceived workload with and without the AR tool.

6 Conclusions

The proposed solution demonstrates that the application of AR in logistics offers several cues for discussion, both in terms of overall system efficiency and of user interaction optimization, in the framework of the Industry 4.0 digitalization process. A complete and reliable UX assessment procedure is furthermore still missing, as is a clear interpretation of the collected data: future detailed studies involving physiological parameters and human monitoring sessions will be performed. Although clear indications of the usefulness of AR in logistics are presented in many studies, few large-scale applications are already used in the field. Several technological advancements are expected to solve the current tracking problems and the lack of computational resources in relation to mobile applications, as logistics processes and use cases become more sophisticated and cognitively demanding. In terms of future development, the integration with modern productive systems and the related technical issues must be considered, as well as a wider testing campaign for statistical validation.


References 1. Romero, D., Bernus, P., Noran, O., Stahre, J.: The Operator 4.0: human cyber-physical systems adaptive automation towards human-automation symbiosis work systems. In: IFIP International Conference on Advances In Production Management Systems (APMS), pp. 677–686, Iguaussu Falls (2016) 2. Chang, M.M.L., Ong, S.K., Nee, A.Y.C.: AR-guided product disassembly for maintenance and remanufacturing. Procedia CIRP 61, 299–304 (2017) 3. Eschen, H., Kötter, T., Rodeck, R., Harnisch, M., Schüppstuhl, T.: Augmented and virtual reality for inspection and maintenance processes in the aviation industry. Procedia Manuf. 19, 156–163 (2018) 4. Runji, J.M., Lin, C.Y.: Markerless cooperative augmented reality-based smart manufacturing double-check system: case of safe PCBA inspection following automatic optical inspection. Robot Comput. Integr. Manuf. 64, 101957 (2020) 5. Grandi, F., Khamaisi, R.K., Peruzzini, M., Raffaeli, R., Pellicciari, M.: A reference framework to combine model-based design and AR to improve social sustainability. Sustainability 13, 1–16 (2021) 6. Khamaisi, R.K., Prati, E., Peruzzini, M., Raffaeli, R., Pellicciari, M.: UX in AR-supported industrial human–robot collaborative tasks: a systematic review. Appl. Sci. 11, 1–17 (2021) 7. Wahid, D., Ray, G., Habiba, F.: A solution procedure for minimum convex-cost network flow problems. Global J. Sci. Front. Res. 12(10), 23–30 (2012) 8. Billinghurst, M., Clark, A., Lee, G.: A survey of augmented reality. Found. Trends Hum.Comput. Interact. 8, 73–272 (2014) 9. Azuma, R.T.: A survey of augmented reality. Presence Teleoperators Virtual Environ. 6(4), 355–385 (1997) 10. Wang, W., Wang, F., Song, W., Su S.: Application of augmented reality (AR) technologies in inhouse logistics. E3S Web Conf. 145(1), 02108 (2020) 11. Rejeb, A.: The challenges of augmented reality in logistics: a systematic literature review. World Sci. News 134, 281–311 (2019) 12. Cirulis, A., Ginters, E.: Augmented reality in logistics. Procedia Comput. Sci. 26, 14–20 (2013) 13. Plakas, G., Ponis, S.T., Agalianos, K., Aretoulaki, E., Gayialis, S.P.: Augmented reality in manufacturing and logistics: lessons learnt from a real-life industrial application. Procedia Manuf. 51, 1629–1635 (2020) 14. Ginters, E., Cirulis, A., Blums, G.: Markerless outdoor AR-RFID solution for logistics. Procedia Comput. Sci. 25, 80–89 (2013) 15. Reif, R., Walch, D.: Augmented & virtual reality applications in the field of logistics. Vis. Comput. 24, 987–994 (2008) 16. Stoltz, M.H., Giannikas, V., McFarlane, D., Strachan, J., Jumyung, U., Rengarajan, S.: Augmented reality in warehouse operations: opportunities and barriers. IFAC-PapersOnLine 50, 12979–12984 (2017) 17. Hart, S., Staveland, L.: Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. Human Mental Workload 52, 139–183 (1988) 18. Davis, F.D., Bagozzi, R.P., Warshaw, P.R.: User acceptance of computer technology: a comparison of two theoretical models. Manage Sci. 35, 982–1003 (1989) 19. CNH Industrial Homepage. https://www1.cnhindustrial.com/it-IT/Pages/homepage.aspx. Accessed 28 Mar 2022

A Novel Approach to Use Marker-Less Mixed Reality Applications with In-Motion Systems

Agnese Brunzini(B), Marianna Ciccarelli, Mikhailo Sartini, Giacomo Menchi, Alessandra Papetti, and Michele Germani

Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
[email protected]

Abstract. Extended Reality (XR) technologies can be a valid tool for supporting operators with assembly instructions in real-time, directly superimposed on the product. A strategy to face dynamic effects, such as sudden environmental changes and objects in movement, still needs to be defined to implement effective XR applications in moving production lines. The physical environment must be tracked and recognized to determine the position and the orientation of digital content in space. This paper aims to address these open issues by proposing a Mixed Reality (MR) application to support workers in the wire harness process for the tractor's driveline production, which requires the product to be in motion. For this aim, Microsoft Azure Spatial Anchors with the Microsoft HoloLens 2 were adopted. The designed and developed MR application allows the operator to display, step-by-step, consecutive work instructions provided through textual indications, pictures, videos, and animations. Through the Azure Spatial Anchors, the digital content can be anchored over the driveline parts. By knowing the speed of the production line and the duration of each wiring task, a time-controlled application has been developed to assure the visualization of the instructions in correspondence with the relevant driveline elements, at the right time. The application has been tested on-field with expert operators. Despite some drawbacks related both to the driveline motion and to the technology, the operators' feedback pointed out satisfactory and promising results, since the application allowed them to reduce errors and forgetfulness. Keywords: Mixed Reality · Azure Spatial Anchor · In-motion tracking · Industry 4.0 · Wire harness process

1 Introduction and Research Background

With the advent of Industry 4.0, the factory is increasingly moving towards the concept of the "smart factory", where innovative technologies make it possible to enhance the entire production process, improving safety at work, maintenance operations, and quality control activities. The integration of these technologies in manufacturing processes changes the operator's job. If, on the one hand, the worker is required to have more diverse skills and knowledge, on the other, smart devices can support the worker during his/her


activities, reducing human errors and mental demand [1]. The fast-growing demand for mass-customized products, which implies an increased product variety, requires worker assistive systems to be context-sensitive and to meet high requirements in terms of contextualization and visualization of assembly information [2]. Operators must be able to retrieve and consume useful and understandable information when and where necessary, without increasing their cognitive load [3]. In this context, a key role is played by extended reality (XR) systems (i.e., virtual, augmented, and mixed reality technologies), whose use in the industrial world has been investigated for several years. In the product design and development phases, XR can be useful for the creation of three-dimensional sketches to replace traditional prototypes, for collaborative design, and for design evaluation, obtaining feedback from the digital environment [4]. In the manufacturing field, XR systems are currently implemented for process simulations, quality assurance, complex manual assembly, servicing and maintenance, and operator training [5]. Within the World Class Manufacturing (WCM) paradigm, XR technologies can be a valid tool for optimizing standard operation procedures (SOP) and one point lessons (OPL) towards the standardization of task execution modes, procedures, and work instructions [6]. Recent research shows how the idea of using augmented reality (AR) and mixed reality (MR) applications has been embraced by large and medium-sized companies in several sectors and for different purposes. For example, Rodriguez et al. [7] presented an MR assistance system that allows real-time support in manual production, guiding the operator without prior knowledge through a specific assembly task. Muñoz et al. [8] proposed an MR-based interface for quality control inspection to reduce workers' stress and improve their productivity. Bottani et al. [9] presented an MR solution to support fault detection and troubleshooting in manufacturing plants to improve productivity, reduce downtimes, and enhance the employees' safety. As stated by the authors, some technical limitations highlighted in these works could be addressed by the newly released Microsoft HoloLens 2. While XR technologies will become quite widespread in the coming years and the hardware technology will evolve, there is still a lot of research work to be done to develop applications that are truly supportive for operators in industry [10]. While AR supplements the real world with virtual objects that appear to coexist in the same space as the users' physical reality, in MR objects are not simply superimposed on the surrounding reality but anchored to it, so that the user can interact with them, creating a dynamic and immersive experience. However, this requires the physical environment to be tracked and recognized. It is necessary to determine the position and the orientation of an object in space and time by means of tracking systems. Although several methods are available for object tracking and spatial mapping, which can be mainly distinguished into marker-based and marker-less [11], tracking is recognized as one of the main challenges to be faced for industrial AR/MR usage. This is mainly due to the ease of obstruction of the markers and the insufficient maturity of the marker-less methods [12]. There is also a need for a strategy to face dynamic effects, such as sudden environmental changes and objects in movement [13].
To the best of the authors’ knowledge, existing XR applications are rarely implemented and tested in moving production lines. Therefore, this paper aims to address these open issues by proposing an MR application to support


workers in the wire harness process for driveline production, which requires the product to be in motion. For this aim, the Microsoft Azure Spatial Anchor was adopted, which has so far been investigated very little in the literature [14].

2 Tracking Solutions

The present work has been developed to overcome an issue raised by CNH Industrial, a global manufacturer of agricultural and industrial vehicles, concerning the wire harness of the tractor's driveline. This activity is characterized by a high degree of manual work, and the customization of the final products implies a large variability of wire harness configurations. Since the company detected several errors during this process, the need was identified to support the operator with step-by-step assembly instructions in real-time, directly superimposed on the in-motion tractor's driveline. Thus, (I) a physical object (i.e., the tractor's driveline) must be recognized, (II) specific digital contents must be associated with its different parts, and (III) the operator must be able to interact with them in an MR immersive experience. To this end, a technological scouting of the main tracking systems has been accomplished. First, the most common tracking methods can be divided into two main categories [15]: outdoor methods and indoor methods. The latter are the ones used in this context. They are categorized into sensor-based, vision-based, and hybrid techniques [16]. The proposed work is focused on evaluating the vision-based techniques; they use computer vision image processing algorithms to perform the tracking, exploiting the camera's functionalities [17] and relying on two main mechanisms: marker-based and marker-less (using features).

2.1 Marker-Based Tracking

Using markers means identifying distinct elements such as a figure rich in features, a QR code, or a model; the digital content is then placed in the environment in relation to the marker (i.e., anchored to it). If the marker exits the field of view of the AR/MR device, the digital content is lost. This recognition method requires little production time and minimum resources, and works even on low-end devices. In the context of this work, the marker-based tracking method finds a possible twofold application.

Markers Applied on the Product on the Production Line. By applying the markers directly on the areas of the product where the operator works, the digital content would be superimposed on the specific parts needing attention. However, even if this solution offers reliable recognition and accurate tracking with a low margin of error, some issues arise. For example, operators may misplace the markers, forget to attach all of them on the product, or forget to remove them after the activities. Moreover, the use of markers does not ensure that the operator always carries out all the tasks because, if the marker is out of his/her field of view, the related instruction is not displayed. Also, applying markers on the product means gluing and removing them, leading to an unacceptable increase in times and costs, and to a loss in the product's aesthetic quality.

Markers on the Operator's Gloves. This technique involves 3D cameras installed at the workstation where the product


passes. The cameras map it and, through the markers on the operator's gloves, track the operator's hands on the product. By sending this information to the AR/MR device, the digital content can be displayed in real-time in the correct position. This solution allows reliable, precise recognition and tracking with a low margin of error. However, since the product is moving on the production line, it is necessary to develop a system capable of compensating for the relative speed between the product and the camera. In addition, a dialogue interface between the 3D camera and the AR/MR device would be required: communication between the two devices can lead to signal delays and consequently to visualization issues (in terms of position and time). Furthermore, beyond the marker-related issues explained in the previous paragraph, the required installation of several 3D cameras along the production line and the problems arising from possible camera obstruction make this solution difficult to implement.

2.2 Marker-Less Tracking

Marker-less tracking methods are based on the search for real-world elements or features, such as geometrical features, to be registered and employed in the tracking without any marker. The use of depth sensors and cameras, available on some AR/MR devices, combined with different solutions like SLAM (Simultaneous Localization and Mapping) algorithms, allows the real-time spatial mapping of the environment [18] and, consequently, the visualization of digital content based on the recognized features.

Model Target by Vuforia. Vuforia is a software development kit (SDK) for creating AR/MR apps that provides several kinds of tracking methods. The 3D Model Target method exploits the 3D digital model of an object to track the same physical object in the environment and allows the positioning of the digital content in relation to it. Thus, it allows physical object recognition by comparing it with its 3D representation and displaying 3D augmentations without requiring any physical markers on the product. Even if Vuforia is used in several sectors, its application in the manufacturing context is not widespread. Indeed, it presents some limitations, especially related to the characteristics of the product to be tracked (e.g., it should be monotonic and on a background with distinct colors [19]). Moreover, the Model Target by Vuforia requires rigid, still objects, non-polished surfaces, and light CAD models [20]. In this case study, the presence of non-rigid cables, glossy black paint on the driveline, moving objects on the assembly line, and heavy CAD models made the use of this tracking method impossible.

Azure Object Anchor. Azure Object Anchor allows detecting an object in the physical world using a 3D model and estimates its 6 degrees of freedom (DoF) pose [21]. The workflow is effortless: an existing 3D asset is uploaded to the "Azure Object Anchors Conversion Service", which converts the asset to the object anchors model. The 3D model is linked to its physical equivalent, allowing it to be tracked. In this case, the restriction concerns the requirements of the 3D asset: all the dimensions must be greater than 1 m [21]. Since


the tractor's driveline has a maximum width of 0.6 m, it is impossible to create a recognizable 3D asset.

Azure Spatial Anchor. A spatial anchor is a fixed coordinate system generated by an AR/MR application and tracked by an operating system [22]. Anchors are conventionally used in Microsoft HoloLens applications to stabilize or persist holograms in the physical space or onto a dedicated surface [23, 24]. Spatial anchors rely on the accurate localization performed by the device, which is primarily achieved through camera-based tracking. For example, the HoloLens 2 implements its visual-inertial SLAM algorithm using visible light and depth cameras [25]. This kind of tracking solution is entirely unconstrained and is feasible also for the CNH case study. One or more spatial anchors can be placed over specific areas of the tractor's driveline, and the pertinent digital content (i.e., the instructions for the operator) can be associated with them. This tracking system has been deeply investigated in this work to develop and implement a smart solution that overcomes the motion-related issues.

3 Design and Development of the MR Application

3.1 Requirements

The MR application had to be interactive (leaving the operator's hands free), easy to use, and clear to understand, with the final aim of improving the production process by reducing human errors. To satisfy these requirements, two cutting-edge software tools for MR were used, Unity and Azure Spatial Anchor. The first one enabled the design and development of the MR application, while the second one was used for spatial tracking. Concerning the hardware, after a careful analysis of the commercially available products, the Microsoft HoloLens 2 was selected for its advanced technological features for tracking and hologram display, perfectly superimposed on the surrounding environment. Before designing the application content, it was necessary to determine and create a standard work procedure for the wire harness activity. Together with the production manager, the operators, and the team leader, a standard task sequence was defined, and the most critical operations were identified. The instructions mainly refer to the fastening of the ground cable, the compliance with the fixing points of the wires, the correct routing, and the sensors' connections. The extensive customization of the tractors entails a high variability of driveline variants. The operator has to memorize the procedures for all the driveline models, and, in most cases, the wire harness steps are very different. For this reason, the MR application has been developed first for the most complex driveline model, which includes five different variants of the wire harness. In the next sections, the design and development of one of the five applications will be explained as an example.

3.2 Spatial Anchor Development

As explained in Sect. 2, Azure Spatial Anchor is one of the latest advances in the MR development field, providing a simple but reliable way of placing digital objects in the


real world. Their location can be saved and retrieved by the operator for later use with just a few steps. The ease of use of this system makes it suitable also for inexperienced users and thus convenient for the company. The architecture of the developed system is divided into two main components:
– the Azure Spatial Anchor Service, available on the Azure Platform online, which stores the anchors and their locations;
– the Unity application, which requests the anchors' positions from the Azure servers and places them in the MR space.
The storing mechanism is completely managed by the Azure Service, and its activity can be monitored using the provided Azure Platform tools. Therefore, the number of saved anchors and of received requests can be tracked at all times. In this case study, the Azure Spatial Anchor is used to precisely place digital content (i.e., instructions in the form of videos, images, text, etc.) on specific points over the tractor's driveline, in front of the operator. Through the spatial anchors, the correct instruction position can be retrieved over time, since this information is kept by the Azure Service and will not be lost even when the application is exited or the device is turned off. Since Azure Anchor is a relatively new technology, it still has only partial support for multiple anchors. For this reason, new functionalities have been implemented to repeat the standard saving and retrieving functions for all the created anchors and to save multiple anchor identifiers. The process of creating multiple Azure Anchors can be summed up in a few steps (a sketch of the identifier bookkeeping is given after the two lists below):
– Start Azure session: establishes a connection with the Azure Service.
– Anchor positioning: the anchor objects (namely cube-shaped holograms) must be manually placed where they are needed.
– Create Azure Anchor: creates an Azure Anchor in the position where the anchor objects are placed.
– Save Azure Anchor: sends all the Azure Anchors' information to the Azure Service and keeps their identifiers in a local text file for later retrieval.
– Stop Azure session: stops the previously created connection with the Azure Service.
The process of recovering previously created Azure Anchors, instead, is structured as follows:
– Get Azure Anchor: retrieves the previously saved Azure Anchor identifiers from the local text file.
– Start Azure session: once again, establishes a new connection with the Azure Service.
– Find Azure Anchor: queries the Azure Service for the retrieved Azure Anchor identifiers. Once the anchors are found, their information is downloaded, and they are correctly placed in the same location they were in during their initial creation.
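The multiple-identifier bookkeeping described above can be illustrated with the following C# sketch. It only shows the local persistence of the anchor identifiers and the points where the Azure Spatial Anchors SDK would be invoked; the SDK calls themselves are represented by placeholder methods (CreateAnchorInCloud, LocateAnchors), since the authors' exact API usage is not reported in the paper.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// Minimal sketch of the "save all identifiers / retrieve all identifiers" logic
// built on top of the standard single-anchor workflow. The two placeholder
// methods stand for the actual Azure Spatial Anchors SDK calls (assumed, not
// the authors' implementation).
public class AnchorIdStore
{
    private readonly string _filePath;

    public AnchorIdStore(string filePath) => _filePath = filePath;

    // Called after "Create Azure Anchor" + "Save Azure Anchor" for each placed object:
    // the cloud identifiers returned by the service are written to a local text file.
    public async Task SaveAllAsync(IEnumerable<object> placedAnchorObjects)
    {
        var ids = new List<string>();
        foreach (var anchorObject in placedAnchorObjects)
            ids.Add(await CreateAnchorInCloud(anchorObject));   // placeholder for the SDK call
        File.WriteAllLines(_filePath, ids);
    }

    // Called at the start of a work session ("Get Azure Anchor" + "Find Azure Anchor"):
    // the identifiers are read back and used to query the Azure Service.
    public async Task RetrieveAllAsync()
    {
        string[] ids = File.ReadAllLines(_filePath);
        await LocateAnchors(ids);                               // placeholder for the SDK call
    }

    private Task<string> CreateAnchorInCloud(object anchorObject) =>
        Task.FromResult(System.Guid.NewGuid().ToString());      // stub

    private Task LocateAnchors(string[] identifiers) => Task.CompletedTask; // stub
}
```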


3.3 Unity Application The MR application has been designed and developed using Unity with the Mixed Reality Toolkit (MRTK) package and the Azure Spatial Anchor. The MR application allows the operator to create and retrieve two spatial anchors, and display consecutive work instructions provided through textual indications, pictures, videos, and animations. The operator can then interact with the holograms through simple gestures. Using Azure Spatial Anchor, it is enough to create and save the anchors only once (i.e., the first time that the application is run); the operator correctly places the anchors over the real driveline along the production line and these are stored in the Azure Platform online. During the work and before starting the wire harness activities, the operator can simply retrieve the anchors and the related instructions. The MR application interface was composed of a specific customized menu with two buttons for the anchors’ management, a debug window, and six work instructions anchored to the related parent anchor (Fig. 1). The application allows both placing (saving and retrieving) the anchors over the driveline and supporting the operator during the work activity. The digital content related to the anchors’ management and to the instructions are consecutively displayed. Only two spatial anchors were used to lighten the application and speed up the process of the anchors’ retrieval and loading. The six work instructions were gathered in two distinct parent anchors, based on their location relative to the driveline. Initially, a button was created for each step of the Azure Spatial Anchor procedure: Start session, Create Anchor, Save Anchor, Get Anchor, Find Anchor, and Stop Session. In this way, during the work, the operator had to interact with too many buttons making the procedure complex and time-consuming. To make the application interface more user-friendly, easy-to-use, and rapid, all the operations were grouped into one menu with only two buttons: Create Anchors and Find Anchors, as shown in Fig. 1. The former allows starting the session, creating, and saving the anchor at the position where the object was placed. The latter executes all the operations related to the retrieval and the positioning of the saved anchor. The debug window shows the operations performed by the operator with the anchors (e.g., the creation and saving) and their outcome (operation fulfilled or failed). Each instruction was positioned in Unity using the driveline CAD model to ensure that it appears in the correct place over the driveline. The work instructions were created based on standard WCM visual tools: SOP (standard operation procedure) and OPL (one point lesson). They describe the sequence and procedure of the operations by graphical and textual representations. For these reasons, the instruction content is a clear graphical representation of the operation to perform; circles, arrows, or numbers are added to indicate exactly the area in which the operator has to work. In the blue boxes the textual procedures are explained with all the operations to be followed step-by-step. In this way, the operator should not forget any step, and the possibility to make errors decreases. Also, instructions are interactive so that once the specific operation has been concluded, the operator can immediately scroll to the next instruction with a simple gesture toward the picture, taking advantage of real-time support.
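As an illustration of the step-by-step instruction scrolling described above (a minimal Unity sketch under assumptions: the authors' actual project structure, object names, and gesture handling are not reported, so the gesture is reduced here to a simple method call), consecutive instruction holograms can be shown and hidden as follows:

```csharp
using UnityEngine;

// Minimal sketch: the instruction holograms (children of their parent anchors)
// are shown one at a time; a gesture on the current hologram calls ShowNext()
// to hide it and enable the following one.
public class InstructionSequence : MonoBehaviour
{
    [SerializeField] private GameObject[] instructions;  // ordered work instructions
    private int current;

    private void Start()
    {
        // Only the first instruction is visible at the beginning.
        for (int i = 0; i < instructions.Length; i++)
            instructions[i].SetActive(i == 0);
    }

    public void ShowNext()
    {
        if (current >= instructions.Length - 1) return;   // sequence finished
        instructions[current].SetActive(false);
        current++;
        instructions[current].SetActive(true);
    }
}
```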


Fig. 1. Development of the MR application in Unity – simultaneous visualization of all the elements that will be consecutively available to the operator (i.e., the menu for the anchors’ creation and retrieval, the debug window, the parent anchors, and each instruction)

3.4 Anchor Movement

The Azure Spatial Anchor system allows fixing the digital content in a specific place, where it remains stable during the entire use of the MR application. However, in this specific case study, the driveline moves along the production line during the assembly phase; thus, the position of the instructions should move with the driveline. To solve this problem using Azure Spatial Anchors, a smart solution has been implemented. Since the speed of the production line and the duration of each wiring step are known, a time-controlled application has been developed. Specifically, starting from the position of the instruction over the fixed driveline in Unity, and using the execution time of the activity, the task sequence, and the line’s speed, the exact position in which the instruction must appear in the real world was calculated. The procedure was repeated for each instruction, considering the wire harness sequence. In this way, when the operator clicks on an instruction’s hologram, the next information automatically appears at the exact point where the driveline is located. The adopted solution does not require the operator to change the anchor’s position in real time: the movement of the work instructions is automatic and designed directly in Unity.
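The time-controlled placement boils down to offsetting each instruction along the line’s direction of travel by the distance the driveline has covered since the sequence started. The sketch below illustrates this arithmetic in Unity-style C#; the field names, the constant line speed, and the per-step durations are illustrative assumptions, not values from the paper.

```csharp
// Sketch only: offsetting work instructions to follow the moving driveline.
using UnityEngine;

public class MovingInstructionPlacer : MonoBehaviour
{
    [SerializeField] private Transform[] instructions;       // instruction holograms, in wiring order
    [SerializeField] private Vector3[] positionsAtStart;     // authored positions over the driveline at t = 0
    [SerializeField] private Vector3 lineDirection = Vector3.right; // direction of travel of the production line
    [SerializeField] private float lineSpeed = 0.05f;        // [m/s], assumed constant
    [SerializeField] private float[] stepDurations;          // standard execution time of each wiring step [s]

    private int currentStep;

    // Called when the operator gestures on the current instruction to move to the next one.
    public void ShowNextInstruction()
    {
        currentStep = Mathf.Min(currentStep + 1, instructions.Length - 1);

        // Nominal elapsed time = sum of the standard durations of the completed steps.
        float nominalElapsed = 0f;
        for (int i = 0; i < currentStep; i++) nominalElapsed += stepDurations[i];

        // The driveline has travelled speed * time, so shift the instruction by the same amount.
        Vector3 offset = lineDirection.normalized * lineSpeed * nominalElapsed;
        instructions[currentStep].position = positionsAtStart[currentStep] + offset;
    }
}
```

As the paper itself notes, this only holds as long as the operator keeps to the standard step times; deviations from the nominal sequence shift the content away from the real driveline.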

4 Test and Results

The functioning of this solution has been tested in the CNH plant with operators working on the driveline wire harness process. The test was designed as a preliminary on-field assessment of the solution, involving the operators actually assigned to this activity. The wire harness process to be supported by the developed MR application is always performed by the same two operators, one per shift. For this reason, a quantitative usability and user experience (UX) analysis was not possible; the acquired data would
not have been sufficient to obtain statistically significant results. Moreover, due to the spread of the COVID-19 pandemic, the tractor’s production was subject to variations, including delays and shortages. The availability of the studied driveline was reduced compared to usual; thus, the number of task repetitions would not have been high enough to obtain robust results. A quantitative analysis of errors and timing would have been affected by all these conditions, compromising the reliability of any observations. For the same reason, it was not possible to quantitatively assess the effect of the on-field MR solution on quality control and on production savings and costs. Therefore, a qualitative analysis of the operators’ UX with the developed application and with the use of MR on-field has been accomplished (Fig. 2).

Fig. 2. Operators performing the tests (above) and an instruction visualized through HoloLens 2 (below)

Two expert operators were enrolled in the study and participated in the experiment. Their ages were 38 and 40. Their experience with head-mounted displays (HMDs) and XR technologies was limited. A dedicated training session was carried out to explain how to use HoloLens 2 and the developed MR application. Functionalities, interaction modalities, gestures, and the MR application features were carefully shown and explained. The two operators were then free to try the device and the application until they felt confident in its usage. They used the MR application for the entire duration of their shift, with all the product variants supported by the app. At the end of the shift, the operators were involved in a debriefing session and answered a semi-structured interview.


The first part of the interview concerned the UX with the developed MR application content. Both operators found the content understandable, clear, easy to follow, and easy to learn. The text, always associated with images and videos, helped in task recognition and execution. The layout was considered minimalist, orderly, appealing, and consistent throughout the entire application: indeed, the step-by-step instructions were designed always maintaining the same layout. Animations (with circles, arrows, etc.) were clear and meaningful, letting the user instantly understand where to focus attention. This also promoted engagement, leading the operators to consider the app satisfying and exciting and assuring them an enjoyable experience. However, the operators had quite different opinions about the usefulness of the application. While the first considered it extremely supportive, the second believed that in some cases it was obstructive, forcing him to always respect a predefined task sequence. Their feedback also diverged on the efficiency of the application: one operator found it very efficient and quick, with the start and stop easy to run, while the other found the opposite, complaining of issues with the gestures. The second part of the interview regarded the operators’ UX with the use of MR technology on-field during their work shifts. The main aim was to understand whether being supported in the tasks’ execution on the tractor’s driveline was appreciated and advocated by the operators. Both asserted that the use of the HMD did not hinder the execution of the tasks and did not make them feel under pressure. On the contrary, the use of the HMD allowed them to be autonomous in learning and carrying out their work and helped them avoid mistakes. However, even if wearing the HoloLens 2 was not invasive and did not cause discomfort (e.g., nausea, headache, etc.), both operators complained about its repeated usage: although ergonomic, the HMD proved heavy and uncomfortable over the long term. Furthermore, the operators disagreed on the usefulness of MR support for this kind of activity. Only one of them would recommend the use of MR for this activity, also believing that this type of innovation should be given more consideration within a work environment. Even though they do not think that the use of MR promotes concentration, they believe that it simplifies the way of working. One of them found it useful because it helped him prevent a couple of oversights with the fixing points of the wires, while the other considered it effective for the correct routing of the wire in front of or behind specific driveline elements. Nevertheless, some drawbacks and failures were encountered. The main one concerns the positioning of the instructions on the in-motion driveline. Even though the timing entered to solve the movement-related problem was the one established by the standard execution times, the operators often failed to respect it. As a consequence, they sometimes found the digital instruction shifted in space, not in the right place. Also, although the task sequence was initially established together with the operators, they tended to perform the tasks according to their habits. In this case, the result is an incorrect synchronization and placement of the digital content over the driveline.
Since the MR application is used by only two operators, these problems could be solved by adapting the app to each user, thus developing a custom-made application in a user-centric vision.


Other issues concern the HoloLens 2 battery life; indeed, the device must be recharged a few times during the shift. When the operator has to perform a task not supported by MR, he takes off the HMD. This allows the device to be recharged but, in the meantime, wastes the operator’s time. Future tests will involve a larger sample of participants, to enable quantitative analyses, and a focus group for discussing the application’s optimization.

5 Conclusions

This paper proposes an MR application to support the operator during task execution on an in-motion production line. The application addresses the movement-related issues by implementing a time-controlled solution. Even if the operators’ feedback showed satisfactory and promising results, because the app allowed them to prevent errors and oversights, some technical limitations arose. First, the in-motion-related problem was not completely overcome. Second, to be truly effective in preventing a significant number of errors, the application should be developed and employed for all the tractor driveline variants.


A Human-Centred Design Approach for Designing Augmented Reality Enabled Interactive Systems: A Kitchen Machine Case Study

Marina Ricci1(B), Alessandra Scarcelli2, Annarita D’Introno2, Valeria Strippoli2, Simona Cariati2, and Michele Fiorentino1

1 Department of Mechanics, Mathematics and Management, Polytechnic University of Bari, Bari, Italy
[email protected]
2 Department of Civil Engineering Sciences and Architecture, Polytechnic University of Bari, Bari, Italy

Abstract. In the ever-changing scenario of technology evolution, designers need to develop new interactive systems that respond to users’ needs. Augmented Reality (AR) could be a tool and an opportunity for designers to create novel interactive systems. AR has proven to be effective in several domains, showing potential for widespread deployment even in everyday life tasks such as the use of household appliances. Thus, leveraging the Human-Centred Design (HCD) approach, we integrate AR into the design and development process of an interactive system for household appliances. Based on our survey results with 463 participants, one of the appliances that could benefit most from AR is the kitchen machine. Starting from a case study, we develop a demo to prove the feasibility of designing interactive systems with the integration of AR technologies, following the HCD approach. Keywords: Augmented Reality · Interactive System · Human-Centred Design · Industrial Design · User Interface

1 Introduction

Technology advancements are generating devices that are increasingly accessible and affordable compared with the previous generation, introducing new possibilities for designing interactive systems. In this ever-changing framework, designers need to develop interactive digital products to support the way people communicate and interact in their daily and working lives [1]. Since ubiquitous computing technology is becoming widespread [2, 3], designers should also have tools and methods available to design new interactive systems. Nowadays, the most significant challenge for designers is creating interfaces and experiences that could extend and enhance the real world. Augmented Reality (AR) technologies [4] have great potential for the design and reinterpretation
of interactive systems, with profound implications for use, aesthetics, and function. The potential of AR to shape the future of both design education [5] and product design [6] lies in enabling a new generation of interfaces and interactions. AR integrates 3D virtual objects superimposed on the real environment in real time, augmenting reality rather than replacing it completely [7]. Nowadays, the development of AR technologies is proceeding at a very fast pace [8], with an increasing interest in investigating the use of AR technologies in different fields [9]. AR technology is also spreading into domains where advanced technologies are not traditionally used, such as household appliances. By definition, a household appliance is a machine that assists users in household functions such as cooking. This paper aims to test the feasibility of introducing AR technology in this field. To this end, the research started by identifying a case study through a survey of potential household appliance users. Through semi-structured questionnaires, we surveyed a sample of 463 participants to figure out which household appliances could benefit from AR technologies. The results showed that the kitchen machine is one of the best candidates. A kitchen machine is a small appliance, generally used on a table, for cooking. This appliance was originally designed to mix and knead food, reducing time and manual effort. Kitchen machines evolved from the first hand-operated prototypes of mechanical mixers, invented in the early 1900s, to today’s models featuring electric motors, microprocessors, digital displays, and smartphone apps. The form in which recipes are presented to users evolved in parallel with the household appliance. Starting from paper-based recipes, characterized by black-and-white 2D text and figures, recipes nowadays take the form of digital content such as web pages and video tutorials. The latest kitchen machines integrate a recipe book system that users can exploit to prepare dishes using digital content. The step-by-step recipe information is displayed via a small integrated LCD monitor in the form of text and static images, but this still preserves the issue of paper-based recipes. The issue with these recipe formats lies in the separation between the workspace in the kitchen and the information conveyed by the recipe [10]. AR technology aims to overcome this issue by conveying information about the dish creation process through virtual content such as CAD models, animations, videos, images, and text directly registered in the real world. Thus, AR provides a rapid spatial mapping between the recipe information and the location of the task to be performed. We believe that applying AR technologies to kitchen machines could improve user performance in terms of task completion time, error rate, and system usability. Using a case study, we describe the application of the human-centred approach to integrating AR technologies in the development of interactive systems for household appliances. The remainder of the paper is structured in five sections. The first describes the state of the art related to AR technologies applied to kitchen appliances. The second describes the human-centred design approach deployed with AR integration. The third describes the case study of an AR-enabled kitchen machine. The fourth describes the AR-enabled kitchen machine demo. Lastly, we report our conclusions and future works.


2 Related Work

In the literature, there are few contributions in the field of AR technologies applied to kitchen appliances. Kellogg et al. highlighted that the kitchen is by nature a candidate for AR interfaces, because there is a great need for users to stay in touch with physical reality while using a variety of sophisticated tools that benefit from digital information [11]. In addition, kitchen appliances are suitable for free-hand interaction for both hygiene and safety reasons. While cooking, users usually consult recipes, touching other surfaces, and then handle ingredients. Sometimes, cooking can also be considered a dangerous task because of the tools that must be handled (e.g., sharp knives) while consulting recipes. Gausemeier et al. [12] developed the AR-PDA system, consisting of an AR-based user interface that allows users to receive information and configure appliances. The case study concerns an oven, augmented with a virtual telescoping cart. The prototype allows recognition and tracking of moving objects, scene augmentation using animated 3D objects or images, and personalized user interaction by touching the display with a pen or using Personal Digital Assistant (PDA) buttons. In addition, AR-PDA is used to prevent errors or malfunctions on home appliances by providing users with 3D assembly instructions and augmented real-world animations. Mahroo et al. [13] implemented HoloHome, an AR framework that aims to provide new means of interaction with the Smart Home and its components. The goal of the system is to provide a Mixed Reality (MR) environment, implemented on Microsoft HoloLens, to enable user interaction with Smart Home devices and appliances through augmented objects. HoloHome addresses the problem of locating real-world objects and aligning virtual objects with real ones within the spatial environment, using the Vuforia image processing engine. HoloHome also leverages the ability to control and adjust household appliances through the Internet of Things (IoT), bringing a new way of interaction between users and the smart home platform. Walczak et al. [14] presented several VR and AR systems for interactive visualization of household appliances. The goal is to support wholesale sales of a variety of products, configured and then ordered in a manufacturer’s showroom, and individual sales, by allowing customers to view different versions of products in their home environment before making a purchase decision. The system also provides training and support for appliance maintenance. The visualization software, created using a game engine, leverages a wide range of visualization hardware, such as a smartphone, a Head-Mounted Display (HMD), and a Powerwall, to enable its use in different settings. Users can configure and manipulate the components of the virtual appliance directly in 3D space. Case studies cover, for example, an immersive service guide for an induction cooktop and an interactive animated virtual oven, displaying features that are difficult to show in the real world without AR. Ergün et al. [15] developed AR-SI, an induction cooking appliance that augments food preparation support experiences, with a future perspective of replacing people with robotic servers. The induction technology used integrates IoT and a supported virtual assistant.
To assist the user in interacting with food and induction devices, AR-SI includes a virtual assistant in AR that suggests to the user how to follow the recipe and cooking procedures step by step, considering weight control and presenting a 3D visualization on the real stove and pot. They used a game engine to build and develop the AR application. The marker includes the 3D model of the stove, and a Scale-Invariant Feature Transform (SIFT) is used to track the physical model during the AR interaction. With AR support, beginners in cookery can quickly and accurately prepare even the most complicated meals. In addition, the induction cooking appliance can be useful for cooking school students practicing with the AR stand. As the literature findings reveal, new concepts in household appliance design have recently been emerging due to technological advances such as those of the Industry 4.0 paradigm [16]. Some operations, such as cooking, have evolved with new trends that mix IoT, AR, Virtual Reality (VR), and even robotics [15].

3 Human-Centred Design Approach

Human-Centred Design (HCD), as reported in ISO 13407:1999, is an approach to interactive systems development that aims to make systems usable and useful by focusing on the users, their needs, and requirements, and by applying human factors/ergonomics and usability knowledge and techniques [17]. ISO 13407:1999 is an international standard that describes best practices in user-centred design; it was later withdrawn and expanded into ISO 9241-210:2010, in turn withdrawn and expanded into ISO 9241-210:2019. The standard guides the design activities that take place during the life cycle of interactive systems, ensuring that the development and use of interactive systems take user needs into account. During the process, many needs and expectations of users emerge that have an impact on interaction design, as designers refine their understanding of users. A correct HCD approach consists of the following iterative steps:

1. Identification of the context of use, and research on users.
2. Definition of user needs and requirements, and system requirements.
3. Development of design solutions that can meet the needs and requirements.
4. Evaluation of the design solution against the requirements.

In this paper, AR is embedded in interactive systems with the HCD approach to promote a higher level of innovation, reducing related problems and increasing interactivity and ease of use. Through AR, virtual graphics can transform the physical world, guiding and assisting users in using the system properly. Thus, the adoption of AR technology becomes a powerful tool for designing interactive systems that are increasingly user-friendly and accessible, increasing user knowledge.

3.1 Context of Use and User Research

Understanding and specifying the context of use includes an extensive discussion of the environments of use in which the interactive system will be deployed. In addition, it is necessary to know the users and the tasks for which they will use the system. Users’ information is collected and analyzed through a proper user analysis process that allows collecting user needs and preferences. User data can be gathered using qualitative or quantitative methods (e.g., interviews, surveys or questionnaires, field studies, focus groups, etc.).


3.2 User Needs, User Requirements, and System Requirements

The HCD approach first identifies User Needs, then formalizes them into measurable User Requirements, and finally maps them into System Requirements. User Needs (UN) are identified through user research, based on behavioral analysis of users in their natural environment and within the reference context. User Requirements (UR) concern the set of user-generated statements that describe the functions, performance, and capabilities that the system will bring to users as they use it. System Requirements (SR) are a statement of what the system must do and/or be to meet the user requirements. Exploiting the HCD approach in the development of interactive systems, with the integration of AR technology, could solve real-world problems, guide users, and improve information visualization and user experience [18].

3.3 Design

Producing design solutions starts with designing personas, scenarios, and storyboards, useful for clearly visualizing the final interactive system. Personas represent user needs by creating fictional individuals that describe the typical user based on the user profile. Their purpose is to represent a group of end-users during design discussions and keep everyone focused on the same goal. Personas are usually defined by identity and photo, status, goals and tasks, skill sets, requirements and expectations, and relationships [19]. Scenarios present detailed, realistic examples of how users may perform their tasks in a context. Their main purpose is to provide examples of future use, to understand and clarify user requirements, and to provide a basis for subsequent usability testing. A scenario is a story that describes how a particular persona completes a task or behaves in a given situation; it is usually defined by the settings, actors, objectives, sequence of events, and outcomes [19]. Storyboards are drawn tables that tell the scenarios related to the target user, demonstrating the relationship between user inputs and system outputs. A storyboard is a powerful means for the designer because, by telling a story about (parts of) the interaction(s), it allows the user to gain access to ideas expressed on two levels: communication and experience [20]. The next phase of the HCD approach is the deployment of the prototype, which can be as simple as a sketch or static mock-up or as complicated as a fully functional interactive system with full functionality.

3.4 Evaluation

Design evaluation starts with assessing the design against the requirements, based on the users’ perspectives. This evaluation step can be used to:
– Gather new information about User Needs.
– Provide feedback on the strengths and weaknesses of the design solution from the user’s perspective (to improve the design).
– Evaluate whether User and System Requirements have been achieved.
– Establish baselines or make comparisons between designs.
Two widely used approaches to human-centred evaluation are (1) user-based testing and (2) inspection-based evaluation using usability and accessibility guidelines or requirements. The interactive system will be implemented at the end of this iterative process.

4 AR-Enabled Kitchen Machine Case Study

In this case study, following the HCD approach, we design and implement a user interface to guide the user in executing AR-guided recipes. We describe it in the following sections.

4.1 User Research

We analyze the context of use by investigating the kitchen machine’s role within the kitchen environment. Thus, we create a questionnaire using Google Forms, distributed to a sample of potential users for 8 days (n = 463; 95% female and 5% male). The questionnaire provides quantitative data about user behavior and habits regarding the cooking activity through 30 open and closed questions using a 7-point Likert scale. As an interesting result, the user analysis shows that most of the users own a kitchen machine (64%). Also, many users like to cook (mean = 6 ± 1.9 on a 7-point Likert scale) and appreciate their kitchen machine. Users’ level of familiarity with AR technologies is quite high (mean = 5 ± 0.7 on a 7-point Likert scale). In addition, 58% of users would be willing to wear an HMD, if “light and non-invasive”, to increase the kitchen machine’s potential. Several issues emerge from the questionnaire and are later converted into user needs.

4.2 User Needs, User Requirements, and System Requirements

After the user research, we define the User Needs (see Table 1), User Requirements (see Table 2), and System Requirements (see Table 3), which are fundamental to setting goals for the interactive system.

4.3 Design

Following the HCD approach, we use personas to identify target users and simulate behaviors and goals to design the interface [21]. Elisabeth is the primary persona, a woman who agrees to use the AR-enabled kitchen machine: a modern woman, successful in life and family, mother of three children, and an elementary school teacher. She loves to cook but is always very busy with her job. She owns a kitchen machine, but she has tried several times to use it without getting great results, preferring traditional recipes made manually.


Table 1. User needs

UN-ID   | User need                                                                   | Source
UN-0100 | The user needs to cook without effort                                       | Questionnaire
UN-0200 | The user needs to cook quickly                                              | Questionnaire
UN-0300 | The user needs to cook minimizing the number of errors                      | Questionnaire
UN-0400 | The user needs to interact with the kitchen machine easily and intuitively  | Questionnaire
UN-0500 | The user needs to have a recipe archive always at disposal                  | Questionnaire
UN-0600 | The user needs to cook by optimizing resources and avoiding waste           | Questionnaire
UN-0700 | The user needs to live an engaging and interactive cooking experience       | Questionnaire

Table 2. User requirements

UR-ID   | User req. name        | Description                                                                     | Motivation                                                                        | UN ref.
UR-0100 | Suggestion Program #1 | The user wants the system to present preset programs                           | The user, particularly if a worker, has limited time for cooking                 | UN-0100, UN-0200, UN-0300, UN-0400
UR-0200 | Suggestion Program #2 | The user wants the system to suggest what to cook                              | The user prefers to follow suggestions when cooking                              | UN-0100, UN-0200, UN-0400, UN-0500
UR-0300 | Guide Assistance #1   | The user wants the system to perform guided operations with kitchen tools      | The user wants to spend the least time and effort possible to realize a recipe   | UN-0100, UN-0200, UN-0300, UN-0400
UR-0400 | Guide Assistance #2   | The user wants the system to provide a step-by-step recipe guide               | The user wants to make few mistakes, particularly if a beginner                  | UN-0100, UN-0200, UN-0300, UN-0700
UR-0500 | Interface Features #1 | The user wants the system interface to give the ability to search for a recipe | The user already has in mind what to cook, but wants to be guided in the process | UN-0100, UN-0400, UN-0500
UR-0600 | Interface Features #2 | The user wants the system interface to interact vocally                        | The user wants to have the most natural interaction possible                     | UN-0400, UN-0700
UR-0700 | Reuse and disposal    | The user wants to cook using food in the fridge                                | The user wants to avoid waste in the kitchen environment                         | UN-0600

In this scenario, AR can help Elisabeth in the preparation of dishes, thanks to the step-by-step guidance of a virtual assistant. The scenario involves cooking a dish using the AR-enabled kitchen machine.


Table 3. System requirements. Priority: (S) the system should have; (M) the system must have

SR-ID   | System req. name                          | Priority | Description                                                                                                                     | UR ref.
SR-0100 | Visual display                            | M        | The AR system must include a free-hand information display                                                                     | UR-0500, UR-0600
SR-0200 | Gesture-based Human-Computer Interaction  | M        | The AR system must provide for gesture-based natural interaction                                                               | UR-0300, UR-0600
SR-0300 | Voice commands Human-Computer Interaction | M        | The AR system must provide for natural interaction based on voice commands                                                     | UR-0200, UR-0400, UR-0600
SR-0400 | Wireless Network #1                       | M        | The AR system must communicate with the Kitchen Machine sensors through a wireless network protocol                            | UR-0100
SR-0500 | Wireless Network #2                       | M        | The AR system must communicate with the Fridge sensors through a wireless network protocol                                     | UR-0100, UR-0200
SR-0600 | Optical Tracking #1                       | M        | The AR system must track the position of the Kitchen Machine in the space                                                      | UR-0400
SR-0700 | Optical Tracking #2                       | S        | The AR system should track and recognize the ingredients to use                                                                | UR-0300
SR-0800 | Preset Programs                           | M        | The AR system interface must present a list of preset programs: Empty the fridge, Let me inspire you, and Search               | UR-0100, UR-0200, UR-0500, UR-0700
SR-0900 | Embodied Conversational Agent             | M        | The AR system interface must suggest operations to execute a recipe step-by-step thanks to the guidance of a humanoid avatar   | UR-0100, UR-0200, UR-0400, UR-0600
SR-1000 | Suggestion System                         | S        | The AR system should include a recipe suggestion system                                                                        | UR-0100, UR-0200
SR-1100 | Recipe Database                           | M        | The AR system must include a database with integrated recipes                                                                  | UR-0200, UR-0400, UR-0500
SR-1200 | Augmented Virtual Assets                  | M        | The AR system must provide registered augmented information that guides the user in manipulating tools and ingredients         | UR-0300, UR-0400

After wearing an HMD (AR glasses), Elisabeth turns on her kitchen machine. Through an AR interface, the Graphical User Interface (GUI) shows three preset programs: “Empty the fridge” (the system visualizes the ingredients in the fridge and suggests a recipe), “Let me inspire you” (the system suggests a recipe), and “Search” (search for a recipe). She selects the preset program “Empty the fridge”, and the only two ingredients available are zucchini and eggs. The humanoid avatar then suggests some recipes with these two ingredients, and she selects the zucchini omelet. The AR interface integrates a humanoid avatar that suggests, step by step, the preparation of the dish through the augmentation of 3D virtual information spatially registered in the real kitchen environment (see Fig. 1).


Fig. 1. The case study storyboard and its interactions: (a) Turn on the kitchen machine; (b) Select the preset program (Empty the fridge); (c) Discover the ingredients available in the fridge; (d) Choose a recipe based on the ingredients (zucchini omelet); (e, f) Perform the operations with animations and auxiliary lines; (g) Mix the ingredients; (h) Dish preparation finished.
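The “Empty the fridge” program described in the scenario is essentially a filter over a recipe database driven by the ingredients the fridge reports. The sketch below illustrates this logic with purely hypothetical types and data (Recipe, fridge ingredient list); it is not the implementation used in the demo.

```csharp
// Sketch only: suggesting recipes from the ingredients currently available in the fridge.
using System;
using System.Collections.Generic;
using System.Linq;

public record Recipe(string Name, string[] Ingredients);

public static class EmptyTheFridge
{
    // Return the recipes whose ingredients are all available in the fridge.
    public static List<Recipe> Suggest(IEnumerable<Recipe> recipeDatabase,
                                       IEnumerable<string> fridgeIngredients)
    {
        var available = new HashSet<string>(fridgeIngredients, StringComparer.OrdinalIgnoreCase);
        return recipeDatabase
            .Where(r => r.Ingredients.All(available.Contains))
            .ToList();
    }

    public static void Main()
    {
        var database = new List<Recipe>
        {
            new("Zucchini omelet", new[] { "zucchini", "eggs" }),
            new("Carbonara",       new[] { "eggs", "guanciale", "pecorino", "spaghetti" })
        };

        // In the scenario, the fridge reports only zucchini and eggs.
        foreach (var recipe in Suggest(database, new[] { "zucchini", "eggs" }))
            Console.WriteLine(recipe.Name);   // prints "Zucchini omelet"
    }
}
```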

5 AR-Enabled Kitchen Machine Demo

The demo is developed to be experienced using both an AR HMD (HoloLens 2) and a Hand-Held Display (an Android tablet) with a table stand. The application software is implemented in the Unity 3D engine, leveraging the Vuforia AR Software Development Kit for the tracking capability of the system. Exploiting Vuforia Virtual Buttons, we implement a Tangible User Interface (TUI). Virtual Buttons provide a useful mechanism for making the target interactive: once the target image is occluded by the real ingredient, the Virtual Button generates a canvas that contains information about that ingredient (see Figs. 2 and 3). In order to achieve visual coherence between the kitchen machine and the virtual information shown to the user, the kitchen machine is continuously tracked. The kitchen tools (see Fig. 4) are tracked with a marker that augments virtual information useful in preparing and handling the ingredients.
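A minimal sketch of how such a Virtual Button can drive the ingredient canvas is given below, assuming the classic Vuforia Engine API for Unity (VirtualButtonBehaviour and IVirtualButtonEventHandler); these method names have changed across Vuforia versions, and the canvas object is a hypothetical placeholder rather than the demo’s actual asset.

```csharp
// Sketch only: showing an ingredient info canvas when its Virtual Button is occluded.
using UnityEngine;
using Vuforia;

public class IngredientVirtualButton : MonoBehaviour, IVirtualButtonEventHandler
{
    [SerializeField] private VirtualButtonBehaviour virtualButton; // button defined on the image target
    [SerializeField] private GameObject ingredientCanvas;          // hypothetical info panel for this ingredient

    private void Start()
    {
        ingredientCanvas.SetActive(false);
        virtualButton.RegisterEventHandler(this);   // older Vuforia API; newer versions use callback registration
    }

    // Fired when the real ingredient occludes the button area on the target image.
    public void OnButtonPressed(VirtualButtonBehaviour vb)
    {
        ingredientCanvas.SetActive(true);
    }

    // Fired when the ingredient is removed from the button area.
    public void OnButtonReleased(VirtualButtonBehaviour vb)
    {
        ingredientCanvas.SetActive(false);
    }
}
```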

Fig. 2. a) Kitchen machine tracked and switched on; b) Preset programs

The AR interface includes a humanoid avatar (embodied conversational agent) that aims to leverage some properties of human communication to make the interface more effective (see Fig. 5).


Fig. 3. a) Empty the Fridge preset program; b) Avatar suggests recipes based on ingredients

Fig. 4. Ingredients handling with the aid of virtual tools: a) Animations; b) Auxiliary lines

The objective of the avatar is to create a feeling of “company” in the physical environment while guiding the user in the recipe execution. Also, virtual assistants may support better learning of cooking procedures [15].

Fig. 5. a) Ingredients mixing operation; b) Dish preparation finished and celebration


The interface design is highly spatial, dynamic, and playful, with bright colors, particle systems, 3D animations, sound effects, and voice commands. Spatial gestures and virtual floating icons allow the user to interact with the machine.

6 Conclusions and Future Works

Due to technological advancements, designers need to devise interactive systems that support the way people live and interact. From this perspective, the increasingly widespread AR technology can become a tool for designers. One of the domains that could benefit from AR is household appliances, namely kitchen machines. In this paper, we use an HCD approach to develop new interactive systems with AR technology integration. Furthermore, we present a case study, following the HCD approach, that relates to the development of an AR-enabled kitchen machine to guide the user in the cooking activity. To complete the last step of HCD (i.e., evaluation), we planned a user study that aims to evaluate:
– the user’s experience, through a User Experience Questionnaire (UEQ);
– the cognitive load related to the recipe preparation task, through the NASA Task Load Index (NASA-TLX);
– the usability of the AR system, through the System Usability Scale (SUS);
– the error rate during the recipe preparation.
Although the user study is still ongoing, this paper demonstrates the feasibility of integrating AR applications in a specific domain (e.g., household appliances) by exploiting the HCD approach.

References

1. Preece, J., Rogers, Y., Sharp, H.: Interaction design - beyond human-computer interaction. Ubiquity 2002, 2 (2002)
2. Weiser, M.: The computer for the 21st century. Sci. Am. 265, 94–104 (1991)
3. Samardžija, A.Ć.: Mobile augmented reality interactive systems for urban tourism. In: Central European Conference on Information and Intelligent Systems, p. 129 (2015)
4. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. Inf. Syst. 77, 1321–1329 (1994)
5. Chen, S.C., Duh, H.: Mixed reality in education: recent developments and future trends. In: Proceedings - IEEE 18th International Conference on Advanced Learning Technologies, ICALT 2018, pp. 367–371 (2018). https://doi.org/10.1109/ICALT.2018.00092
6. Ceccacci, S., Germani, M., Mengoni, M.: User centred approach for home environment designing. In: ACM International Conference Proceeding Series (2012). https://doi.org/10.1145/2413097.2413136
7. Azuma, R.T.: A survey of augmented reality. Presence Teleoperators Virtual Environ. 6, 355–385 (1997)
8. Evangelista, A., Ardito, L., Boccaccio, A., Fiorentino, M., Messeni Petruzzelli, A., Uva, A.E.: Unveiling the technological trends of augmented reality: a patent analysis. Comput. Ind. 118, 103221 (2020)


9. Schmalstieg, D., Hollerer, T.: Augmented Reality: Principles and Practice. Addison-Wesley Professional, New York (2016)
10. Sato, A., Watanabe, K., Rekimoto, J.: Shadow cooking: situated guidance for a fluid cooking experience. In: Stephanidis, C., Antona, M. (eds.) UAHCI 2014. LNCS, vol. 8515, pp. 558–566. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-07446-7_54
11. Kellogg, A.W., Carroll, M.J., Richards, J.T.: Making Reality a Cyberspace, Cyberspace: First Steps. MIT Press, Cambridge (1991)
12. Gausemeier, J., Fründ, J., Matysczok, C., Mnich, G., Fircks, A.V.: AR-based configuration and information retrieval of household appliances on mobile devices. In: Proceedings of CHINZ 2003: The 4th Annual Conference of the ACM Special Interest Group on Computer-Human Interaction New Zealand Chapter, pp. 93–98 (2003). https://doi.org/10.1145/2331829.2331846
13. Mahroo, A., Greci, L., Sacco, M.: HoloHome: an augmented reality framework to manage the smart home. In: De Paolis, L.T., Bourdot, P. (eds.) AVR 2019. LNCS, vol. 11614, pp. 137–145. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25999-0_12
14. Walczak, K., et al.: Virtual and augmented reality for configuring, promoting and servicing household appliances. In: Trojanowska, J., Ciszak, O., Machado, J.M., Pavlenko, I. (eds.) MANUFACTURING 2019. LNME, pp. 368–380. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-18715-6_31
15. Ergün, S., Karadeniz, A., Tanrıseven, S., Simsek, I.Y.: AR-supported induction cooker AR-SI: one step before the food robot. In: 2020 IEEE International Conference on Human-Machine Systems (ICHMS), pp. 1–5. IEEE (2020)
16. Gattullo, M., Evangelista, A., Uva, A.E., Fiorentino, M., Boccaccio, A., Manghisi, V.M.: Exploiting augmented reality to enhance piping and instrumentation diagrams for information retrieval tasks in Industry 4.0 maintenance. In: Bourdot, P., Interrante, V., Nedel, L., Magnenat-Thalmann, N., Zachmann, G. (eds.) EuroVR 2019. LNCS, vol. 11883, pp. 170–180. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31908-3_11
17. ISO/TC 159/SC 4: ISO 9241-210:2019 - Ergonomics of human-system interaction - Part 210: Human-centred design for interactive systems. https://www.iso.org/standard/77520.html. Accessed 13 Feb 2022
18. Gattullo, M., et al.: A context-aware technical information manager for presentation in augmented reality. In: 26th IEEE Conference on Virtual Reality and 3D User Interfaces, VR 2019 - Proceedings, pp. 939–940 (2019). https://doi.org/10.1109/VR.2019.8798262
19. Baxter, K., Courage, C., Caine, K.: Understanding Your Users: A Practical Guide to User Research Methods. Morgan Kaufmann, Burlington (2015)


20. Van der Lelie, C.: The value of storyboards in the product design process. Pers. Ubiquit. Comput. 10, 159–162 (2006). https://doi.org/10.1007/s00779-005-0026-7
21. Bodker, S.: Through the Interface: A Human Activity Approach to User Interface Design. CRC Press, Boca Raton (2021)

Performance Evaluation of an Immersive Measurement Instrument for Automotive Field Applications

Michele Iaquinandi, Carlotta Fontana(B), Iolanda Fiorillo, Alessandro Naddeo, and Nicola Cappetti

University of Salerno, 84084 Fisciano, SA, Italy
[email protected]

Abstract. The development of cutting-edge technologies in the industrial sector has led to the demand for increasingly specific tools for the optimization of efficiency problems. The automotive sector is the one that makes the most use of these technologies, such as, among others, Virtual Reality (VR) and Augmented Reality (AR). A virtual reality tool can act as a guide for the user, allowing the key information for optimizing the process to be extracted as quickly as possible. This research work fits into this context; its goal was the implementation of an immersive platform for carrying out accurate measurements within an entirely virtual automotive environment. A VR system of this type allows the dimensional and shape tolerances of car components to be checked, performing measurements with high precision and in real time in a custom-made virtual environment, in which it is possible to simulate the presence of a myriad of components and test their mutual interaction. A comparative test was carried out by varying the graphical and geometrical model in the VR settings, in order to evaluate the level of usability and the degree of efficiency of a tool for measuring distances between objects in a virtual reality environment, depending on the system parameters. The validation of a measurement instrument in VR is part of an increasingly current technological context, in which the need to optimize the time-cost curve embraces the need for increasingly accurate results.

Keywords: Virtual reality · Tolerance measurements · Assembly training

1 Introduction

Design and product development planning is the crucial phase in the creation of a new product [1] and, recently, technological systems have evolved to reduce planning times and costs. This evolutionary step has been evident in products that require huge expenses for prototyping, simulations, and verifications. This category is mainly occupied by the automotive sector, where technological development has allowed a competitive advantage over the competition [2]. During the construction of a new vehicle, physical prototypes are always essential for performing the tests needed to verify performances and perceived quality. Although the importance of physical prototyping is indisputable,
its main disadvantage lies in the capital invested in making prototypes, which require large amounts of money in stages that precede the optimization of mass production costs. Furthermore, the time to market lengthens as the number of physical prototypes increases, so there is a risk of favoring competitors in the same market. For these reasons, a solution has long been sought to meet the needs of manufacturers, so the number of industrial realities moving towards the use of virtual and augmented reality (VR/AR) is rapidly growing [3]. On the one hand, Virtual Reality allows a physical scenario to be reproduced in an entirely virtual environment, customizing the level of interactivity and user involvement. Augmented Reality, on the other hand, permits increasing the level of detail of the surrounding environment, filling the physical space with additional information that can be obtained in real time by the user. In fact, these technologies allow evaluations to be carried out, in terms of performances and perceived quality, limiting costs and exploring many alternative solutions in a reasonable time. Although virtual reality tools were initially made available exclusively to the videogame sector, their potential in the industrial sectors was immediately understood [4]. A reduction in the costs necessary for the implementation of these new systems has had an impact on the advertising used by automotive companies: they began to create eye-catching advertisements, then to show the finished product to the user in the pre-production stages, and finally to the designer for evaluations and verifications. To date, these tools are also widely used for simulations and, in automotive companies, they are supporting research for the development of fully autonomous driving cars [5–7]. VR/AR technologies have undergone an exponential evolution, to the point of becoming more and more realistic. The prototypes developed in virtual reality can faithfully reproduce the physical ones, offering the possibility of making very accurate assessments, in particular on the quality perceived by the customer. The context developed by Industry 4.0 [8] is marked by the development of technologically advanced tools, which guide the operator in the components’ assembly and maintenance phases [9–11]. A virtual reality environment, in fact, can be designed to reproduce and display each component to be examined, allowing the simultaneous interaction of multiple elements in the same scene to be analyzed. This makes it possible to optimize the analysis of the performance of multiple components associated with the same car, giving the user the opportunity to interact with them and extract key information to move on to the next phase of the simulation or production cycle. Nonetheless, the frequent use of immersive VR platforms for managing the levels of usability perceived by the user, in the final stages of driving tests, has been reported [12–14]. VR platforms and VR/AR experiences are increasingly used during the development of new products in the automotive sector, thanks to their ability to replace, at least in the preliminary phase, the use of physical models, and to deliver a marked reduction in development times. Several areas of application are being developed along the product development plan of a car but, although their potential is high, it is still difficult to use them to perform accurate dimensional assessments.
A car shown in an immersive virtual environment presents defects that are difficult to perceive on a computer desktop and that, furthermore, remain difficult to analyze. A VR feature allowing measurements to be carried out with high precision can therefore be a highly efficient tool for designers and engineers during VR sessions, both in the design and in the quality assessment phase.


For this reason, the aim of this work was to evaluate the performance exhibited by an immersive measurement system developed in a virtual environment, in terms of computational and graphic performances, by testing different scene setting configurations. To do this, three different measurement algorithms were tested, with the aim of achieving the most accurate measured dimension value compared with the original CAD models. In order to evaluate the performance response of this tool, three different configurations of the same geometry, obtained by varying the number of triangles of the original mesh, were also placed in the immersive scene, to compare the response efficiency of the virtual immersive platform both in terms of measurement accuracy, proportional to the number of triangles, and in terms of usability perceived through the user interaction.

2 Methods

The immersive platform for the detection of measurements in the Virtual Reality environment was defined using the Unreal Engine game engine, widely used in the automotive sector for its intuitive management of nodal connections in command programming [15, 16]. The system was tested using the Oculus Rift S hardware device and an Alienware Aurora R8 with an Intel® Core™ i7-9700 CPU and an NVIDIA GeForce RTX 2060 GPU. The realization of a truly immersive VR experience requires the precise simulation of the environment, including graphics, audio, and behavioral interaction towards the user. The simultaneous presence of complex mathematical models within the same scene level, associated with the possibility of recalling some of their properties in real time (in terms, for example, of dimensional and/or spatial information), can also drastically affect rendering performance and, more generally, the overall quality perceived during the interaction. To avoid this problem, it was therefore necessary to identify the possible scenarios for carrying out a measurement within the graphic scene; once the representative scenarios were defined in terms of the thickening of the mesh models, their graphical and interaction performances were evaluated by considering the variation of the frame rate, an index of the computational limitations of the system [17], and of the “Pixel per display per pixel override”, a parameter that defines the resolution quality of the Oculus Rift S device used for immersion in the VR environment. The different performance results were compared to establish the best configuration in terms of usability experience and accuracy of performance.

2.1 Analysis of Scene Geometries

The creation of a virtual scene always starts with the import of the CAD models into the VR environment. During the upload of the geometries into the VR scene, the user often needs to manage an intermediate step, between the CAD software used for modeling the geometries and the VR software used for inserting the models into the graphic scene, whose aim is to check each model’s mathematical representation and transform it into a triangulated one, performing closing and cleaning operations in order to make it suitable for positioning and managing in the VR scene. The most widely used format for importing geometries into the Unreal environment is .FBX (FilmBox), a proprietary file format that provides
interoperability between digital content creation applications. Blender has been used as interface software: thanks to its wide versatility, it allows an easy and fast interaction with Unreal Engine, also through the use of some add-ons dedicated to the interaction with the graphic engine. The first step involved the extraction of the real dimensions, obtained from the original CAD model, to be used as reference measures in evaluating the final performance of the immersive measuring platform.

From Parametric Model to Mesh Format
A car-component element, in the form of a dashboard geometry, was used to set the immersive scene configuration and to evaluate the system performance results. In particular, the original CAD geometry model, a .STEP file made up of 33 subcomponents, underwent a triangulation procedure for mesh conversion, progressively increasing the number of triangles, in order to obtain three models of the same geometry characterized by different levels of mesh tessellation. Table 1 shows the resulting mesh model configurations.

Table 1. Mesh configuration models for the dashboard element.

Dashboard  | Vertices | Edges    | Faces    | Triangles
Dashboard1 | 499222   | 1498044  | 998686   | 998686
Dashboard2 | 1997266  | 5992146  | 3994744  | 3994744
Dashboard3 | 7989412  | 23968524 | 15978976 | 15978976

2.2 Measurement Algorithms

Once the tessellation configurations for each model were generated, they were loaded into a scene layer to be placed in Unreal Engine. Three procedures for performing measurements in the immersive environment were tested to carry out the comparative analysis; in particular, three different access methods to the topological information of the scene geometries were used:
1) The default geometry collision methods, in their simple and complex versions. The simple one automatically generates a collision volume that entirely encloses the geometry under examination, resulting, however, in an overestimation of the overall volume, with subsequent errors in the measurement calculation between two points selected on the surface of the object (Fig. 1). The complex one uses collision points that lie perfectly on the surface of the collided object, with a higher accuracy in the point selection; nevertheless, precise pointing strictly depends on the user’s maneuvering sensitivity in the exact identification of the point.
2) A method based on a set of “preset points”, named “sockets”, that need to be associated with the tessellated model in the scene geometry (Fig. 2).

1430

M. Iaquinandi et al.

They allow to bypass the problem of the collision boxes, since any point indicated by the user in correspondence with the object considered; although it greatly simplifies the search for a high degree of precision in measuring in a virtual environment, the generation of sockets requires manual intervention, in the drafting of the algorithm, tiring and repetitive, especially in the presence of complex objects that require of a large number of notable points. 3) A method based on the recall of the vertices of the mesh, without having to generate a priori of the noteworthy points (Fig. 3).

Fig. 1. The figure represents the collision point obtained from the intersection between the line trace and the impacted object, in the two collision configurations tested: simple (green line) and complex (red line). As shown, the collision point obtained with the simple configuration lies far from the real object surface, leading to errors in the measurement phase.

Fig. 2. The figure shows socket points applied to a geometry model.

Fig. 3. Representative image of the immersive distance measure from the selection of two vertex mesh groups.

In this third configuration, it is possible to support the multiple selection of a set of geometry vertices within a selection neighborhood, so that the measured distance can be expressed as the maximum, minimum or average value over the selected vertices. On the basis of the limitations, pros and cons of the evaluated methods, the third one was used in the validation test phase, both for the responsiveness of the virtual measurement system when interacting with the different mathematical models loaded in the scene and for the wider range of measurement configurations it provides.

2.3 Test Validation

A comparative test was carried out to evaluate the level of usability and the degree of efficiency of a tool for measuring distances between objects in a virtual reality environment, depending on the system parameters. Three scene levels were inserted within the immersive simulation platform, one for each tessellation level of the models examined, in which the user, equipped with a VR head-mounted display and a control joystick, made a series of object-pointing selections in order to compute and display the result of the distance measurement on the screen. The calculation of the distance involved managing in VR all the single phases of execution (see Table 2), among which of particular interest are: pointing at the object, selecting a first region of points, choosing the desired point, selecting the second group of points and the corresponding desired point and, finally, displaying the measured distance on the screen.
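The distance computation performed in these tests can be summarized algorithmically: two neighborhoods of mesh vertices are collected around the two picked points, and the distance is reported as the minimum, maximum or average over the vertex pairs. The following C# sketch illustrates this logic under stated assumptions; it is an interpretation of the procedure described above, not the authors' Unreal Engine implementation, and the type and member names are illustrative only.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Numerics; // Vector3

// Sketch of the vertex-group measurement: given the two groups of mesh vertices
// selected around the picked points, report the minimum, maximum and average
// point-to-point distance between the two groups.
static class VertexGroupMeasure
{
    public static (float Min, float Max, float Avg) Distance(
        IReadOnlyList<Vector3> groupA, IReadOnlyList<Vector3> groupB)
    {
        if (groupA.Count == 0 || groupB.Count == 0)
            throw new ArgumentException("Both vertex groups must be non-empty.");

        float min = float.MaxValue, max = float.MinValue, sum = 0f;
        int pairs = 0;
        foreach (var a in groupA)
            foreach (var b in groupB)
            {
                float d = Vector3.Distance(a, b);
                if (d < min) min = d;
                if (d > max) max = d;
                sum += d;
                pairs++;
            }
        return (min, max, sum / pairs);
    }

    // Vertices falling inside a spherical "selection neighborhood" around a picked point.
    public static List<Vector3> SelectNeighborhood(
        IEnumerable<Vector3> meshVertices, Vector3 pickedPoint, float radius)
        => meshVertices.Where(v => Vector3.Distance(v, pickedPoint) <= radius).ToList();
}

For example, the minimum value approximates the closest gap between the two selected regions, while the average value is less sensitive to the triangular discretization of the surfaces.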

3 Results

A qualitative evaluation of the performance exhibited by the system in the three simulations was made through the frame rate, i.e. the number of frames displayed per second [fps], recorded for each action performed during the simulation experience. The tests were also repeated by varying the setting of the Oculus "Pixel for Display per Pixel Override" parameter, a decimal number that sets the effective rendering resolution of the VR application; the values used in the tests are 0, 0.5 and 1. The graphs below show the frame-rate values obtained for each of the three mesh types used within the immersive simulation, as the resolution of the VR viewer varies (Fig. 4).
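The paper does not specify how the per-event frame-rate averages of Fig. 4 were logged; one simple possibility, sketched below under that assumption, is to accumulate the frame times between event markers and derive the mean fps for each event (class and member names are hypothetical).

using System;
using System.Collections.Generic;

// Hypothetical per-event frame-rate logger: frame times are accumulated under the
// currently active event label and the average fps is reported at the end of the test.
class EventFpsLogger
{
    private readonly Dictionary<string, (double Seconds, int Frames)> stats = new();
    private string currentEvent = "Event1";

    public void BeginEvent(string name) => currentEvent = name;

    // Call once per rendered frame with the frame time in seconds (e.g. the engine delta time).
    public void OnFrame(double deltaSeconds)
    {
        stats.TryGetValue(currentEvent, out var s);
        stats[currentEvent] = (s.Seconds + deltaSeconds, s.Frames + 1);
    }

    public void Report()
    {
        foreach (var (name, s) in stats)
            Console.WriteLine($"{name}: {s.Frames / s.Seconds:F1} fps on average");
    }
}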

Fig. 4. System performance graphs, in the form of frame rates [fps], for each dashboard implemented on the virtual scene, with dependence on the Pixel for Display resolution parameter.

Table 2. Single phases of execution managed in VR during the measurement task.

Event1     The empty scene
Event2     Rotation of the head towards the object
Event3     Teleportation
Event4     Aiming towards the object
Event5     Generation of the monitor
Event6     Selection of the first area of interest
Event7     The first modification of the region of interest
Event8     Selection of the first point
Event9     Selection and appointment of the new region of interest
Event10    The second modification of the region of interest
Event11    Selection of the second point and display of the distance between the 2 points

The graphs show how increasing the density of the triangle mesh of the scene geometries greatly influences the system performance, degrading the perceived quality through a sudden drop in the frame-rate value. In particular, the frame rate averages around 80 fps in the presence of the geometry with the lowest tessellation and falls to a minimum value of about 40 fps. The trend decreases in proportion to the weight of the geometries and is most evident for Dashboard3. This phenomenon is accentuated as the Pixel for Display parameter, which identifies the degree of resolution set within the VR device, decreases; this parameter is therefore decisive in choosing the usability optimization settings for the virtual reality session. In general, the frame rate assumes a lower value when the user performs the three point-measurement maneuvers; this is attributable to a greater computational effort of the system in drawing on the intrinsic information of the mesh vertices, whose variability also depends on the type of hardware supporting the immersive experience in VR.

4 Conclusions

This research work focuses on the development of an immersive platform for carrying out highly accurate measurements in a virtual reality environment. In particular, the system performance was evaluated in terms of response efficiency and usability in the interaction with the user, by comparing different scenarios obtained by modifying the geometric and resolution parameters of the system. First, the numerical accuracy in measuring the distance between geometric mesh-type elements was verified, comparing the measured value with the real one extracted from the original CAD environment. In this sense, three methods of selecting the elements to be measured have been implemented; the first concerns the selection through the use of a collision volume, which allows, in the complex mode, to select points perfectly adhering to the

surface of the desired solid, allowing a precise measurement between two or more reference elements. The negative aspect concerns the strong sensitivity required of the user in accurately identifying the area of interest, which is entrusted to the correct direction of the arm in the pointing area and is not infrequently affected by human error. A second method concerns the use of reference points associated with the geometry, for the identification of specific topological features of interest in the measurement. A last method, finally, recalls in the scene, in real time, the identification points of the mesh vertices. This latter method turns out to be the most intuitive and effective in identifying specific points of geometric interest, while producing accurate results despite the inevitable triangular discretization of the geometries in mesh format. Greater measurement precision is inevitably associated with a higher number of triangles for each scene model, although this greater tessellation comes with an additional computational effort. To verify the response of the virtual reality system, various evaluative tests were conducted, varying the triangle density of the scene geometries together with a resolution parameter of the Oculus Rift S device used to display the virtual scene. The performance of the system was considered in terms of frame rate, an index identifying the computational response speed. A comparative analysis of the results showed a strong sensitivity of the system response to the number of triangles of each model, a parameter, therefore, to be considered in the design of immersive platforms of this type. An immediate decline in performance is encountered, as expected, during the execution of the main steps needed to obtain the desired measurement, during which the system requires a real-time computational effort. Finally, the Pixel for Display parameter is also identified as a parameter characterizing the response quality of a VR system. It should be noted that the results of the foregoing tests depend on the hardware system utilized; by changing it, with improved GPU and CPU characteristics, a higher performance response of the immersive tool is to be expected. The virtual measurement platform thus described is placed in the technological context of Industry 4.0, in which the need for an optimization of industrial processes is increasingly demanding, aimed not only at minimizing costs but also at improving the usability experienced by the technical staff, in particular for those sectors of industrial production, such as the assembly-disassembly of components and industrial maintenance, where attention to detail is required, taking into account the simultaneous presence of a multiplicity of elements. In this sense, an immersive virtual measurement platform is able to provide, at the same time, an overview of all the elements to be considered in the region of interest and a high measurement accuracy even for extremely small details.

References 1. Owens, J.D.: Modeling the new product development process: the value of a product development process model approach as a means for business survival in the 21st century. In: Handbook of Research on Knowledge-Intensive Organizations, pp. 208–227. IGI Global (2009) 2. Peters, S., Chun, J.-H., Lanza, G.: Digitalization of automotive industry – scenarios for future manufacturing. Manuf. Rev. 3, 1–8 (2016)

3. Gavish, N., et al.: Evaluating virtual reality and augmented reality training for industrial maintenance and assembly tasks. Interact. Learn. Environ. 23(6), 778–798 (2015). https:// doi.org/10.1080/10494820.2013.815221 4. Barenberg, S.A.: Augmented reality (AR) & Virtual reality (VR) - a channel for digital transformation in industrialization fostering innovation & entrepreneurship. Int. J. Innovative Technol. Explor. Eng. 8(10), 3228–3236 (2019). https://doi.org/10.35940/ijitee.j1167. 0881019 5. Fereydooni, N.: VROOM-measurement and design research for virtual reality in automated vehicles. In: Adjunct Publication of the 23rd International Conference on Mobile HumanComputer Interaction, September 2021, pp. 1–3 (2021) 6. Becerra, Z.M., Fereydooni, N., Brewster, S., Kun, A.L., McKerral, A., Walker, B.N.: Workshop on virtual reality (VR) in automated vehicles: developing and evaluating metrics to assess VR in the car. In: 12th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, September 2020, pp. 96–98 (2020) 7. Sportillo, D., Paljic, A., Ojeda, L.: Get ready for automated driving using virtual reality. Accid. Anal. Prev. 118, 102–113 (2018) 8. Kumari, R., Saini, K.: Advanced automobile manufacturing: an industry 4.0. In: 2021 8th International Conference on Computing for Sustainable Global Development (INDIACom), March 2021, pp. 899–904. IEEE (2021) 9. Loch, F., Ziegler, U., Vogel-Heuser, B.: Using real-time feedback in a training system for manual procedures. IFAC-PapersOnLine 52(19), 241–246 (2019) 10. Gutierrez, T., Rodriguez, J., Velaz, Y., Casado, S., Suescun, A., Sanchez, E.J.: IMA-VR: a multimodal virtual training system for skills transfer in industrial maintenance and assembly tasks. In: 19th International Symposium in Robot and Human Interactive Communication, September 2010, pp. 428–433. IEEE (2010) 11. Loch, F., Ziegler, U., Vogel-Heuser, B.: Integrating haptic interaction into a virtual training system for manual procedures in industrial environments. IFAC-PapersOnLine 51(11), 60–65 (2018) 12. Gugenheimer, J., Wolf, D., Haas, G., Krebs, S., Rukzio, E.: SwiVRChair: a motorized swivel chair to nudge users’ orientation for 360 degree storytelling in virtual reality. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, May 2016, pp. 1996– 2000 (2016) 13. Penumudi, S.A., Kuppam, V.A., Kim, J.H., Hwang, J.: The effects of target location on musculoskeletal load, task performance, and subjective discomfort during virtual reality interactions. Appl. Ergon. 84, 103010 (2020) 14. Henriques, A.C., Winkler, I.: The advancement of virtual reality in automotive market research: challenges and opportunities. Appl. Sci. 11(24), 11610 (2021). https://doi.org/10. 3390/app112411610 15. Lindner, P., Miloff, A., Hamilton, W., Carlbring, P.: The potential of consumer-targeted virtual reality relaxation applications: descriptive usage, uptake and application performance statistics for a first-generation application. Front. Psychol. 10, 132 (2019) 16. Kloiber, S., Schinko, C., Settgast, V., Weinzerl, M., Schreck, T., Preiner, R.: Integrating assembly process design and VR-based evaluation using the unreal engine. In: VISIGRAPP (1: GRAPP), February 2020 pp. 271–278 (2020) 17. Merenda, C., Suga, C., Gabbard, J., Misu, T.: Effects of vehicle simulation visual fidelity on assessing driver performance and behavior. In: 2019 IEEE Intelligent Vehicles Symposium (IV), June 2019, pp. 1679–1686. IEEE (2019)

Virtual Golden Zone for Enhancing the Ergonomics of Complex Production Lines

Lucilla Dammacco1(B), Raffaele Carli1, Michele Gattullo2, Vito Lazazzera3, Michele Fiorentino2, and Mariagrazia Dotoli1

1 Department of Electrical and Information Engineering, Polytechnic of Bari, Bari 70125, Italy
[email protected]
2 Department of Mechanics, Mathematics and Management, Polytechnic of Bari, Bari 70125, Italy
3 Masmec SpA, Modugno 70026, Italy

Abstract. For the sake of being competitive in an ever-changing market, industrial companies need a redefinition of the traditional design and integration of parts, equipment, and services; such a redefinition allows effectively addressing the interaction between machines and operators, particularly in the area of complex production lines. In this context, enhancing ergonomics is crucial to reduce fatigue and stress of workers and increase workplace efficiency and comfort. Moreover, identifying ergonomic flaws in three-dimensional human-machine design problems (e.g., body posture, reach, visibility) at an early stage of the engineering process allows these issues to be prevented at a low cost. Virtual reality (VR) is emerging as a powerful tool to improve the ergonomic assessment in the design of complex production lines. However, VR is not yet a well-consolidated practice for industrial companies, and the state-of-the-art applications are limited to simplified, isolated, and customized experiments. This work proposes the use of a virtual golden zone (VGZ) as a standard and efficient VR method for the ergonomic analysis and optimization of operator activities in manual manufacturing stations. The resulting effectiveness and benefits are highlighted through the application of the approach to a real industrial case study. Finally, the outcomes of a usability questionnaire, compiled by the professionals involved in the VR reviews, are presented to evaluate the usability of the VGZ methodology in the design process of complex production lines.

Keywords: Virtual reality · Ergonomics · Complex production lines · Human computer interaction

1 Introduction

Nowadays, the concept of Industry 4.0 is leading to the so-called smart factory, where industrial automation is combined with new digital technologies to

improve working conditions and foster the collaboration between all components of production: business models [14], design and development environments [22], manufacturing systems [5], equipment and machines [3], supply chain [6], logistics [2], and services [12]. This transformation provides immense opportunities for the success of manufacturing, allowing customization of products, short operation cycle times, quick delivery times, faster time to market, and improvement of workers’ safety and ergonomics [19]. Virtual reality (VR) is one of the key technologies that is boosting the digital transformation [16], especially since it is contributing to effectively addressing the interaction between machines and operators in the area of complex production lines [10,11]. In fact, VR is emerging as a powerful tool to optimize the product design process, which enables users to explore multiple facets of a product before committing to any form of physical production [7]. Moreover, VR tools can integrate different technologies that allow users to interact with the virtual environment representing the real one in a multi-sensory way, such as space and body perception [15]. Nevertheless, there is a lack of understanding on how companies implement the VR technologies, especially with regard to the ergonomic assessment. In fact, VR is not a well-consolidated practice for industrial companies yet: no standard implementation exists in the state-of-the-art, whilst practical applications only aim at conducting simplified, isolated, and customized experiments, such as the comparison with computer aided design (CAD) software and the implementation of complex ergonomic methodology. Usually, the classic review approach to the design issues and critical phases is performed directly on a PC with CAD software support, which is not always able to bring out the defects on a two-dimensional (2D) screen, thus creating the risk of losing dimension and scales, particularly important in the ergonomic evaluation [4]. Differently from the related literature, this work is focused on the use of VR as a tool supporting the CAD software, for the design of complex production lines, discussing the benefits offered to the professionals for intuitive ergonomic evaluation (see Fig. 5). In particular, a methodology based on the novel concept of virtual golden zone (VGZ) is introduced and applied in reference to a real company case study, highlighting the advantages of quickly identifying all the ergonomic areas for the operator activities by a three-dimensional (3D) model in the VR environment. The presented case study is focused on the design of manual manufacturing machines for a complex production line, showing the features of the VR immersive interaction and space perception in organizing the operator tasks, improving the worker safety, and reducing fatigue and stress. Finally, this work investigates the usability of the VGZ methodology during the virtual reviews integrated in the company development process, through a System Usability Scale (SUS) questionnaire compiled by professionals involved in the design. The paper is structured as follows. Section 2 presents the related literature and the paper positioning within it. The application of the VGZ methodology to the real case study is illustrated in Sect. 3. The evaluation of the VGZ usability is described by analyzing the SUS questionnaire results in Sect. 4. Finally, some concluding remarks are reported in Sect. 5.

2 Related Works and Paper Positioning

The usefulness of VR for the design of complex production lines has been discussed in various studies [8]. In this regard, Wolfartsberger [21] describes the development and evaluation of a VR-based tool to support the engineering design review. In-depth design assessment is the basis of the design review activities [7,17], since experts from different disciplines have to discuss and concur on a design solution while exchanging project information. Indeed, the design phase requires communication between different departments with different knowledge, co-engineering work with customers, and identification of collaborative needs [9]. The added value of this approach lies in remotely sharing the work within a team including geographically distant participants. Moreover, the VR technology is considered to be useful in the design phase and in the CAD project integration due to the CAD software limitation in offering an interactive analysis with the implemented model. In the design phase, the main VR advantage is the ability to anticipate eventual design issues before the actual system production starts. Practically, several companies investigate the use of VR for design flaws, styling reviews, and even mere walk-through applications. Traditionally, however, the assessment of specific aspects such as ergonomics is carried out only once the physical system is realized, when problems can thus obviously be recognized. Instead, VR tools allow the identification of incorrectly designed equipment and ensure the operation ergonomics before the first use. Moreover, completing the ergonomic assessment during the design stage allows problems to be solved at a low cost and without machine modifications [1]. For instance, Peng [18] applies VR tools for ergonomics evaluation and verifies the reachability of door handles within a vehicle. Various methods and tools have been developed to evaluate the ergonomics of worker activities in industrial contexts and their exposure to risk factors. The authors in [20] provide a detailed review and comparison analysis of the most commonly used methods for the ergonomic assessment, such as the RULA (Rapid Upper Limb Assessment) and OCRA (Occupational Repetitive Action Tool) approaches. These methods have different features and consider different aspects such as posture and force. For the sake of enhancing their effectiveness, these methods can be integrated into VR tools. For instance, Haggag et al. [13] describe a framework combining the RULA method for the 3D motion analysis with the Kinect technology: the RULA is associated with a skeleton tracking system and integrated by computer processing. However, these methods are complex and require considerable time, cost, and effort to be implemented and set up in the design phase. Conversely, in the early phase of engineering the manufacturing industry aims at quickly integrating advanced technologies to improve quality, productivity, and effectiveness. In addition, there is not a uniform and optimal method that represents and speeds up the ergonomic assessment using VR. The industrial practice consists in collecting the posture data through subjective observation or picture and video support. From the above discussed literature review, it emerges that an effective and efficient tool is needed to enable expert engineers to quickly check if the

system design is compliant with the ergonomic requirements. To fill this gap, the contribution of this work lies in presenting an efficient and effective VR-based method to be used for the ergonomic evaluation of the production line in the design phase. A real case study shows the practical applicability and usability of the proposed method.

3 Virtual Reality Integration in an Industrial Case Study

In this section we introduce a novel VR method for the ergonomic assessment to be conducted in the design of a complex production line, and we show its application to a real case study related to an automotive company specialising in custom automatic and manual manufacturing systems.

3.1 The Virtual Golden Zone Methodology

The proposed methodology aims at providing designers and engineers with an ergonomics assessment tool that imports and utilizes the golden zone principle in the VR environment. In particular, the method relies on checking the 3D design of production lines (including specific aspects such as the right equipment positioning) with respect to the 3D model of the volume within which the operator is able to ergonomically perform the given tasks. Such a model is denoted as the virtual golden zone (VGZ) and represents the optimal working and picking volume in reference to a specific area of the human body, which ranges from the knee height to the shoulder elevation. The main concept in the use of the VGZ is eliminating the risk of long-term exertion injuries associated with manual material handling processes, thus minimizing the operator activities that require lifting, reaching, bending motions, and strain. The proposed VGZ is divided into three areas: non-working area, occasional work area, and usual work area (as shown in Figs. 1 and 2).

– Non-working area (C-D): this is the external area where picking can be done only by stretching a hand and where the operator cannot perform the task. Moreover, two sub-areas are identified based on the distance from the manufacturing station: sub-area C, where components can be picked by turning the torso, and sub-area D, where components can be picked by walking.
– Occasional work area (B): this is the intermediate area where components can be picked by stretching the arms over the shoulders. It is advisable to use this area only for occasional work and not to place frequently used tools there, since the area is 6 times larger than the work area.
– Usual work area (AA-A): this is the comfortable area where a task can be performed by the operator in the field of view using both hands. It represents the optimal zone for equipment positioning in order to handle components within the anatomical-physiological movement area. The usual work area is generally split into two parts: in sub-area AA, all components are located near the station, in the visual field of view and at a suitable working height; in sub-area A, the components are arranged in an area 3 times larger than the work area and can thus be picked up by stretching the arms and using both hands.

Fig. 1. VGZ for the ergonomic evaluation of the design of a complex production line. The colored areas extending outward from the manikin represent the three VGZ sub-areas related to the operator activities: usual work (AA-A), occasional work (B), and non-working area (C-D).

Fig. 2. Top view of the VR model representing the manual manufacturing station with a woman manikin model and her relative VGZ using the Unreal Engine platform.

The size and positioning of the VGZ are different and configurable for each operator; hence, at least two different working scenarios must be considered, related to a woman and a man with a height of 1.65 m and 1.75 m, respectively. Primarily, the definition of the working height is crucial to effectively address the interaction between operators and production lines. On the one hand, if the working height is too high, the shoulders are often raised to compensate, leading to painful muscle contractions at the neck and back. On the other hand, if the working height is too low, the back is overloaded by the excess curvature of the trunk, giving rise to complaints of back pain (Fig. 2).
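As a rough illustration of how such a check could be automated, the sketch below classifies a pick location into the three VGZ areas from its height and horizontal distance to the operator. The numeric thresholds are placeholders chosen purely for illustration and are not taken from the paper, where the zones are modeled as 3D volumes scaled to each manikin (e.g., 1.65 m or 1.75 m).

// Hedged sketch: zone classification by height and horizontal reach. Threshold values
// are illustrative assumptions; in practice they would derive from the operator's
// anthropometry (knee height, shoulder height, arm reach) and from the 3D VGZ volumes
// used in the VR scene.
static class VgzClassifier
{
    public static string Classify(
        double heightFromFloor, double horizontalDistance,
        double kneeHeight = 0.50, double shoulderHeight = 1.40,
        double usualReach = 0.45, double occasionalReach = 0.65)
    {
        if (heightFromFloor >= kneeHeight && heightFromFloor <= shoulderHeight
            && horizontalDistance <= usualReach)
            return "Usual work area (AA-A)";
        if (horizontalDistance <= occasionalReach)
            return "Occasional work area (B)";   // reachable by stretching the arms over the shoulders
        return "Non-working area (C-D)";          // requires turning the torso (C) or walking (D)
    }
}

A call such as VgzClassifier.Classify(1.10, 0.40) would then return the usual work area, flagging a correctly placed component.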

Fig. 3. From VR to reality: the rear axle shaft machine virtualized by using the Unreal Engine platform (a) versus the corresponding machine assembled in the company plant (b).

Comparing the presented approach with the related literature, we remark that manikin and golden zone (GZ) models are also used in CAD platforms during the design phase for a preliminary ergonomic assessment. However, CAD tools remain largely dedicated to a single user and do not allow collaborative design and immersive perception. Conversely, introducing the VGZ in a VR environment offers a novel means of interaction and enables engineers to view projects from a 3D perspective able to consider all arm and body movements (e.g., stretching, flexion, elevation, abduction, and adduction). Thus, the VGZ provides a thorough understanding of the structure and layout of production lines and allows users to mimic the workers' activities and evaluate the corresponding ergonomic performance.

3.2 The VR System Description

To implement the VGZ methodology in the VR environment of the considered manufacturing station, the Creo Parametric software database and the Unreal Engine VR platform were employed (see Fig. 3), and an Oculus Quest headset was used as hardware. Specifically, the CAD models were built in the Creo Parametric environment, while the scene, triggers, and interaction rules between the user and the virtual environment were developed and coded in the Unreal Engine platform. In order to define the VR model configuration and start the ergonomic evaluation, the interaction tasks were assigned to the Unreal Engine platform using the Blueprint options. The created VR environment was characterized by easy-to-learn interactions for various stakeholders. The goal was indeed to provide a quick and helpful tool for users with a different job background in the automotive sector, without the need for any CAD or computer science knowledge.

Fig. 4. Customized blueprint to activate the visibility action (input-output action) of the manikin and VGZ in the position required by the user via the nodes “set visibility”.

The developed VR system was able to detect users' inputs and modify the virtual world accordingly, while providing feedback with the Oculus controllers. Based on the requirement analysis, the system supported the standard features provided by the Oculus Quest sensors and the Unreal Engine platform, such as looking, walking, touching, and grabbing. Since physically walking in the VR room is as dangerous as moving blindfolded in the real world, the teleportation technique was implemented through the default interaction prefabs of the Unreal Engine setup. To regulate users' navigation, the thumbstick on the controllers was used to activate the teleporter, indicating the desired spot and choosing the direction with an arrow projected on the floor. This mechanism allows the user to move quickly to the chosen point near the machine to be inspected. Moreover, by holding the thumbstick up or down, users walk using locomotion and reach the equipment and tools on board the machine. The locomotion interaction was implemented by creating a Blueprint where location, rotation, and speed options were suitably imposed. In addition, the grabbing interaction was developed, with and/or without the gravity option, by assigning collision shapes to the components. Tracking systems enabled positioning and orienting physical objects over the allowed space in real time. The general operation is as follows: as soon as the Oculus controller enters a construction group's collider, the VR tools trigger specific interactions and the system triggers a short vibration (using actuators integrated into the

Fig. 5. User's view of the manual manufacturing station: focus on gears and operator hands. The semi-transparent coloured volumes correspond to the virtual golden zone. The subplot in the top left corner shows the user experiencing the virtual scene by wearing a virtual reality head-mounted display and grasping the touch controllers.

Oculus controller). Hence, users were able to grab equipment, tools, etc. Further interactions were customized, such as the visibility (input-output action) of the manikin and the VGZ in the required position. The visibility actions were linked to a pressed or released event via the nodes related to the "Set Visibility" function in the Blueprint; consequently, a node is fired every time the corresponding key is pressed/released on the Oculus controller. Indeed, to manage the visibility interaction of the manikin body (both female and male) and of the VGZ, a Blueprint was created to associate the corresponding static meshes with the Oculus controller. Buttons "X" and "Y" on the left-hand controller were employed to show and hide the female and male manikin bodies, respectively, whilst button "A" on the right-hand controller was used to enable the VGZ (as shown in Fig. 4). After the described VR environment set-up, a work team including designers, engineers, and managers conducted the design ergonomic evaluation. Reviews were organized to check the design of a manual manufacturing station of the production line performing different assembly tasks and characterized by large and customized structural dimensions. Initially, reviews were organized to evaluate the machines' structural dimensions. Subsequently, equipment and tools were imported into the VR environment to perform the ergonomic check regarding the worker activities (see Fig. 1). To achieve the correct ergonomics for the operator activities and the final version of the 3D model, 6 reviews lasting from 1 to 2 h were conducted with about 16 professionals (i.e. designers, engineers, managers). In fact, the professionals wearing the VR device first started checking the scene by analyzing the manikin body with the corresponding VGZ

positioned near the workplace. Subsequently, they mimicked the operator activities; in particular, they tested the arm movements to verify whether all machine and station components were in the correct position (see Fig. 5). Each VR review was focused on a different level of the CAD project with the corresponding updates; in particular, the VR sessions were organized to address and solve the following ergonomics issues:

– the gravity rack height and orientation;
– the distance between the operator and the boxes with the components;
– the arm movements for the picking of components;
– the height of the base of the structure;
– the sealant tool positioning/handling;
– the powered driver positioning/handling.

gravity rack height and orientation; distance between operator and boxes with the components; arms movement for the picking of components; height of the base of the structure; sealants tool positioning/handling; powered driver positioning/handling.

Evaluation of the VGZ Usability Through a Questionnaire

For the sake of evaluating the usability of the VGZ methodology and highlighting its advantages with respect to traditional approaches (i.e., using the GZ in CAD software), the professionals involved in the ergonomic assessment of the presented case study were asked to fill out a questionnaire using Google Forms. Apart from personal information requests about gender, age, and company role, the core section of the questionnaire was focused on SUS (System Usability Scale) questions. In particular, both for the traditional GZ and VGZ method, the following 10 questions were included with 5 response options on a 5-point Likert scale (1—Strongly disagree, 2—Disagree, 3—Neither agree nor disagree, 4—Agree, and 5—Strongly agree): 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.

I think that I would like to use this system frequently. I found the system unnecessarily complex. I thought the system was easy to use. I think that I would need the support of a technical person to be able to use this system. I found the various functions in this system were well integrated. I thought there was too much inconsistency in this system. I would imagine that most people would learn to use this system very quickly. I found the system very cumbersome to use. I felt very confident using the system. I needed to learn a lot of things before I could get going with this system.

The questionnaire was completed by 16 participants, of an average age of 36 years ranging from 23 to 56 years old: the age mean and the standard deviation (SD) are 34.68 and SD=8.88, respectively. The gender division was as follows: 4 females, 12 males. First, the work team compiled the SUS questionnaire to evaluate the usability of the GZ in the CAD software. Second, the SUS questionnaire to evaluate the usability of the VGZ was compiled after the work

Fig. 6. SUS scores obtained by the traditional GZ and VGZ methods used in the CAD software and VR environment, respectively. The diagram reports the relative acceptability scale.

team performed the VR reviews, without any debriefing or discussion about the employed methodology. The result of the SUS questionnaire is a score represented by a number in the 0 to 100 range, which measures the overall usability of the system under analysis. Note that the scores related to individual questions are not significant on a stand-alone basis. To calculate the final rating, the responses to all questions are combined in accordance with the following procedure: for questions 1, 3, 5, 7, and 9, the score contribution is determined by subtracting one unit from the scale position; for questions 2, 4, 6, 8, and 10, the contribution is determined by subtracting the scale position from 5; the sum of the above contributions is multiplied by 2.5 to obtain the overall SUS score on a scale out of 100. In addition, the SUS score can be converted into the so-called Acceptability Score, i.e., a scale of four categories: awful (score less than 51), poor (score between 51 and 67), good (score between 68 and 80.3), and excellent (score greater than 80.3). As reported in Fig. 6, the obtained SUS score for the GZ is 68.44 (with SD=10.61), corresponding to the "good" rating, whilst the SUS score for the VGZ is 81.25 (with SD=10.26), corresponding to the "excellent" rating. As a consequence, the outcome is that the perceived usability of the VGZ was higher than the traditional GZ usability. On the one hand, the obtained result confirms that the traditional GZ has made a good contribution during the last years. On the other hand, the result highlights that the VGZ implementation enhances the ergonomic assessment more than the traditional tool.
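The scoring procedure just described can be made concrete with a short sketch. The following C# helper implements the computation as summarized above (odd items contribute scale − 1, even items 5 − scale, the sum multiplied by 2.5), together with the acceptability bands reported in the text; it is an illustrative helper, not code from the study.

using System;

static class SusScore
{
    // responses: the ten answers on the 1-5 Likert scale, in question order (items 1..10).
    public static double Compute(int[] responses)
    {
        if (responses.Length != 10)
            throw new ArgumentException("SUS requires exactly 10 responses.");

        int sum = 0;
        for (int i = 0; i < 10; i++)
        {
            // Odd-numbered items (1, 3, 5, 7, 9): contribution = response - 1.
            // Even-numbered items (2, 4, 6, 8, 10): contribution = 5 - response.
            sum += (i % 2 == 0) ? responses[i] - 1 : 5 - responses[i];
        }
        return sum * 2.5;   // overall score on a 0-100 scale
    }

    // Acceptability bands as reported in the text.
    public static string Acceptability(double score) =>
        score < 51 ? "awful" : score <= 67 ? "poor" : score <= 80.3 ? "good" : "excellent";
}

For example, answering 4 to every odd item and 2 to every even item yields ten contributions of 3, i.e. a sum of 30 and a SUS score of 75, which falls in the "good" band.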

5 Conclusions

Virtual Reality (VR) represents one of the innovative tools that contribute to designing flexible manufacturing systems by leveraging co-engineering approaches. Focusing on the design check of complex production lines, and specifically on the ergonomic assessment of a manufacturing station, this work shows

a novel VR methodology aimed at enhancing the critical design choices affecting the operator ergonomics. The presented method relies on the use of the virtual golden zone (VGZ) to identify the correct and comfortable working area of the operator as well as ensure the safety and efficiency of his/her activities. The effectiveness of the VGZ in providing professionals with an effective tool for ergonomic evaluations is demonstrated through the application to a real industrial case study in the automotive sector. The paper also presents the results of a system usability scale (SUS) questionnaire related to the use of the VGZ as a support tool for CAD software for ergonomic assessment in the design of complex production lines. The obtained results confirmed the effectiveness, efficiency, and high satisfaction experienced by users in using such a VR-based methodology for enhancing the ergonomics of manual manufacturing stations. This work can be considered as a best practice on how industry can take advantage from implementing VR in the design process. However, the presented findings can be considered only as a starting point for the digitalization of manufacturing companies. For instance, the VR use can be extended to the entire development process for a complete ergonomic evaluation before the production phase. Moreover, having established that the VGZ can enhance the ergonomic assessment of complex production lines, future works will investigate improving the human interaction with the VR environment by using a glove for haptic feedback. The connection between the head-mounted device, VR software, and glove can allow the user to measure the wrist rotation and movement of each finger and further enhance the ergonomics of operator activities.

References 1. Ahmed, S., Irshad, L., Demirel, H.O., Tumer, I.Y.: A Comparison Between Virtual Reality and Digital Human Modeling for Proactive Ergonomic Design. In: Duffy, V.G. (ed.) HCII 2019. LNCS, vol. 11581, pp. 3–21. Springer, Cham (2019). https:// doi.org/10.1007/978-3-030-22216-1 1 2. Boenzi, F., Digiesi, S., Facchini, F., Mossa, G., Mummolo, G.: Sustainable warehouse logistics: a nip model for non-road vehicles and storage configuration selection. In: Proceedings of the XX Summer School Operational Excellence Experience “Francesco Turco - Operational, Excellence, Experiences” (2015) 3. Bozza, A., Cavone, G., Carli, R., Mazzoccoli, L., Dotoli, M.: An MPC-based approach for the feedback control of the cold sheet metal forming process. In: 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), pp. 286–291. IEEE (2021) 4. Bullinger, H., Richter, M., Seidel, K.A.: Virtual assembly planning. Hum. Factors Ergon. Manuf. Serv. Ind. 10(3), 331–341 (2000) 5. Cavone, G., Dotoli, M., Epicoco, N., Franceschelli, M., Seatzu, C.: Hybrid petri nets to re-design low-automated production processes: the case study of a Sardinian bakery. IFAC-PapersOnLine 51(7), 265–270 (2018) 6. Cavone, G., Dotoli, M., Epicoco, N., Morelli, D., Seatzu, C.: Design of modern supply chain networks using fuzzy bargaining game and data envelopment analysis. IEEE Trans. Autom. Sci. Eng. 17(3), 1221–1236 (2020)

7. Chandrasegaran, S.K., et al.: The evolution, challenges, and future of knowledge representation in product design systems. Comput. Aided Des. 45(2), 204–228 (2013) 8. Dangelmaier, W., Fischer, M., Gausemeier, J., Grafe, M., Matysczok, C., Mueck, B.: Virtual and augmented reality support for discrete manufacturing system simulation. Comput. Ind. 56(4), 371–383 (2005) 9. DeChurch, L.A., Mesmer-Magnus, J.R.: The cognitive underpinnings of effective teamwork: a meta-analysis. J. Appl. Psychol. 95(1), 32 (2010) 10. Digiesi, S., Cavallo, D., Lucchese, A., Mummolo, C.: Human cognitive and motor abilities in the aging workforce: an information-based model. Appl. Sci. 10(17), 5958 (2020) 11. Fonseca, L.M.: Industry 4.0 and the digital society: concepts, dimensions and envisioned benefits. In: Proceedings of the International Conference on Business Excellence, vol. 12, pp. 386–397 (2018) 12. Gattullo, M., et al.: Design preferences on industrial augmented reality: a survey with potential technical writers. In: 2020 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), pp. 172–177. IEEE (2020) 13. Haggag, H., Hossny, M., Nahavandi, S., Creighton, D.: Real time ergonomic assessment for assembly operations using kinect. In: 2013 UKSim 15th International Conference on Computer Modelling and Simulation, pp. 495–500. IEEE (2013) 14. Lasi, H., Fettke, P., Kemper, H.-G., Feld, T., Hoffmann, M.: Industry 4.0. Bus. Inf. Syst. Eng. 6(4), 239–242 (2014). https://doi.org/10.1007/s12599-014-0334-4 15. Mitra, S., Acharya, T.: Gesture recognition: a survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 37(3), 311–324 (2007) 16. Mujber, T.S., Szecsi, T., Hashmi, M.S.: Virtual reality applications in manufacturing process simulation. J. Mater. Process. Technol. 155, 1834–1838 (2004) 17. Paes, D., Arantes, E., Irizarry, J.: Immersive environment for improving the understanding of architectural 3d models: comparing user spatial perception between immersive and traditional virtual reality systems. Autom. Constr. 84, 292–303 (2017) 18. Peng, Q.: Virtual reality technology in product design and manufacturing. In: Proceedings of the Canadian Engineering Education Association (CEEA) (2007) 19. Proia, S., Carli, R., Cavone, G., Dotoli, M.: Control techniques for safe, ergonomic, and efficient human-robot collaboration in the digital industry: a survey. IEEE Trans. Autom. Sci. Eng. 19(3), 1798–1819 (2021) 20. Roman-Liu, D.: Comparison of concepts in easy-to-use methods for MSD risk assessment. Appl. Ergon. 45(3), 420–427 (2014) 21. Wolfartsberger, J.: Analyzing the potential of virtual reality for engineering design review. Autom. Constr. 104, 27–37 (2019) 22. Yadav, G., Kumar, A., Luthra, S., Garza-Reyes, J.A., Kumar, V., Batista, L.: A framework to achieve sustainability in manufacturing organisations of developing economies using industry 4.0 technologies’ enablers. Comput. Ind. 122, 103280 (2020)

Development of a Parametric Scene Editor of Serious Games for Safety in Workplaces

Fabrizio Renno1, Stefano Papa1(B), Rosario Polichetti1, Domenico Coccorese1, Raffaele d'Angelo3, Corrado Grasso2, Andrea Tarallo1, and Antonio Lanzotti1

1 University of Naples Federico II, 80125 Naples, Italy

[email protected]

2 SNOP Automotive Srl, 81052 Pignataro Maggiore, Caserta, Italy
3 Direzione regionale INAIL – CONTARP, 80143 Naples, Italy

Abstract. This work is focused on the development of a “serious game”, i.e., a software with no playful purpose adopted to train and verify the skills of users to improve safety in the workplace. Particular attention has been given to the accidents that occur during typical operations done in factory sites, e.g., driving forklifts. Firstly, the different types of “serious games” already available on the market have been analyzed to identify the best way to carry on tests and simulations by means of virtual, mixed, and augmented reality. Afterward, once the best solution has been identified, the Unity development environment has been considered to define a standard that could also be used for future projects. So, the result is the development of a powerful editor of the scene, in which the user can program all the components adjusting the game to the tasks to be performed by the worker, the tools, the environment and the targets. So, an upgradeable “parametric path” has been created, which will be followed by the forklift, realizing a new game and test environment for the worker. Then, modular components have been conceived to contemplate the future developments of the project, including a graphic editor: these will lead to a software that can be easily modified according to the customer’s requests. Keywords: Serious games · Occupational safety · Virtual reality

1 Introduction

In the last decade great importance has been given to "serious games", i.e. software not developed for the entertainment field but, above all, for educational purposes [1–3]. They allow a more dynamic teaching approach than the conventional ones, and innovative results have thus been obtained in the learning process. Today, powerful notebooks and mobile devices (smartphones and tablets) allow everyone to access many contents that can be enhanced thanks to VR/AR technologies and easily used thanks to very user-friendly software. So, the educational processes are influenced by the evolution of Information Technology tools. However, there are many areas in which "serious games" can provide a meaningful contribution. For instance, they

can be used to prevent and/or reduce accidents to workers engaged in many kinds of tasks [4–6]. The Italian National Insurance Institute against accidents at work (INAIL, Istituto Nazionale per l'Assicurazione contro gli Infortuni sul Lavoro) stated that, during the months between January and November 2021, there were 502,458 reports of accidents at work, over 10,000 more than in the same period of 2020. On the other hand, there were 1,116 reports of accidents in the workplace with a fatal outcome, 35 fewer than those registered in the first months of 2020. The agency of the United States Department of Labor (Occupational Safety and Health Administration, OSHA) has estimated that the number of fatal accidents caused by an elevated truck is about 85 per year, while, in the same period, 34,900 accidents cause serious injuries and, finally, 618,000 accidents involving these vehicles are classified as not serious. The Industrial Truck Association has estimated that there are approximately 855,900 forklifts in the United States and, of these, 11% have been involved in serious accidents. The data gathered by OSHA (Table 1) show the percentages relating to fatal accidents involving these vehicles and to the activities most affected by such accidents [7].

Table 1. Causes of fatal accidents in workplaces [7].

As an example, it can be considered that in small companies forklift trucks are often used not by qualified personnel but by workers who do not have adequate preparation for this task. The analysis of these statistics shows that the causes of accidents with a forklift truck are mainly related to:

1. an inexperienced or poorly trained driver, who does not know how to react promptly to any obstacles and/or dangers;
2. an excessive sense of safety: habit and mastery of the vehicle have a detrimental effect, as the worker, confident in his abilities, tends to engage in incorrect behavior;
3. transport of excessive loads that can cause the vehicle to overturn;
4. high cornering speeds, i.e., the trucks do not have high cornering stability and therefore it is necessary to slow down before entering a curve;
5. bad signage in the workplace, i.e., transit areas are not adequately marked;
6. transport of workers on the arms of the vehicle.

This project was born with the intent of remedying a dramatic need: to reduce the high number of fatal accidents occurring in the workplace, which, according to recent research and statistical surveys, are increasingly linked to drivers who have proved to be incorrectly trained to perform this task. So, the main purpose of this work is to describe

the steps followed for the creation of a new serious game based on a simulator aimed at training workers assigned to specific tasks and teaching the user the rules and behaviors to be adopted and respected therein, such as driving forklift trucks, unloading deliveries, stacking boxes for storage and moving pallets around the warehouse inside industrial sites. The videogame is the only medium that allows the user to actively participate in the action and choices. Such a degree of identification, however, is not total as, in everyday life, the user is distracted by many things: therefore, in order to keep the player's attention threshold high, virtual reality has been implemented in the simulator. Furthermore, the second aim was to create a "parametric" and upgradeable text editor to manage the code of the software, in order to allow easy customization and updating of the paths, scenarios and levels of the game. The serious games and the methods used for the research activity are therefore described in Sects. 2 and 3. Afterwards, the case study is deepened. Finally, the results are discussed, outlining the future developments.

2 Serious Games Serious games are tools now used in many field such as, for example, medical, rehabilitation, industrial and so on [1, 2]. In order that the serious game be effective, it has to be characterized by a perfect balance between entertainment, simulation and learning. In addition, the players must be motivated to get involved and challenge each other in order to increase and improve their knowledge. Great attention is paid to level design, that is the development phase of a game in which it is possible to take care of the creation of the various levels of the game itself. In this phase, therefore, the skills of the player and the game mechanics must be evaluated in order to create a correct experience and in order to provide the player with ever new cue. Therefore, the level design phase is one of the most important aspect to consider in the development of a game/simulator. It is important that the game provides visual clues or signals that can guide the player to the completion of the targets: they must always be sure of what to do avoiding running into frustrating situations. It is also needed to provide an experience that allows the user to master the fundamental mechanics, in the shortest possible time. Great attention must also be paid to balancing the learning curve as learning the mechanics too quickly would lead to a sense of boredom for the player. It is also essential to be able to balance the difficulty curve as it is necessary to avoid that at the beginning there are levels or objectives that are too difficult for the player. Therefore, it is necessary to make sure that the game is as stimulating as possible and keeps the player’s attention alive in the long term: frustrating situations must be avoided and the software will have to guide and assist the player. Great attention must also be paid to the reward which becomes the lever to encourage the user to overcome their limits and get involved. There are videogames that address different themes such as bullying, migration, war: these are designed in order to stimulate individual empathy. Serious games, for example, are also used in the medical field: surgical and radiological training are performed on mannequins, animals or patients but, this form of education, however, has limitations: when the operation is carried out on a manikin or an animal, the evaluation of the studentdoctor remains subjective. To overcome these limitations, different simulators have been

developed and, unlike real simulations, the virtual ones allow an accurate evaluation of the student-doctor. These simulators are created with the aim of allowing future surgeons to learn complex operations in a very simple but, above all, guided way: they are led, step by step, through the operations to be performed. The data show that with the help of this simulator, learning takes place in shorter times (compared to what happens with the use of conventional methods) and, also, that medical errors are fewer. The simulator has a guided mode, where the basic notions are provided to the doctor/player, and a free mode, in which the doctor/player must perform an operation, also having to adapt to unforeseen events that may arise during the operation [8]. So, it is easy to understand that the main advantage of these simulators is the possibility of learning by doing, without producing harmful consequences to things or people, resulting in the reduction of accidents that can even be fatal: think of a surgical operation carried out by a trainee. In the industrial context, innovative training methodologies have been developed coupling virtual ergonomics and serious games for improving workers' safety. Compared to traditional procedures, interesting results have been reached in terms of the level of attention and awareness of the operators about their safety [9]. Thanks to the possibility of completely programming the scenario and the events in the game, the presented tool is also powerful for creating situations that are difficult to implement in a real factory environment.

3 Tools and Methods

Virtual Reality makes it possible to simulate a real environment by means of electronic technologies, to the point of giving those who experience it the impression of being really immersed in that environment [10, 11]. The notions of "serious game", virtual, augmented and mixed reality and their software constitute the methodological premise for the concrete realization of the forklift prototype/simulator. For the creation of the simulator, great importance was given to the "level design" phase, so that the game would be stimulating and accessible. After the definition of the playful aspect, the operational aspects for the construction of the forklift simulator were considered. It was decided to use the "Unity" game engine [12], based on the C# programming language, developed by Microsoft and now in its tenth version. The reasons that led to this choice are many and are listed below:

1. free engine available for different devices such as, for example, Mac, Windows, Linux, smartphone and browser;
2. software developed for one platform can be converted and used for others as well, with minimal effort;
3. detailed technical documentation to support the game engine;
4. "community" of users available to share useful information and experiences;
5. availability of a virtual store ("asset store") in which developers can buy three-dimensional models, programming codes, etc.

In particular, points 3, 4 and 5 proved to be essential for the realization of the project as, otherwise, the construction times would have increased considerably.

To define a standard that could also be used for future projects, a powerful editor of the scene has been developed, in which the user can program all the components adjusting the game to the tasks to be performed by the worker, the tools, the environment and the targets.

4 Case Study The methodology conceived was applied to an industrial case study provided by SNOP Automotive Italy Srl located in Pignataro Maggiore (CE), Italy [13]. It is part of the Motor Vehicle Parts Manufacturing Industry. SNOP Automotive Italy Srl produces steel components for automotive industry: steel sheets are processed in presses and the semifinished products are assembled by welding. The production process involves more than 200 employees. SNOP required the creation of a serious game that would prevent and/or minimize accidents in the workplace through the learning of correct behavioral rules by the workers and, more generally, by all the employees who worked in the business context. Specifically, it was asked to facilitate the safe driving of the forklift by the driver, through an exact definition of the driving rules of the aforementioned vehicle. The future trials, that will be conducted at the SNOP facilities, will be carried out in collaboration with the Istituto Nazionale per l’Assicurazione contro gli Infortuni sul Lavoro (INAIL), which is the Italian public non-profit entity safeguarding workers against physical injuries and occupational diseases [14]. So, this work is mainly focused on driving a forklift truck and on the rules to be observed and behaviors to be adopted on such a vehicle. SNOP Automotive Italy Srl suggested the main requirements for the game. Further, it provided the standards and the info needed for the creation of the scenarios and of the events to implement in the code. 4.1 Targets and Game Features The initial choice was the creation of a serious game compatible with multiple platforms and not only with personal computers. It was justified by several preventive assessments. First of all, it meets the economic and organizational needs of small and medium-sized companies which often do not have sufficient funds for the purchase of many personal computers and, for reasons of speeding up the process of production, they are not able to favor the training and education of workers during working hours. Therefore, the creation of a software (serious game) that can be installed on multiple devices and not just personal computers allows to expand the pool of users who have access to education and training and, at the same time, does not affect time and organization of the production process. Secondly, the economic needs of the users of the simulator are also met, in fact they have the opportunity to train and increase their skills, by installing the aforementioned software on any smartphone, capable of providing a videogame experience of a level of content and quality not inferior to that which they would have enjoyed with the use of a personal computer. Finally, it should be noted that such a choice was also dictated by the current and particular historical moment: in recent years, the prices of video cards for personal computers, due to the semiconductor crisis, have increased dramatically. The


The ability to use the software on devices other than a personal computer, in fact, allows this further criticality to be overcome and guarantees training at no cost. Other aspects to be considered are listed below.

Realization of a Stimulating Serious Game. It is necessary to balance the level of difficulty of the game carefully, as excessive difficulty or simplicity would lead the user to abandon the game out of frustration or disinterest. Then, as the learning process is the fundamental aspect to preserve, it was necessary to define the tools useful to report the error messages in order to make them clear and intelligible: the user, driving the forklift in the simulator, must be warned of the incorrect actions that have been accomplished. Notification alone, however, is not enough: a way must be found to discourage the user from taking such actions. Therefore, taking advantage of the psychology of the game, a score has been created which is shown upon completion of the level and which, in the future, will be viewable in an online database. This choice was made to "entice" the workers/players to confront and challenge each other in order to overcome the score obtained by their colleagues: such a logic, connected to the deduction of points following incorrect actions (and, conversely, the attribution of points for correct actions), will encourage workers/players to improve their "driving style" in order to increase their score. The advantage linked to the game logic appears clear: a virtuous circle is created in which the player/worker will be led to learn more and more in order to "overcome" the opponent/colleague's score at each level.

Creation of a Serious Game that is Easy to Learn. In order to improve the assimilation of the notions and rules that the simulator aims to transmit, it is necessary to convey the messages in a clear and intelligible way by means of a direct identification of the problem/event and a simple explanation of the relative solution.

Realization of a Serious Game that Can be Customized by the User. Workers and employees in any job have the ability to generate their own level and make it visible to all other subjects. In order to involve as many people as possible, regardless of their specific task, it is necessary to make sure that the game is accessible and understandable even by those who do not have programming skills, by providing everyone with simple tools for the creation of their own "environments".

Realization of a Serious Game Characterized by Different Game Modes. The simulator provides a "free" and a "guided" game mode. The difference between the two is that in the first the error messages will not appear, whereas they will be present and visible in the second.

4.2 Methodological Approach

Figure 1 shows the procedure conceived and developed for the making of the forklift simulator. It starts with the definition of the inputs, the needs and requirements, the design choices and the selection of the appropriate tools. The successive step is the programming phase, i.e. the creation of all the scripts needed for the realization of the simulator. Afterwards, the import of the 3D CAD models and the implementation of the text editor allow the scenario of the serious game to be enriched with all the needed contents according to the Virtual Reality tools.


Fig. 1. Procedure used for the making of the serious game

4.3 Objects, Events and Errors

Everything that is displayed on the screen in the scenario created in the Unity environment is represented by game objects (walls, boxes, forklift, containers, etc.). It is also possible to import complex three-dimensional models keeping the expected behaviors intact. Features and properties of each object can be programmed. Furthermore, it is possible to add events like movements, accelerations/decelerations, collisions and frictions, and to implement gravity in the scene by means of adequate scripts. The forklift can be moved by means of the keyboard or a controller. Therefore, all the scripts needed to make the vehicle dynamics, the scene, and so the game more realistic were implemented. For instance, a specific script was implemented to gradually increase the translation of the forklift up to a maximum value when the key/command associated with the movement was pressed. On the other hand, by releasing this button/command, the vehicle gradually decelerated until it came to a complete stop. Furthermore, another script was implemented to stop the forklift truck when it hit a wall or a column. This script was able to "read" the speed with which the forklift impacted and to return a game over or an error as a result, depending on the greater or lesser extent of the impact. Sometimes, a successive modification of the code was needed to solve some problems and to improve the realism of the result. Then, a suitable game menu was created to choose the game level, to reset the errors made, and to view a guide that provides the player/operator with essential information such as, for example, the game loop, the controllers, etc. In order to make the simulation very realistic, in addition to the containers and the forklift, other items have been implemented with their peculiar behaviors, such as stop object, crossing delimiter, worker, forklift spawner, obstacles and paths (Fig. 2).


Fig. 2. The modular components
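As an illustration of the movement and impact scripts described above, the following Unity C# sketch reproduces the two behaviors: a forklift whose speed ramps up while the movement key is held and ramps down when it is released, and an impact check that distinguishes a minor error from a game over according to the collision speed. It is a minimal, hypothetical reconstruction, not the project's actual code; class names, tags and threshold values are assumptions made for the example.

using UnityEngine;

// Minimal sketch of the forklift movement and impact logic (hypothetical names/values).
public class ForkliftController : MonoBehaviour
{
    public float maxSpeed = 3.0f;        // maximum translation speed [m/s]
    public float acceleration = 1.5f;    // speed gained/lost per second
    public float gameOverSpeed = 2.0f;   // impact speed above which the level ends

    private float currentSpeed;

    void Update()
    {
        // Ramp the speed up while the forward key is held, down when it is released.
        float target = Input.GetKey(KeyCode.W) ? maxSpeed : 0f;
        currentSpeed = Mathf.MoveTowards(currentSpeed, target, acceleration * Time.deltaTime);

        // Translate the forklift along its forward direction.
        transform.Translate(Vector3.forward * currentSpeed * Time.deltaTime);
    }

    void OnCollisionEnter(Collision collision)
    {
        // Only walls and columns are relevant for this check.
        if (!collision.gameObject.CompareTag("Wall") && !collision.gameObject.CompareTag("Column"))
            return;

        // "Read" the impact speed and decide between a simple error and a game over.
        float impactSpeed = collision.relativeVelocity.magnitude;
        if (impactSpeed > gameOverSpeed)
            Debug.Log("Game over: severe impact at " + impactSpeed + " m/s");
        else
            Debug.Log("Error: minor impact, points will be deducted");
    }
}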

Afterwards, in order to collate all the elements presented, a "Game Loop" has been defined: the driver of the forklift simulator is required to transport a container from point A to point B of the playing area respecting the correct driving rules, without taking incorrect or harmful actions. In case of incorrect action, a penalty will be applied and it will affect the final score. Once the targets and the objects were defined, the errors to be programmed within the prototype/simulator had to be identified and analyzed. In a first phase, the following simple cases/events were considered:

1. stop at the stops;
2. avoid getting too close to the workers/employees;
3. do not run over the workers;
4. sound the horn before crossing an intersection;
5. do not impact against walls and columns;
6. do not stray from the designated forklift path.

During the guided mode, upon the occurrence of any incorrect action carried out by the player-driver, an error message will be shown that describes the infringement and deducts points from the offender. It will be possible to export, locally (and, in the future, online), a file containing the offenses committed and the points deducted. The aforementioned data export system will allow the comparison of the scores obtained by different players within the same level.

4.4 The Text Editor

Another aim of the work was the design of a standard and reference model for the creation of a possible graphic editor, i.e. a "parametric" and upgradeable text editor to manage the code of the software in order to allow the easy customization of the paths, the scenarios and the levels of the game. For the definition of the text editor, the JSON (JavaScript Object Notation) format was used. In particular, it is simple to read and write for people ("human friendly") and equally simple for machines. This language can be used to transport data to a remote or local server or to a database in order to make the "levels", i.e. the game environments created by users, easily accessible, without the need to rewrite the code. The JSON file will adhere to the following protocol: it will have to identify the object, its size, position and rotation within the space in which it will be allocated. In addition, to allow the export of the results achieved by the player, the protocol of the JSON file to be respected will be the following: identify the player, record the total score, identify the level, and record a list with all the correct and incorrect actions performed by the player himself. The project has allowed the creation of a text editor, as shown in Fig. 3. A first JSON file was generated considering the SNOP Automotive facilities in Caserta. Starting from several 3D CAD models, the environment was created in the simulator by means of the modular components and the text editor alone.
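To make the protocol described above more concrete, the sketch below shows how the two JSON structures (level description and player results) could be mapped onto serializable C# classes and written with Unity's built-in JsonUtility. The field names and the file path are assumptions made for illustration and do not reproduce the project's actual schema.

using System.Collections.Generic;
using UnityEngine;

// Hypothetical mapping of the level-description protocol: each object is identified
// by a name plus its size, position and rotation in the scene.
[System.Serializable]
public class LevelObject
{
    public string objectName;
    public Vector3 size;
    public Vector3 position;
    public Vector3 rotation;
}

[System.Serializable]
public class LevelDescription
{
    public string levelName;
    public List<LevelObject> objects = new List<LevelObject>();
}

// Hypothetical mapping of the results protocol: player, total score, level,
// and the list of correct/incorrect actions performed.
[System.Serializable]
public class PlayerResults
{
    public string playerId;
    public int totalScore;
    public string levelName;
    public List<string> actions = new List<string>();
}

public static class LevelIO
{
    // Serialize a level description to JSON and save it locally.
    public static void Save(LevelDescription level, string fileName)
    {
        string json = JsonUtility.ToJson(level, true);
        string path = System.IO.Path.Combine(Application.persistentDataPath, fileName);
        System.IO.File.WriteAllText(path, json);
    }

    // Load a level back from a JSON file so it can be rebuilt from the modular components.
    public static LevelDescription Load(string path)
    {
        return JsonUtility.FromJson<LevelDescription>(System.IO.File.ReadAllText(path));
    }
}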


Fig. 3. The text editor allows the instantiation of the generated modular components.

Similarly, different players will be able to create their own customized game environments and exchange them with each other.

4.5 Requests, Standards and Rules

One of the main aspects of the project to be implemented in the simulator was the need to add to the scene of the serious game some unpredictable situations similar to those that could happen in a real factory. So, the insertion of dynamic events within the simulator was considered. The standards used at the SNOP Automotive Srl facilities provide for many events and behaviors to observe and contemplate; Fig. 4 shows some of them.

Fig. 4. Some of the events and behavior to observe and contemplate.

At present, the following events were considered and implemented:

1. obstacles along the roadway;
2. a worker who suddenly invades the carriageway: it is necessary to notify the worker of the presence of the forklift by activating the acoustic warning device;


3. a forklift that meets, on the same road, the one driven by the player: it is necessary to notify the driver of the other forklift by activating the acoustic warning device;
4. an object that falls from above and invades the roadway.

So, the creation of several scripts allowed the text editor to be upgraded in order to satisfy all the requirements imposed by the standards and rules. Specifically, when the forklift comes into contact with an object connected to the one that has to fall, it causes the latter to fall. The driver/player must therefore move around the obstacle and not impact it, under penalty of a score deduction. These events have increased the complexity of the simulator as, in addition to the constant attention required of the player, it is also necessary to drive at a moderate speed and avoid sudden maneuvers and/or braking since, thanks to the physics introduced, the container moved by the forklift could fall.
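A possible way to implement the falling-object event described above is sketched here: an invisible trigger volume placed on the forklift path releases a suspended object (by enabling gravity on its rigidbody) as soon as the forklift enters it. Again, this is only an illustrative reconstruction; the tag name and the use of a kinematic rigidbody are assumptions.

using UnityEngine;

// Hypothetical trigger that makes a suspended object fall when the forklift passes by.
// The GameObject holding this script is assumed to have a Collider set as a trigger.
public class FallingObjectTrigger : MonoBehaviour
{
    public Rigidbody objectToDrop;   // the object initially held above the roadway

    void Start()
    {
        // Keep the object suspended until the trigger fires.
        objectToDrop.isKinematic = true;
        objectToDrop.useGravity = false;
    }

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Forklift"))
            return;

        // Release the object so that Unity's physics makes it fall onto the roadway.
        objectToDrop.isKinematic = false;
        objectToDrop.useGravity = true;
    }
}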

5 Results

The serious game for the case study considered in the previous sections was created according to the needs and requirements defined. Figure 5 shows some snapshots of the forklift during the simulation test.

Fig. 5. Snapshots of the forklift simulator scene.

The game was submitted for testing to a group of workers in order to collect the first feedback about the use of the tool: after the initial training on the commands to navigate and interact with the items and the events appearing during the experience, a play session was performed, and the users reported very enthusiastic impressions.


6 Conclusions and Future Works

The serious game based on the simulator of a working forklift truck, equipped with a text editor and supporting Virtual Reality, was created by means of the Unity software. The efficiency of the simulator will be tested in the next months at the SNOP Automotive Srl facilities in order to make a concrete contribution to the reduction of accidents in the workplace by encouraging the culture of "learning to work by playing". One of the strengths of the simulator is that it is fully functional on different devices with different price ranges: this allows not only companies but also players/operators to avoid exorbitant expenses for the purchase of appropriate equipment. In addition, thanks to the great portability of the simulator, players/operators can face and create levels from home. These same game levels will then be easily comparable and modifiable, enabling a virtuous cycle: users, moved by a competitive spirit, will try to improve more and more and to achieve the best score. It should be noted that the realization of the forklift truck simulator can be an opportunity to validate the use of the game, smartphone devices and virtual reality for innovative and alternative purposes. In addition, the inclusion of the text editor in the project has already outlined a future perspective: the possibility of obtaining, by means of a graphic editor, the same results achieved with the text editor but with less effort and in guided mode. More generally, it should be noted that all the components made and/or used for the realization of the forklift simulator can, from a technical and operational point of view, also be used in other projects. Therefore, the realization of the forklift simulator, beyond its "overt function", represents "more abstractly" a moment of reflection, elaboration of elements, experimentation, and redefinition of concepts and their areas of application. The next step will be the trial session, which will allow the evaluation of the quality of the serious game conceived and of its usefulness.

References

1. Abt, C.C.: Serious Games. Viking Press, New York (1970)
2. Chittaro, L., Buttussi, F.: Assessing knowledge retention of an immersive serious game vs. a traditional education method in aviation safety. IEEE Trans. Visual. Comput. Graph. 4(4), 529–538 (2015)
3. Johnson, D., Wiles, J.: Effective affective user interface design in games. Ergonomics 46(13–14), 1332–1345 (2003)
4. Lanzotti, A., Calise, M., Molaro, M., Papa, S., Patalano, S., Renno, F.: Federica's MOOC (Massive Open Online Course): a blended course in engineering drawing at Federico II. Int. J. Interact. Design Manuf. 1–14 (2019)
5. Sacks, R., Perlman, A.: Construction safety training using immersive virtual reality. In: Construction Management and Economics, vol. 31, no. 9, pp. 1005–1007. Taylor and Francis, London (2013)
6. Lanzotti, A., et al.: Interactive tools for safety 4.0: virtual ergonomics and serious games in real working contexts. Ergonomics 63, 1–22 (2019)
7. OSHA. https://www.osha.gov/data. Accessed 01 Feb 2022
8. Elessawy, M., et al.: Evaluation of laparoscopy virtual reality training on the improvement of trainees' surgical skills. Medicina (Kaunas) 57(2), 130 (2021)


9. Lanzotti, A., et al.: Interactive tools for safety 4.0: virtual ergonomics and serious games in tower automotive. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds.) IEA 2018. AISC, vol. 822, pp. 270–280. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-96077-7_28
10. Papa, S., Lanzotti, A., Di Gironimo, G., Balsamo, A.: A new interactive railway virtual simulator for testing preventive safety. In: WIT Transactions on the Built Environment, vol. 181, pp. 367–378. WIT (2018). https://doi.org/10.2495/CR180331. ISSN: 1746-4498
11. Papa, S., Casoria, F., Di Gironimo, G., Micciché, G.: Virtual prototyping and simulation of robotic devices and maintenance procedures for remote handling activities in the access cell of DONES. In: Proceedings of the International Conference on Nuclear Engineering (ICONE), vol. 1. American Society of Mechanical Engineers (ASME), GBR (2018). https://doi.org/10.1115/ICONE26-82390. ISBN: 9784888982566
12. UNITY. http://www.unity.com. Accessed 01 Feb 2022
13. SNOP Automotive Srl. https://www.snop.eu/. Accessed 01 Feb 2022
14. INAIL. http://www.inail.it. Accessed 01 Feb 2022

Design of XR Applications for Museums, Including Technology Maximising Visitors' Experience

Marina Carulli1, Andrea Generosi2, Monica Bordegoni1, and Maura Mengoni2(B)

1 Politecnico di Milano, 20154 Milano, Italy
{marina.carulli,monica.bordegoni}@polimi.it
2 Università Politecnica delle Marche, 60131 Ancona, Italy
[email protected], [email protected]

Abstract. eXtended Reality (XR) technology can enhance the visitors' experience of museums. Given the variety of XR technologies available, which differ in performance, quality of the experience they provide, and cost, it is helpful to refer to evaluations of the various technologies performed through user studies in order to select the most suitable ones. This paper presents a set of empirical studies on XR applications for museums aimed at selecting the appropriate technologies to meet visitors' expectations and maximise their willingness to repeat and recommend the experience. They provide valuable insights for developing Virtual Museum applications that increase the level of presence and realise an experience economy.

Keywords: Virtual museum · User experience · Extended reality · Multisensory experience · Sense of smell

1 Introduction

Modern museums are extending their traditional mission of educating people to a new one: disseminating cultural heritage in a more open, inclusive, and creative way, presenting educational content in an interactive and emotionally involving manner to enlarge the audience and increase ticket sales [1]. Novel interaction and communication technologies can be effectively used to promote exhibitions, drastically change how they are conceived and organised, and reach out to the audience. In the last decade, museums have investigated and invested in digital technologies to attract visitors. Particularly effective are the technologies named eXtended Reality (XR), which comprise Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). In recent years, many virtual museums have been developed [2] to improve traditional collections and exhibits with state-of-the-art digital technologies. Several studies demonstrate the feasibility and efficacy of various systems based on XR technologies for different application areas [2–6].


XR technologies can be non-immersive, semi-immersive and immersive, according to the level of immersiveness they provide [7]. Nowadays, the visualisation systems most used by virtual museums are non-immersive, e.g., PC displays with desktop control devices used for interaction, or two-dimensional multi-touch displays [8]. Semi-immersive systems that exploit visualisation technologies closer to large-screen movie experiences (e.g., stereoscopic Powerwall projection, videowall, holographic projection) are usually preferred, as they accommodate many visitors at the same time [2, 9]. However, such systems typically integrate interactive devices intended for a single user, e.g., handheld devices and gesture interfaces [10], and fully immersive VR/AR systems are rarely used [9, 11]. Because of this wide range of technological alternatives, choosing the most suitable solution to implement experiences that meet visitors' needs and expectations is a problem. However, there is a lack of systematic methodologies to compare possible technological setups and select those characteristics enhancing the visitors' experience when visiting virtual museums and realising an "experience economy" perspective [12].

In recent years, many studies have examined how much XR technologies can improve the users' experience at museums, exhibitions, cultural heritage sites and art galleries by studying the effects of VR and AR applications on the sense of presence and enjoyment [13, 14]. Others, such as He et al. [5], analysed the effect of different information types and augmented immersive scenes on the same qualitative elements of the experience. However, the literature does not report comparative assessments of the various technologies to identify the best installation to meet the museum managers' goals. Comparing existing studies is difficult due to the large variety of installation objectives, technologies, application contexts, cultural heritage sites, and target goals.

In addition, several research works have been carried out in the cultural heritage area using a multisensory approach. In particular, all senses must be aroused convincingly and coherently to design and develop an immersive virtual experience. Specifically, the more consistent the different stimuli perceived through the various senses are, the more the users' perceptual system will recognise virtual objects and environments as real, giving them the feeling of being in front of a real world. More in detail, the integration of different sensory stimuli is significant, as neuroscientific and psychological studies show that perception in humans is always multisensory, using and integrating vision, touch, hearing, and the other senses. Even if this exploration is usually experienced in the real world, the multisensory approach is crucial for XR applications and environments. As reported by [15], using the multisensory approach in this area enormously improves the sense of presence (and, therefore, the perceived immersion), the involvement of users in the virtual environments and, in conclusion, the users' experience.

The paper presents an experimental approach to compare different XR technological setups and then define the technological specifications to effectively realise applications. Specifically, the approach is based on the user-perceived interaction quality and on the visitors' museum experience as defined by Pine & Gilmore [12]. In addition, a literature overview and interviews allowed the research to collect requirements about the features of the museums of the future. The results of these studies have been used to implement an application mixing Virtual/Augmented Reality and multisensory interaction.


2 Technology Assessment for Implementing Virtual Museums

This section summarises the comparative evaluation of Virtual Museum (VM) systems based on XR technologies, conducted by the authors in previous research [16], which will be used to select the most appropriate technologies in specific application contexts. The comparative evaluation refers to three aspects affecting the user interaction with virtual museums:

• the sense of presence experienced by users during the interaction with XR technologies (i.e. presence);
• the quality of the experience in virtual museums (i.e. experience economy);
• the users' attitude towards the experience and their willingness to repeat and recommend it (i.e. attitude toward the experience).

A single-factor within-subject approach is adopted to assess the impact of the VM technological set-up on the three above indicators. The five types of single-factor technological setups for virtual museums that have been considered are the following:

• a 2D desktop system using PC monitors for visualisation and a 3D mouse for interacting and navigating in the virtual museum. This configuration is generally used for web-based VMs;
• a 3D active stereoscopic projection (F10 ASD ZOOM by Projection Design) that frontally projects images and videos on a flexible display (150 DIAMOND; 300 × 225 cm). Users wear special glasses (NuVion) for stereoscopic imaging;
• a fully immersive VR system based on the use of a head-mounted display (HMD, i.e. HTC Vive, with two VIVE base stations);
• a mobile Augmented Reality (AR) system. Users can visualise the virtual reproduction of the artefact on a smartphone with a 2246 × 1080 6.18" FHD+ display by framing the marker with the smartphone camera. The marker used to trigger the application is placed on a pedestal;
• an MR system based on a holographic projection display (i.e., HYPERVSN SOLO L), in which LED lights mounted on a rotating high-speed fan create the illusion of visualising a hologram.

A software application has been developed for each technological system using the Unity 3D development tool and setting each XR technology's display and interaction parameters. The software application supports users in experiencing and interacting with the same cultural contents, but with different modalities. The reproduced archaeological artefacts are the Augusto Capite Velato, the Dagger, and the attic Kylix, currently exhibited in the Museo Archeologico Nazionale delle Marche in Ancona. The digital models of the artefacts are very accurate and have been created using state-of-the-art digitalisation technologies [17]. The users can interact with the virtual museums according to the specific technological set-ups. The desktop application enables the rotation, moving and zooming of the displayed object by using a commercial mouse (e.g. left button to select and drag the object and the wheel to zoom). Figure 1 shows the more immersive set-ups and related applications.
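As a simple illustration of the kind of mouse-based interaction described for the desktop set-up, the Unity C# sketch below rotates the framed artefact while the left button is dragged and zooms the camera with the scroll wheel. It is a generic, hypothetical example of such an interaction scheme, not the application's actual code; parameter values are placeholders.

using UnityEngine;

// Hypothetical mouse interaction for a desktop virtual-museum viewer:
// left-drag rotates the artefact, the scroll wheel zooms the camera.
public class ArtefactMouseInteraction : MonoBehaviour
{
    public Camera viewerCamera;
    public float rotationSpeed = 120f;  // degrees per unit of mouse movement
    public float zoomSpeed = 10f;       // field-of-view change per scroll unit

    void Update()
    {
        // Rotate the artefact around the vertical and horizontal axes while dragging.
        if (Input.GetMouseButton(0))
        {
            float dx = Input.GetAxis("Mouse X");
            float dy = Input.GetAxis("Mouse Y");
            transform.Rotate(Vector3.up, -dx * rotationSpeed * Time.deltaTime, Space.World);
            transform.Rotate(Vector3.right, dy * rotationSpeed * Time.deltaTime, Space.World);
        }

        // Zoom in/out by narrowing or widening the camera field of view.
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        if (scroll != 0f)
        {
            viewerCamera.fieldOfView =
                Mathf.Clamp(viewerCamera.fieldOfView - scroll * zoomSpeed, 20f, 60f);
        }
    }
}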


The second application implementing the stereoscopic projection enables user interaction through a touch-screen console broadly used in museums [11]. The third type of interaction is implemented using the HTC VIVE headset, which tracks the user’s position and head orientation. The embedded camera allows the displayed virtual environment’s view to be adjusted in real-time while users move in the physical space.

Fig. 1. Tested applications for the virtual museum

The AR application exploits the smartphone camera, which frames the marker and allows the user to see the displayed object from any perspective. Vuforia for Unity is used to develop the AR application. Finally, the application for holographic projection was developed in the Unity programming platform. Unfortunately, it does not allow the user to interact with the displayed content, but only visualises a rotating artefact in the holographic supporting case. Every application was developed to increase the perception and enjoyment of the digital cultural artefacts described above [2]. Therefore, the surrounding environment is minimal to focus the user's attention on the digital artefacts, and the artefacts are presented against a black context.


2.1 Evaluation Criteria

Presence. Presence is "the subjective experience of being in one place or environment, even when one is physically situated in another" ([18]: p. 225). Assessing a user's presence judgement is one of the leading usability criteria of virtual systems [19]. However, this definition fits well with fully immersive VR, where the user is completely isolated from the real world [20], while in AR and MR presence is considered as a "perceptual illusion of non-mediation" ([21]: p. 2) or as "the artificiality of simulated objects" ([22]: p. 36), due to the integration of real and virtual elements in the same experience. In all cases, presence is subjective and context-dependent. Specifically, it depends on the user's mental imagery [23] and on the ability to isolate him/her from external events [18]. Presence can be measured by submitting a post-experiment self-report based on standardised questionnaires [24]. In particular, Witmer & Singer's [18] Presence Questionnaire (PQ) is the most widely used one, even if it refers to immersive virtual environments. Many other authors have presented new questionnaires and scales, readjusting Witmer & Singer's PQ to compare the level of presence in environments with different levels of immersion (e.g., [24, 25]).

Experience Economy. "Experience Economy" was coined by Pine & Gilmore [12] to describe the overall set of products and services offered to realise a "memorable experience". It is typical of the service economy. According to this perspective, XR can be seen as a key enabling technology to facilitate the learning experience in museums, provide new entertainment facilities, enhance the aesthetic experience of the artefacts and the whole space, and finally support visitors in escaping reality [14] by immersing themselves in history and art. Very few studies (i.e., [13, 26]) report comparative analyses of virtual museums' performances exploiting XR in an "experience economy" perspective.

Attitude Towards the Experience. Experiencing virtual museums can trigger either positive or negative responses. A positive evaluation can bring visitors to repeat and recommend the experience. Moore [27] defined the Attitude Towards the Experience (ATE) as a measure of the valence of hedonic experiences, i.e., those experiences that are chosen for pleasure and are affective and sensory, as compared with practical, goal-oriented and cognitive experiences [28]. A VM experience falls under the former category. No studies investigating the VM experience in terms of ATE toward XR are reported in the literature.

2.2 Measurement, Data Collection and Analysis

A total of three scales have been used to measure XR performance and compare XR technologies in virtual museums, as described below. They are the result of the above-mentioned theoretical constructs and an adaptation of the reported research works. The first one refers to the measure of Presence (PR) and derives from Yim et al. [29]. Five-point Likert-type items are used, ranging from 1 = "not at all" to 5 = "absolutely".


The second scale captures the quality of the Visitors' Experience (VX) according to the four experience realms proposed by Pine & Gilmore [12]. It consists of 16 statements: 4 related to the education realm (EDU), 4 to entertainment (ENT), 4 to escapism (ESC) and 4 to aesthetics (EST). It includes and extends the Radder & Han [1] and Jung et al. [14] statements. All items are rated from 1 (strongly disagree) to 7 (strongly agree) using 7-point Likert scales. The third scale measures the visitors' Attitude Toward the Experience (ATE) based on Moore's research [27]. It comprises four 7-point Likert-type items with bipolar verbal ends to measure the overall valence, and two statements, rated from 1 (not at all) to 7 (absolutely), to assess the visitors' attitude as an intention to revisit and to recommend.

During the experiment for the multi-technology analysis, participants were asked to experience the three artefacts with the five different XR set-ups. The order of presentation of the technologies and the artefacts was counterbalanced across participants to avoid carry-over effects. The sample consists of 30 volunteer participants (11 females and 19 males), aged between 18 and 56 (Mean = 32, SD = 10.26), without particular visual acuity problems. They were students from the engineering faculty and university personnel. A questionnaire containing the three scales presented above was submitted to all participants, and data were collected and elaborated by adopting repeated-measures ANOVA, which compares means across one or more variables based on repeated observations. SPSS statistics software is used to apply the ANOVA and then test XR effects on the VM experience. Before the experiments, the scales were tested for reliability. The experimental results point out the following aspects:

– Holographic projection exhibits the lowest scores for all three metrics. On the contrary, HMD presents the highest;
– The five XR technologies impact ATE differently, through somewhat different mechanisms. The holographic projection has the highest direct PR effect on ATE, which is more than double the average global effect and almost ten times as much as AR. Stereoscopic PR even impacts ATE, but negatively;
– ATE is mostly explained by VX, either through its direct impact on ATE or through the indirect effect mediated by PR. The only exception regards the holographic-based application, where PR rather than VX unfolds ATE, especially if one considers that its PR path coefficient to VX is the highest absolute value;
– All the tested experiences lack an additional sensory stimulus, that is, smell. This is a challenging issue in XR because it is responsible for triggering the emotional response and can strongly influence both ATE and Experience Economy measures.

The last conclusion requires an additional test of XR set-ups enriched by olfactory displays, in order to verify how much the VX combined with an olfactory display can improve the user experience and contribute to advanced technologies for virtual museums.


3 Importance of Multisensory Experiences in XR Applications

XR applications can be effectively used for museum exhibitions to make them more interactive and immersive and to facilitate visitors in enjoying the cultural contents, the history and the meaning of the art collections. The goal of adopting XR is to expand the target audience to visitors that are not used to cultivating arts and sciences and to fragile people (e.g. children, people with mental disabilities). A multisensory approach to selecting and applying XR allows those persons to better understand concepts that are generally considered too complex. Moreover, the user experience (UX) is nowadays a fundamental element of museums and exhibitions. The UX is closely related to human interaction with objects at the physical, perceptual, and cognitive levels. Consequently, UX is a fundamental element to take into consideration in the design of multisensory experiences, to satisfy all human senses and increase the understanding of the exhibition meanings and the users' engagement [30]. When human senses are simultaneously stimulated, objects are appreciated more intensely. From a neurological perspective, this effect is obtained by integrating the information from the different sensory modalities, which results in a final integrated experience [31]. In recent years, olfactory displays (ODs) have gained attention from the scientific community due to their contribution in arousing memories and emotional responses, also when applied in virtual environments [32]. Smell has paramount importance in generating emotional links between users and products [33], people and living scenarios [34], and humans, because it boasts a direct and immediate connection to the limbic lobe involved in memory and emotional processing [35]. The techniques underlying ODs can be classified into scent generation and scent delivery methods [36]. Although the literature presents some research works on the development and application of effective ODs [35, 37–40], some technical issues remain unsolved. The market has also launched some commercial solutions for XR, such as Vaqso (https://vaqso.com/), Olorama (https://www.olorama.com/), and the Feelreal device (https://feelreal.com/). Maggioni et al. [41] presented a design toolkit for scent generation (i.e., OWidgets) consisting of a graphical user interface and the related software framework.

4 Multisensory Experience for a Virtual Museum

Based on the evaluation of the technologies for implementing Virtual Museums and the considerations about the effectiveness of multisensory effects on interaction, an application based on immersive XR and olfaction has been developed. The application consists of an immersive virtual experience of an installation at the Palazzo dei Diamanti, located in Italy. The exhibition consists of 12 rooms that introduce human moods, such as melancholia and alienation, contemplation and empathy, musical inspiration, communion and harmony, voluptuousness and feral instincts, fusion and ecstasy, solarity and enthusiasm, atmosphere and affection. The designed AR multisensory experience aims at increasing the users' immersion and the sense of presence in the augmented canvas. In addition, it will allow the evaluation of the influence of scents on the experience of the cultural exhibition.


The AR multisensory experience is made of two main parts, both present in every room of the museum to augment sensations and the comprehension of the environment. The first one is the presence of 3D printed objects – columns in big rooms, frames in small spaces – hiding ODs triggered by a light-dependent resistor (LDR). Scents, released by ODs developed by the authors and described in [37], have been carefully selected to create a link with the canvas and augment its perception. For example, the smell in the room about Fusion and Ecstasy is a mix of rose and vanilla. Each scent is a combination of 2 or 3 aromas, according to the balanced creation of the "notes" of a perfume. The 3D printed shapes have been designed according to the room's mood and have been printed with different materials to stimulate tactile interaction (e.g. soft materials are used to differentiate colours). So, every time a visitor touches the column, the scents are sprayed and the sounds are played. The second part of the AR multisensory experience consists of an AR application for tablets or smartphones, in which the most representative paintings of each room are animated. For example, in the Contemplation and Meditation room, for the Ave Maria a trasbordo made by Giovanni Segantini in 1886, the AR application displays the boat moving (Fig. 2). In addition, a recorded voice accompanies the experience, describing the story of the painting and explaining the mood of the room.
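The sketch below illustrates, in Unity C#, how the second part of the experience could be organised: when the AR framework reports that a painting has been recognised, the related animation and the recorded narration start. The method names and the way the recognition event is wired are assumptions made for illustration only; they do not reproduce the actual application or a specific AR SDK API.

using UnityEngine;

// Hypothetical handler for an augmented painting: the AR framework is assumed to call
// OnPaintingRecognized() when the painting (image target) is detected, which starts
// the animation of the canvas and the recorded narration for the room.
public class AugmentedPainting : MonoBehaviour
{
    public Animator paintingAnimator;   // e.g. animates the boat in the canvas
    public AudioSource narrationVoice;  // recorded voice describing the painting

    private bool started;

    // To be wired to the AR framework's "target found" event.
    public void OnPaintingRecognized()
    {
        if (started) return;                  // play the content only once per detection
        started = true;

        paintingAnimator.SetTrigger("Play");  // "Play" is an assumed animator trigger
        narrationVoice.Play();
    }

    // To be wired to the AR framework's "target lost" event.
    public void OnPaintingLost()
    {
        started = false;
        narrationVoice.Pause();
    }
}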

Fig. 2. The two installations at the Palazzo dei Diamanti: on the left, the printed column with the Arduino board for managing the OD; on the right, the “Ave Maria a trasbordo” canvas, animated by AR.

5 Conclusion

Despite the study's limitations in analysing XR technologies, due to the laboratory context of the experiments and the possible differences in visitors' responses in real museums, the results provide helpful insight for developing VR/AR/MR museums. HMD is the best technology to maximise the three measures, arousing visitors' interest and encouraging them to live the experience again.


However, only a single user can exploit it at a time. The adoption of multiple HMDs in museums is expensive, requires complex technological infrastructures and raises numerous logistic issues. When situational constraints require multi-user technologies, stereoscopic projection and mobile AR applications are reasonable solutions in terms of presence and ability to activate an experience economy perspective. At the same time, desktop-based systems are not attractive, and holographic displays are not a suitable technology for the expected interactive museum context. In addition, the use of multisensory technologies could improve the user experience of museum exhibitions. In particular, olfactory stimuli can impact visitors at the subconscious and conscious levels to arouse immersiveness and enhance the users' sense of presence and involvement in the XR experience. Unfortunately, further work must be done to make olfactory displays ready to use for a large audience and integrated into virtual museum setups.

References 1. Radder, L., Han X.: An examination of the museum experience based on Pine and Gilmore’s experience economy realms. J. Appl. Bus. Res. (JABR), 31(2), 455–470 (2015). https://doi. org/10.19030/jabr.v31i2.9129 2. Carrozzino, M., Bergamasco, M.: Beyond virtual museums: experiencing immersive virtual reality in real museums. J. Cult. Herit. 11(4), 452–458 (2010). https://doi.org/10.1016/j.cul her.2010.04.001 3. Noh, Z., Sunar, M.S., Pan, Z.: A review on augmented reality for virtual heritage system. In: Chang, M., Kuo, R., Kinshuk, Chen, G.D., Hirose, M. (eds.) Learning by Playing. Game-based Education System Design and Development. Edutainment 2009 International conference on technologies for E-learning and digital entertainment. Springer, Heidelberg (2009). https:// doi.org/10.1007/978-3-642-03364-3_7 4. Rua, H., Alvito, P.: Living the past: 3D models, virtual reality and game engines as tools for supporting archaeology and the reconstruction of cultural heritage–the case-study of the Roman villa of Casal de Freiria. J. Archaeol. Sci. 38(12), 3296–3308 (2011). https://doi.org/ 10.1016/j.jas.2011.07.015 5. He, Z., Wu, L., Li, R.: When art meets tech: the role of augmented reality in enhancing museum experiences and purchase intentions. Tour. Manage. 68, 127–139 (2018). https://doi. org/10.1016/j.tourman.2018.03.003 6. Leopardi, A., Ceccacci, S., Mengoni, M.: Dynamic projection for the design of an adaptive museum guide. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) Design Tools and Methods in Industrial Engineering. Lecture Notes in Mechanical Engineering, pp. 85–94. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-311 54-4_8 7. Burdea, G.C., Coiffet, P.: Virtual reality technology, John Wiley & Sons (2003) 8. Sylaiou, S., Karoulis, A., Stavropoulos, Y., Patias, P.: Presence-centered assessment of virtual museums’ technologies. DESIDOC J. Libr. Inf. Technol. 28(4), 55–62 (2008) 9. Ch’ng, E., Cai, S., Leow, F.T., Zhang, T.E.: Adoption and use of emerging cultural technologies in China’s museums. J. Cult. Heritage 37, 170–180 (2019). https://doi.org/10.1016/j.culher. 2018.11.016 10. Pietroni, E., Adami, A.: Interacting with virtual reconstructions in museums: the Etruscanning Project. J. Comput. Cult. Heritage (JOCCH) 7(2), 1–29 (2014). https://doi.org/10.1145/261 1375


11. Barbieri, L., Bruno, F., Muzzupappa, M.: Virtual museum system evaluation through user studies. J. Cult. Herit. 26, 101–108 (2017) 12. Pine, B.J., Gilmore, J.H.: Welcome to the experience economy. Harv. Bus. Rev. 76(4), 97–105 (1998) 13. Sylaiou, S., Mania, K., Karoulis, A., White, M.: Exploring the relationship between presence and enjoyment in a virtual museum. Int. J. Hum. Comput Stud. 68(5), 243–253 (2010). https:// doi.org/10.1016/j.ijhcs.2009.11.002 14. Jung, T., tom Dieck, M.C., Lee, H., Chung, N.: Effects of virtual reality and augmented reality on visitor experiences in museum. In: Inversini, A., Schegg, R. (eds.) Information and Communication Technologies in Tourism 2016, pp. 621–635. Springer, Cham (2016). https:// doi.org/10.1007/978-3-319-28231-2_45 15. Marto, A., Melo, M., Gonçalves, A., Bessa, M.: Multisensory augmented reality in cultural heritage: impact of different stimuli on presence. Enjoyment, Knowl. Value Exp. IEEE Access 8, 193744–193756 (2020). https://doi.org/10.1109/ACCESS.2020.3032379 16. Leopardi, A., et al.: X-reality technologies for museums: a comparative evaluation based on presence and visitors experience through user studies. J. Cult. Herit. 47, 188–198 (2021) 17. Mengoni, M., Leopardi A.: An exploratory study on the application of reverse engineering in the field of small archaeological artefacts. Comput.-Aided Des. Appl. 16(6), 1209–1226, (2019). https://doi.org/10.14733/cadaps.2019.1209-1226 18. Witmer, B.G., Singer, M.J.: Measuring presence in virtual environments: a presence questionnaire. Presence 7(3), 225–240 (1998). https://doi.org/10.1162/105474698565686 19. Stanney, K.M.: Virtual environments. In: The human-computer interaction handbook, L. Erlbaum Associates Inc., (2003) 20. Stevens, B., Jerrams-Smith, J., Heathcote, D., Callear, D.: Putting the virtual in-to reality: assessing object-presence with projection-augmented models. Presence: Teleoperators Virt. Environ. 11(1), 79–92 (2002). https://doi.org/10.1162/105474602317343677 21. Lombard, M., Ditton, T.: At the heart of it all: the concept of presence. J. Comput.-Mediated Commun. 3(2), (1997) 22. Lee, K.M.: Presence. Explicated 14(1), 27–50 (2004) 23. Schuemie, M.J., Van Der Straaten, P., Krijn, M., Van Der Mast, C.A.: Research on presence in virtual reality: A survey. Cyberpsychol. Behav. 4(2), 183–201 (2001). https://doi.org/10. 1089/109493101300117884 24. Iachini, T., et al.: The experience of virtual reality: are individual differences in mental imagery associated with sense of presence? Cogn. Process. 20(3), 291–298 (2018). https://doi.org/10. 1007/s10339-018-0897-y 25. Schnack, A., Wright, M.J., Holdershaw, J.L.: Immersive virtual reality technology in a threedimensional virtual simulated store: investigating telepresence and usability. Food Res. Int. 117, 40–49 (2019). https://doi.org/10.1016/j.foodres.2018.01.028 26. Loizides, F., El Kater, A., Terlikas, C., Lanitis, A., Michael, D.: Presenting cypriot cultural heritage in virtual reality: a user evaluation. In: Ioannides, M., Magnenat-Thalmann, N., Fink, E., Žarni´c, R., Yen, A.-Y., Quak, E. (eds.) Digital Heritage Progress in Cultural Heritage: Documentation, Preservation, and Protection. Lecture Notes in Computer Science, vol. 8740, pp. 572–579. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-13695-0_57 27. Moore, S.G.: Some things are better left unsaid: how word of mouth influences the storyteller. J. Consum. Res. 38(6), 1140–1154 (2011). https://doi.org/10.1086/661891 28. 
Dhar, R., Wertenbroch, K.: Consumer choice between hedonic and utilitarian goods. J. Mark. Res. 37(1), 60–71 (2000). https://doi.org/10.1509/jmkr.37.1.60.18718 29. Yim, M.Y.C., Cicchirillo, V.J., Drumwright, M.E.: The impact of stereoscopic threedimensional (3-D) advertising. J. Advert. 41(2), 113–128 (2012). https://doi.org/10.2753/ JOA0091-3367410208


30. Gallace, A., Spence, C.: Multisensory design: reaching out to touch the consumer. Psychol. Mark. 28, 267–308 (2011) 31. Spence, C., Driver, J.: Crossmodal space and crossmodal attention. Oxford University Press (2004) 32. Bordegoni, M., Carulli, M., Ferrise, F.: Improving multisensory user experience through olfactory stimuli, Emotional Engineering Vol. 7: The Age of Communication, 201–231, (2019) 33. Bordegoni, M., Carulli M.: Evaluating industrial products in an innovative visual-olfactory environment. J. Comput. Inf. Sci. Eng. 16(3), 9 (2016) 34. Bordegoni, M., Carulli, M., Shi, Y., Ruscio, D.: Investigating the effects of odour integration in reading and learning experiences. Interact. Des. Architect. 32, 104–125 (2017) 35. Krusemark, E.A., Novak, L.R., Gitelman, D.R., Li, W.: When the sense of smell meets emotion: anxiety-state-dependent olfactory processing and neural circuitry adaptation. J. Neurosci. 33(39), 15324–15332 (2013) 36. Nakamoto, T.: Human olfactory displays and interfaces: odor sensing and presentation. Information science reference (2013) 37. Hirota, K., Ito, Y., Amemiya, T., Ikei, Y.: Presentation of odor in multi-sensory theater. VAMR/HCII 2013 (2013) 38. Matsukura, H., Yoneda, T., Ishida, H.: Smelling screen: technique to present a virtual odor source at an arbitrary position on a screen. IEEE Virt. Real. 127–128 (2012) 39. Narumi, T., Nishizaka, S., Kajinami, T., Tanikawa, T., Hirose, M.: Meta Cookie+: An IllusionBased Gustatory Display. In: Shumaker, R. (ed.) VMR 2011. LNCS, vol. 6773, pp. 260–269. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22021-0_29 40. Ranasinghe, N., et al.: A demonstration of season traveller: multisensory narration for enhancing the virtual reality experience. In: CHI Conference on Human Factors in Computing Systems (2018). https://doi.org/10.1145/3170427.3186513 41. Maggioni, E., Cobden, R., Obrist, M.: OWidgets: a toolkit to enable smell-based experience design. Int. J. Hum Comput Stud. 130, 248–260 (2019)

Exploring People's Visual Perception and Its Impact on Evaluation of a Tiny House Prototype Using Eye Tracking Technology

Aurora Berni1(B), Chiara Nezzi1, Laura Ruiz-Pastor1, Stefania Altavilla1, Ingrid Kofler1,2, and Yuri Borgianni1

1 Free University of Bozen-Bolzano, Piazza Università, 1, 39100 Bolzano, Italy
[email protected]
2 Eurac Research, Viale Druso, 1, 39100 Bolzano, Italy

Abstract. The building design sector can benefit from the new opportunities offered by studies investigating people’s perception of urban and architectural spaces. It emerges from the literature that certain elements of environment settings and buildings such as edges, landmarks and materials can affect perception. However, their impact on people’s visual perception is still unclear, also because of the difficulty to report consciously what has been experienced. Technologies and tools such as mobile eye tracking (ET) give a chance to get insights into visual behaviors in real environments. In this work, the authors had the chance to conduct an experiment, where ET was used in a physical space of a tiny house prototype, which was the result of a research-industry cooperation project about real-world laboratories. An experimental activity involved 26 volunteers, who were asked to visit and freely observe the interior of the tiny house wearing ET glasses and fill in an evaluation questionnaire at the end of the visit. The first-view experience recordings of each visit of the tiny house were thereafter processed to acquire data to be put in relation with questionnaires’ outcomes. Preliminary statistical analysis showed potential relationships between areas of interest (AOIs), namely distinguishable elements of the tiny house, data, and evaluations. The time spent on some AOIs positively or negatively affected the evaluation reported by the questionnaire. It is also worth noting that, surprisingly, some AOIs deemed to be secondary affected the questionnaire ratings more significantly than the core qualities of the tiny house. Keywords: User perception · Eye tracking · Sustainability · Design · Buildings

1 Introduction and Background

The importance of human perception of things, environments and products is gaining traction across several scientific disciplines. The growing attention to human reactions and sensibility targets the preliminary assessment of consumers' needs and thoughts
[1]. In this context, the design field has been strongly influenced by human factors. In recent years, design research has been focused on integrating human-related activities into the design processes, from the early phase of development to the latest phase of product evaluation. The goal of this research area is a deep understanding of how and to what extent human perception can positively or negatively affect the design, choice, and evaluation of a product or, more generally, of a case study object [2]. Capturing human feelings and extracting scientific data was made possible thanks to biometric instruments, among the others. Nowadays, they constitute one of the most powerful tools at disposal of the design community to monitor and register human-related factors [3]. The design literature shows an extensive use of the eye-tracking (ET) technology, which is becoming popular in capturing human visual behaviors. The employment of ET glasses in the design field allows researchers to detect and analyze the whole perceptive process. Consequently, an understanding of the attention and satisfaction given by an object, or an environment can be obtained [4]. This section focuses on the analysis of ET technology-related case studies, since ET glasses were the biometric instrument used by authors for the experimental activities presented in this paper. The ET use in the design field is of great interest in the literature. Park and colleagues [5] dealt with the possibility to enhance the interconnection between the designer and the consumer by means of the ET glasses, pointing out how the analysis of eye fixations could be beneficial in raising design appreciation. Khalighy and colleagues [6] conducted an experiment with ET glasses worn by 300 participants, whose biometric data were interpreted to propose a formula for the quantification of a product’s visual aesthetics. Another example was brought by Mussgnug and colleagues [7], whose goal was to raise design students’ awareness about user experience for the development of user-oriented products. The satisfactory results proved the efficacy of this technology in revealing hidden aspects of human perception and implicit needs of consumers. Triberti and colleagues [8] studied how to foster beauty through design after the acquisition of testers’ gaze plots and fixations enabled by ET. The chance to acquire information useful to design using ET, and markedly by means of ET glasses, is illustrated by many other scholars, for instance [9, 10] and [11]. The interest of the scientific community in this topic hence mirrors the increasing relevance human perception in design field is experiencing in recent times. ET technology has been combined with various techniques to capture human behavior for design-related purposes. Indeed, ET glasses are often utilized with the support of other biometric measurement instruments, e.g., GSR [3] and EEG [12, 13]. In addition, some literature examples report a shared practice consisting in matching data collected during user experiences and subjective data, usually collected by means of written, oral, or computer-based questions. However only few case studies make use of this dual practice; most articles reporting the use of ET glasses focus on its separate utilization. Boa and colleagues [14] recorded testers’ fixations and eye gazes and subsequently asked for a preference rating based on a Likert scale. The same approach was digitalized by the same scholars into a rating-on-screen procedure [4]. 
After an ET-based experiment, Kukkonen and colleagues [15] requested participants to fill in a questionnaire with rating scale-based questions about product perception. Data collected through ET glasses were compared with those collected through the questionnaires to study a possible correlation. Cases of failed correlations have been reported as a direct
consequence of differences in testers' evaluation interests. The same method was recently adopted by Kovačević and colleagues [16], where participants' gaze data were recorded with ET glasses and they were thereafter asked to rate the attractiveness of a packaging on a Likert scale. ET and hand-written data were statistically correlated to fully describe a potential correlation between attractiveness and fixation. Similarly, oral ratings could also be collected to check their alignment with the heat maps of the ET software [17]. In that study, subjective ratings matched eye movement parameters as expected. Eventually, an extensive comparison between 12 semantic rating scales and 64 ET metrics has been studied and documented by Rojas and colleagues [18]. What emerges from the literature is that some authors attempt to correlate experimental ET data and subjective data, as opposed to a larger group of scholars who rely on ET data only for the scope of their research [3]. On the other hand, both design [19] and psychology literature [20] highlight that a complete picture of human experiences should be provided by the juxtaposition of objective and subjective data. Regarding fields of application, examples of the use of ET technology can be mostly found in design research, marketing, user research, and industry and human performance research [21]. ET glasses represent a suitable instrument also for other applications, e.g., the building sector, which is the field of research investigated in this paper. The evaluation of architectural environments, both indoor and outdoor, by means of ET glasses is not frequently found in the literature. Few examples describe the use of ET glasses for the collection of emotional factors while evaluating a closed building space. Cho and Suh [22] studied the power of color combinations to influence human emotions and behaviors in an environment, evaluating results with a combined method of ET use and a survey. Another case of the employment of ET technology for the observation and evaluation of spatial elements in a virtual reality setting was proposed by Kim and Lee [23]. An exception to the use of ET glasses on representations, such as virtual reality or pictures, instead of real-world environments, is the experiment conducted by De la Fuente Suárez [24] in a historic train station in Mexico. He identified which architectural elements captured attention through the length of observation and the point of view of the observation. Nevertheless, this paper constitutes a rare example: ET applications in the building sector are not widespread, even though closed and magnified spaces should be considered especially suitable for the use of ET glasses. To sum up, two important aspects must be highlighted here. First, it is not common practice to relate objective data with subjective ones collected with different techniques, e.g. written, oral or on-screen questionnaires. Second, a consolidated use of ET glasses in the building sector is so far absent. The present work purposefully deals with both these aspects, which are thought to require further investigation, as apparent in the remainder of the paper. The focus will be on correlating objective/unconscious and subjective/conscious data that were collected during experiments and events that took place in the Venosta Valley (Italy) from July to October 2021. ET glasses were utilized for the observation of the indoor environment of a tiny house prototype.
Consequently, this paper will contribute to enriching the literature with cases of the application of ET glasses in the building sector. To do so, the aim of this research is to analyze in detail the eye gazes of testers and categorize them into a set of Areas of Interest (AOIs), and to subsequently correlate these results with the product evaluation obtained from the questionnaire. The purpose of this work is to provide the scientific community with an example of the utility of the use of ET glasses in the


building sector and the importance of having a direct comparison between sets of data of different nature (objective/unconscious and subjective/conscious). The present paper is structured as follows. After an overview of the methodology (Sect. 2), data collection and processing are described in Sect. 3. Data analysis and results are discussed in Subsect. 3.3. Section 4 closes this work with the conclusions and considerations on future research.

2 Methodology The experiment was carried out within the project “Tiny FOP MOB - A Real World Laboratory made of wood and hemp travelling through the Venosta Valley” (acronym Tiny FOP MOB). The authors had the chance to run an experiment with a real-scale physical prototype of a tiny house made out of sustainable materials. The experiment aimed at comparing observation and perception of the interior of a building prototype through the acquisition of objective and subjective data, respectively. The authors collected data from 26 participants with ET glasses (Tobii Pro Glasses 2). After the observation session, participants were asked to fill in a questionnaire to get information about their perception of the prototype. Further details are to be found in the following subsections. 2.1 Materials The prototype available for the experiment was a tiny house made of wood and hemp, named Tiny Fop MOB. The external size of the prototype corresponded to that of a standard container (around 13 m long and 2.50 m wide). This size allowed the tiny house to be built directly on a truck trailer to ease the transportation of the Tiny Fop MOB to five different locations of the Venosta Valley, Italy. It was open to visitors in the period July – November 2021, in line with the project objective of spreading knowledge about the scope and characteristics of the prototype. Indeed, it can represent a sustainable alternative to other similar wooden tiny houses due to the combination of local wood and hemp bricks (a mixture of hemp, limestone powder and water). The materials of the Tiny Fop MOB play a role in the environmental impact of the prototype, which is CO2 negative according to first estimates. Figure 1 shows the interior of the house. In particular, the structure (walls, ceiling, and floor) as well as part of the wooden furniture and posters (used as information material required for the main project) are visible in Fig. 1a and 1b. Figure 1c displays a detail of the textured wall made of hemp observed by a participant.


Fig. 1. Interior of the Tiny Fop MOB (a, b) and the textured hemp wall observed through the Tobii Pro Glasses 2 (c)

2.2 Participants 28 volunteers participated in the experiments; however, the final sample was made of 26 valid experiments. One participant failed to answer the questionnaire, while another one was excluded due to a malfunction of the ET device during the observation session. All the participants were recruited on a voluntary basis among the passers-by of the Venosta Valley locations where the Tiny Fop MOB was placed. Only people who confirmed they were at least 18 years old were allowed to visit the tiny house with the ET glasses, to avoid legal issues. Adults with visual or mobility impairments were not invited to the study due to the impossibility of acquiring visual data and the difficulty of accessing the interior of the house, respectively. Besides that, no other restrictions were put in place in the participants’ recruitment. The final sample included Venosta Valley inhabitants, tourists, workers, and some University students taking part in a teaching excursion. The procedure (details follow) was approved by the ethical and privacy committees of the authors’ institution. 2.3 Procedure People who accepted to take part in the experiment were recruited and informed that no personal or sensitive data would be acquired. A code was assigned to each participant just to match the ET data with the questionnaire evaluations. After the recruitment, participants were informed they could visit the tiny house with no time restrictions. No specific task needed to be performed during the exploration; participants were free to observe the building for as long as they wanted, knowing that they would be required to fill in a questionnaire afterwards. This aspect was explicitly stressed by the experimenters to encourage a more careful exploration. Further attention was paid to providing project-related information to participants. Nonetheless, information was provided only if openly requested by the participant. In any case, informative posters and brochures were present


inside the tiny house. Participants were also informed about the regulations and measures in place regarding the pandemic-related restrictions. The experiment could be interrupted at any time with no further explanation. The participants were asked to wear ET glasses while visiting the interior of the tiny house. The researchers helped participants to wear the devices correctly to avoid discomfort. The recording of the device and the related data acquisition started after the calibration process. The recording was interrupted when participants exited the house. As participants left the house, the ET device was removed and the parts that were in contact with participants’ skin or body were sanitized. The participants were subsequently asked to state their agreement on 12 statements through a 5-point Likert-scale with a value ranging from 1 (totally disagree) to 5 (totally agree), as a part of a larger questionnaire. All these statements aimed to capture different nuances of people’s evaluation and appreciation of the tiny house (see Sect. 3.2 for details). After the submission of the paper-based questionnaire, participants were thanked and discharged.

3 Data Collection and Analysis The experiment is based on the collection and comparison of two typologies of data. • Quantitative and objective data acquired through the ET glasses • Qualitative evaluations (subjective data) acquired through the 5-point Likert-scale questionnaire 3.1 Collection and Processing of Objective Data This paragraph describes the steps to acquire and compute dwell times, which represent the time spent observing each element. Design studies frequently use such data as a proxy for attention [25]. Dwell times and other measures are typically calculated to characterize the attention paid to different Areas of Interest (AOIs), which indicate consistent, well-identified components in ET studies. In this research, the consistent and well-defined components are the different visible elements of the interior of the tiny house, see Table 1. They were initially defined intuitively by the authors, and then validated by watching the first available ET video recordings (see below). Markedly, areas of the prototype observed by the participants could always fall within the proposed AOIs. The selected AOIs can be further grouped into macro-classes as follows: • AOIs 1–3: structural elements of the building, which are supposed to be the main qualifiers of the tiny house, especially regarding sustainability in light of the materials used. • AOIs 4–7: elements typically found in buildings and houses. • AOIs 8–10: elements present because of different circumstances, especially project requirements. The output of the ET glasses for each participant was a first-view visual experience video, which shows instantaneous gaze points. The process of identification of instantaneously gazed elements is supported by the visualization of fixations (represented by


Table 1. Areas of Interest and visual triggers for the subdivision of eye-tracking recordings

1. Walls: Internal walls of the tiny house with a finishing made of hemp and limestone
2. Ceiling: Top wooden covering (the perimeter and the piece of the wall in the immediate vicinity are included)
3. Floor: Bottom wooden covering (the perimeter and the piece of the wall in the immediate vicinity are included)
4. Window and door frames: Perimeter of the windows and door wooden frames and their handles (excluding the central part of the glass)
5. Furniture: Table, chairs, and cabinet made of wood
6. Lighting: Hanging lamps and switches
7. Outdoor: Everything visible from the windows of the Tiny house (it was considered when the participant looked at the middle of the window)
8. Information materials: Informative posters and brochures about the project, its aim and materials
9. Projector screen + electronic devices: Projector screen and electronic devices present in the tiny house for the scope of the project, e.g., projector, temperature sensors
10. Disturbance elements: Project-related objects, which could not be removed because of contextual factors at the time of the experiment

gaze plots) as shown in Fig. 1c. The extraction of data related to dwell times was not automatically performed by the software. The 26 recordings have been processed manually; ten different manual triggers corresponding to the AOIs described above were introduced into the recordings to mark the beginning of the exploration of each AOI. After the introduction of the triggers, the data were processed with the software Tobii Pro Studio and exported to a spreadsheet file. The time spent on an AOI in a single instance for each participant was calculated as the time from the introduction of its corresponding trigger to the introduction of a new one. As participants could gaze at the same AOI in multiple instances, total dwell times were then calculated by summing the duration of each instance.
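As an illustration of this accumulation step, the following is a minimal sketch, not the authors' Tobii Pro Studio pipeline, assuming the manually introduced triggers are available as an ordered list of (timestamp, AOI) events together with the recording end time; the event data in the example are hypothetical.

```python
# Minimal sketch: total dwell time per AOI from an ordered list of trigger events.
# Each trigger marks the moment a participant starts exploring a given AOI; a dwell
# instance ends when the next trigger (or the end of the recording) occurs.
from collections import defaultdict

def total_dwell_times(triggers, recording_end):
    """triggers: list of (timestamp_seconds, aoi_name), sorted by timestamp."""
    totals = defaultdict(float)
    next_starts = [t for t, _ in triggers[1:]] + [recording_end]
    for (t_start, aoi), t_next in zip(triggers, next_starts):
        totals[aoi] += t_next - t_start          # sum all instances of the same AOI
    return dict(totals)

# Hypothetical data for one participant:
events = [(0.0, "Walls"), (4.2, "Furniture"), (9.8, "Walls"), (15.0, "Outdoor")]
print(total_dwell_times(events, recording_end=20.0))
# {'Walls': 9.4, 'Furniture': 5.6, 'Outdoor': 5.0}
```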


disadvantages, 5) originality, 6) representativeness (of the territory), 7) generic accommodation, 8) permanent accommodation, 9) (small) conference’s location, 10) office, 11) holiday accommodation, 12) (perception of) sustainability. All the statements were formulated so that a higher score assigned by participants corresponds to a more positive evaluation and perception of the tiny house. 3.3 Data Analysis and Results Ordinal logistic regression analyses were performed to match the dwell times for each AOI with each evaluation variable. The authors used the software Stata to run the regressions and disclose possible significant relations. The positive (+) and negative (−) coefficients of each combination of evaluation variable and AOI dwell time were identified, as shown in Table 2. In Table 2, significant relations are highlighted by adding the p-values of coefficients associated with p ≤ 0.05. Table 2. Positive (+) and negative (−) coefficients and significant p-values from the ordinal logistic regression analysis.

Rows (evaluation variables): Good quality, Preference, Convenience, Absence of disadvantages, Originality, Representativeness, Generic accommodation, Permanent accommodation, (Small) conference’s location, Office, Holiday accommodation, Sustainability. Columns (AOIs): Walls, Ceiling, Floor, Window and door frames, Furniture, Lighting, Outdoor, Information materials, Projector screen + electronic devices, Disturbance elements. Each cell reports the sign of the corresponding coefficient and, where p ≤ 0.05, the associated p-value; the significant relations are discussed in Sect. 3.4.

3.4 Comments As can be inferred from the table, the perception of “Absence of disadvantages” increased when participants observed “Walls”, “Floor”, “Window and door frames” and “Projector screen + electronic devices”. It conversely decreased when participants observed “Furniture”, “Lighting” and “Information materials”. This evaluation variable turned out to be the one showing the largest number of significant relations with dwell times on AOIs. Other relations are briefly reported below. The “Originality” of the product decreased when “Lighting” and “Information materials” were observed more extensively. Significant relations were found for the variables concerning the suitability of the tiny house for different purposes. The perception of the product as suitable for “Generic accommodation” increased when observing “Projector screen + electronic devices”. It decreased when “Furniture”, “Lighting” and “Information materials” were observed. The evaluation of the product as suitable for “Permanent accommodation” decreased when observing “Furniture” and “Information materials”. The evaluation of the product as a “(Small) conference location” and as an “Office”, instead, was negatively correlated with the observation of “Lighting”. Moreover, the evaluation of the product as a “Holiday accommodation” increased when participants observed “Window and door frames”, while it decreased when “Furniture” was observed. Finally, the perception of “Sustainability”, one of the most relevant dimensions for the overall scopes of the project, increased when the participants observed “Window and door frames”, while it decreased when “Ceiling” and “Furniture” were observed for longer periods of time. Expected significant relations with the observation of walls (made of materials chosen to enhance sustainability) and information materials did not emerge. In general, the observation of AOIs corresponding to structural elements (Walls, Ceiling, Floor) had less impact on the evaluation than the observation of AOIs of detailed elements (“Window and door frames”, “Furniture”, “Lighting”, “Outdoor”). The specific elements of the project did not show many meaningful relations with perceived quality. The AOI “Information materials”, including explicit information about the project and the tiny house, had a significant impact on variables such as “Absence of disadvantages”, “Originality” and “Generic accommodation”. As for the first of these three significant relations, which is unexpected, participants presumably captured some negative aspects of the tiny house in its description and illustration reported in the informative materials.

4 Discussion, Conclusions, and Outlook The present work aims at enriching the literature with a case study of ET technology application in the building sector, thanks to the availability of a full-scale prototype of a mobile tiny house made of hemp and wood. The presented application includes the collection of both objective/unconscious behaviors and subjective/conscious evaluations in the study of new products’ perception, which represents an additional original element of the paper, beyond being claimed as good practice in the literature. Markedly, the authors correlated data on participants’ visual behavior during a visit to the interior of a tiny house with their evaluation of it. The ET glasses allowed the collection of objective data during the visit, while subjective data were acquired through a 5-point Likert questionnaire that participants filled in afterwards. The recordings have been processed to recognize


and identify the different gazed elements of the tiny house during the visits, which were categorized into ten Areas of Interest (AOIs). The total dwell times were then considered in the analysis. The processing of the recordings allowed the authors to determine the total duration of observations towards the ten elements for each participant. These dwell times were correlated to participants’ perception of the tiny house, articulated into twelve variables featuring nuances of the product’s perceived quality, creativity, and appropriateness. Significant relations were found. The structural elements (“Walls”, “Ceiling”, and “Floor”) were expected to have a higher impact on the evaluation of the prototype since they featured a peculiar structure (e.g., greater thickness of the structure than in standard wooden houses) due to the materials involved (hemp bricks and wood). Instead, the analysis showed that their impact on participants’ perception and subsequent evaluation was negligible. Surprisingly, details to which poor attention was paid in the design of the tiny house, and which were chosen in a conventional way, emerged as more impactful than core structural elements. In particular, “Lighting” and “Furniture” are those AOIs whose extended dwell times negatively affected the evaluation of the tiny house. This applies especially when it comes to evaluating certain qualities of the prototype, such as its pertinence to be used for working purposes or to host small conferences and meetings. From a general and methodological point of view, the number of significant correlations was relatively low, beyond the lack of already highlighted expected relations. This aspect highlights the following.
• Although sight is considered a fundamental sense, other senses (not monitored here) could have played a role in people’s evaluations of the tiny house, beyond the elements that have been observed for longer times. Moreover, social and individual factors, which are poorly considered in experience design [26], could have had an impact here. Product perception is plainly complex, and the gaze towards parts of a building could have poor explanatory power in terms of perceived quality and preferences. This partially contrasts with literature examples where design domains and employed ET tools differ, e.g. [27]. These open issues are to be addressed in the authors’ future work.
• While some ET studies in design collect ET and conscious data contextually, as stated in Sect. 1, two different approaches are evident in the literature [3], which plainly depend on studies’ objectives. On the one hand, cause-effect relations or, more broadly, correlations are looked for, as in the present case. On the other hand, visual behaviors and evaluations are considered as separate and complementary parts of a larger picture. The limited number of significant relations found in the present study suggests that the use of ET is more promising in the latter approach.
As aforementioned, this work refers to a specific case study within the building sector. The results show that even elements considered of secondary importance have the chance to affect perception significantly. Such a phenomenon can be explored in different product domains, which represents an additional area of research for future studies. Designers could also run similar tests during the prototyping phases of the design process to evaluate a virtual prototype and assess the possible differences due to diverse forms of representation (virtual vs.
physical), see [28] for the relevance of this task. Consistently, a laboratory experiment is planned where participants will explore the same tiny house


through an ET-supported Virtual Reality headset and will be asked to rate the product through the same assessment procedure. Acknowledgments. The work is fully supported by the project “Tiny FOP MOB - A Real World Laboratory made of wood and hemp travelling through the Vintschgau Valley”, funded by the European Regional Development Fund (ERDF) Investment for Growth and Jobs Programme 2014–2020, 5th call in the Axis 1 “Research and Innovation” of the Autonomous Province of Bolzano-South Tyrol, grant FESR1161.

References 1. Boa, D.R., Hicks, B.: Discriminating engineering information interaction using eye tracking and an information operations model. In: DS 84, Proceedings of the DESIGN 2016 14th International Design Conference, pp. 1–10. The Design Society (2016) 2. Hurley, R., Galvarino, J., Thackson, E., Ouzts, A., Pham, A.: The effect of modifying structure to display product vs. graphical representation on packaging. In: Eighteenth IAPRI World Packaging Conference, p. 123. DEStech Publications, Inc. (2012) 3. Borgianni, Y., Maccioni, L.: Review of the use of neurophysiological and biometric measures in experimental design research. Artif. Intell. Eng. Des. Anal. Manuf. 34, 248–285 (2020) 4. Boa, D.R., Ranscombe, C., Hicks, B.: Determining the similarity of products using pairwise comparisons and eye tracking. In: DS 80-5 Proceedings of the 20th International Conference on Engineering Design (ICED 15), vol. 5, pp. 225–234. The Design Society (2015) 5. Park, J., DeLong, M., Woods, E.: Exploring product communication between the designer and the user through eye-tracking technology. Int. J. Fash. Des. Technol. Educ. 5(1), 67–78 (2012) 6. Khalighy, S., Green, G., Scheepers, C., Whittet, C.: Quantifying the qualities of aesthetics in product design using eye-tracking technology. Int. J. Ind. Ergon. 49, 31–43 (2015) 7. Mussgnug, M., Lohmeyer, Q., Meboldt, M.: Raising designers’ awareness of user experience by mobile eye tracking records. In: DS 78: Proceedings of the 16th International conference on Engineering and Product Design Education (E&PDE14), Design Education and Human Technology Relations, pp. 099–104 (2014) 8. Triberti, S., Chirico, A., La Rocca, G., Riva, G.: Developing emotional design: emotions as cognitive processes and their role in the design of interactive technologies. Front. Psychol. 8, 1773 (2017) 9. Mussgnug, M.: Quantitative usability testing in user-centered product development with mobile eye tracking (Doctoral dissertation). ETH Zurich (2017) 10. Liao, P., Liu, M., Xu, Q., Wang, Y., Wang, X.: Application prospect of eye-tracking experiments in studies of construction safety hazard identification. China Saf. Sci. J. 26(11), 31–36 (2016) 11. Borgianni, Y., Maccioni, L., Basso, D.: Exploratory study on the perception of additively manufactured end-use products with specific questionnaires and eye-tracking. Int. J. Interact. Des. Manuf. (IJIDeM) 13(2), 743–759 (2019). https://doi.org/10.1007/s12008-019-00563-w 12. Petkar, H., Dande, S., Yadav, R., Zeng, Y., Nguyen, T.A.: A pilot study to assess designer’s mental stress using eye gaze system and electroencephalogram. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 48999, pp. 899–909. ASME (2009)


13. Rojas, J.C., Contero, M., Camba, J.D., Castellanos, M.C., García-González, E., Gil-Macián, S.: Design perception: combining semantic priming with eye tracking and event-related potential (ERP) techniques to identify salient product visual attributes. In: ASME International Mechanical Engineering Congress and Exposition, vol. 57540, p. V011T14A035. ASME (2015) 14. Boa, D., Hicks, B., Nassehi, A.: A comparison of product preference and visual behaviour for product representations. In: DS 75-7: Proceedings of the 19th International Conference on Engineering Design, Design for Harmonies, vol. 7, pp. 487–496. ICED (2013) 15. Kukkonen, S.: Exploring eye tracking in design evaluation. Join. Forces, 119–126 (2005) 16. Kovaˇcevi´c, D., Brozovi´c, M., Možina, K.: Do prominent warnings make packaging less attractive? Saf. Sci. 110, 336–343 (2018) 17. Li, W., Wang, L., Wang, L., Jing, J.: A model based on eye movement data and artificial neutral network for product styling evaluation. In: 2018 24th International Conference on Automation and Computing (ICAC), pp. 1–6. IEEE (2018) 18. Rojas, J.C., Contero, M., Bartomeu, N., Guixeres, J.: Using combined bipolar semantic scales and eye-tracking metrics to compare consumer perception of real and virtual bottles. Packag. Technol. Sci. 28(12), 1047–1056 (2015) 19. Berni, A., Borgianni, Y.: Making order in user experience research to support its application in design and beyond. Appl. Sci. 11(15), 6981 (2021) 20. Contreras-Somoza, L.M., et al.: Usability and user experience of cognitive intervention technologies for elderly people with MCI or dementia: a systematic review. Front. Psychol. 12, 636116 (2021) 21. Martinez-Marquez, D., Pingali, S., Panuwatwanich, K., Stewart, R.A., Mohamed, S.: Application of eye tracking technology in aviation, maritime, and construction industries: a systematic review. Sensors 21(13), 4289 (2021) 22. Cho, J.Y., Suh, J.: spatial color efficacy in perceived luxury and preference to stay: an eyetracking study of retail interior environment. Front. Psychol. 11, 296 (2020) 23. Kim, N., Lee, H.: Evaluating visual perception by tracking eye movement in architectural space during virtual reality experiences. In: Ahram, T., Taiar, R., Gremeaux-Bader, V., Aminian, K. (eds.) IHIET 2020. AISC, vol. 1152, pp. 302–308. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-44267-5_45 24. De la Fuente Suárez, L.A.: Subjective experience and visual attention to a historic building: a real-world eye-tracking study. Front. Archit. Res. 9(4), 774–804 (2020) 25. Lohmeyer, Q., Matthiesen, S., Meboldt, M.: Task-dependent visual behaviour of engineering designers-an eye tracking experiment. In DS 77: Proceedings of the DESIGN 2014 13th International Design Conference, pp. 549–558 (2014) 26. Carbon, C.C.: Psychology of design. Des. Sci. 5 (2019) 27. Du, P., MacDonald, E.F.: Eye-tracking data predict importance of product features and saliency of size change. J. Mech. Des. 136(8), 081005 (2014) 28. Berni, A., Borgianni, Y.: Applications of virtual reality in engineering and product design: why, what, how, when and where. Electronics 9(7), 1064 (2020)

Heat on Sight - Display Thermal Radiation in the Peripheral Field of View Aline Püschel(B) , Wolfgang Kilian, and Stephan Odenwald Department of Sports Equipment and Technology, Chemnitz University of Technology, Reichenhainer Street 70, 09126 Chemnitz, Germany [email protected]

Abstract. Firefighters risk their lives saving people in danger. To protect them during their risky duties, it is necessary to find new ways, means and tools to increase their personal safety. One promising approach is the use of a display integrated into a firefighter’s helmet that uses peripheral vision to provide important information. Accordingly, a demonstrator of a peripheral display in a firefighter helmet is presented in this paper. In this particular application, measurement data from an infrared sensor array is superimposed on the peripheral field of vision of a firefighter as a color-coded signal. The display consists of RGB LEDs that are controlled via a computer interface. In order to address the needs of firefighters as well, a User-Centred Design study was conducted. The authors could show that, by applying a Kano analysis, it is possible to extract the user-centred importance of different aspects. The most crucial features can be used for the further development of the functional model into the first prototype. Keywords: User-centred design · Kansei engineering · Peripheral feedback · Peripheral display · Heat detection

1 Introduction 1.1 Topic of the Paper The spatial temperature distribution in the environment of a firefighter is essential information during fire extinguishing operations. To obtain this information, infrared thermography cameras are typically used. These cameras provide a comparatively high-resolution display and are available as portable, battery-powered devices. However, they have the disadvantage that they usually have to be held in one hand and are not always available, especially in fire brigades in rural areas, due to the high costs. Furthermore, the system should not distract the firefighter from the actual task, e.g. by regularly changing the focus of gaze between the camera screen and the environment. To achieve this goal, the temperature values are displayed in the peripheral field of view. The display (LED array) is spatially oriented towards the field of view and the information is displayed in real time. A color-coded representation is used to indicate the different temperatures in the environment. The main advantage of a peripheral-vision display is the possibility to combine a primary visual task with a secondary one without


switching attention between the tasks. The firefighter can thus perform his primary duty as designated and additionally acquire and process information about the periphery [1]. The previously described concept compensates for the disadvantages of hand-held devices and still provides the benefits of thermography, for instance to indicate fire behind closed doors. 1.2 Specific Objective of the Paper In order to define functional requirements and tailor such a system to the specific needs of firefighters, a User-Centered Design (UCD) study was set up using the Kansei Engineering (KE) approach. An important part of the UCD was a Kano Analysis, whereby a survey was used to define the functional requirements of the display. This Kano Analysis is used to obtain feedback on user satisfaction regarding these requirements, and the analysis of the data leads to conclusions about the further development of the display.

2 State of the Art and Related Work 2.1 User Centred Design Approach For the study, the statistical-based KE approach was used, defined by Lanzotti [7] as “… a structured process of continuous innovation in the product concept development phase…”. Here, both the functional and emotional needs of the users are taken into account. Initially Kansei Engineering is a methodology of Emotional Design, which focuses on human emotions in human-object-interaction. Lanzotti’s statistical approach for Total Quality defines a structured process that starts with emotional and functional user requirements. This method could be subdivided into two phases. According to the Kano model, a method for measuring customer satisfaction, the first phase is focused on the identification of Must-be (M), One-dimensional (O) and Attractive (A) quality elements and additionally emotional (Kansei) elements. The usual methods for recording the M-O-A elements are market research, interviews and questionnaires. Must-be quality is stated and users expect these requirements. Above all, the O and A are decisive for user satisfaction and further design and development. One-dimensional features result in satisfaction when they are available and in dissatisfaction when not. With the implementation of an Attractive feature, satisfaction increases disproportionately, but absence does not lead to dissatisfaction. According to Kano’s model, Indifferent (I), Reverse (R) and Questionable (Q) attributes are also determined after a standardised evaluation. Indifferent means that the user has no preference regarding such a feature, reverse means rejection and the user expects these features not to be present. Questionable attributes are due to contradictory answers from the respondents. The Kano Analysis questionnaires consist of functional and dysfunctional questions for each feature. The two answers are entered into a table for each individual feature and in combination result in the corresponding element. From the result, further steps in the development are derived, which features a product should have. In KE, statistical methods are normally used to identify Kansei words or elements. In the first phase of the development of the peripheral display shown in Sect. 3, the


emotional needs of the users still need to be defined. For this purpose, a questionnaire was used to assess the needs of the potential users [7, 8]. The second phase of KE aims for the selection and validation of an ideal concept, which will be examined in the further process [7]. Thus, the presented results ostensibly concentrate on the first phase to identify functional and emotional users’ needs. 2.2 Feedback Display One example of presenting thermal imaging data in the central field of vision with a higher resolution is the thermal imaging camera integrated in firefighter’s helmet by Rosenbauer International AG (Fig. 1, A). The display blocks large areas of the field of view of the user. To avoid the problems stated in the introduction, the used solution differs from displays in the direct field of view.

Fig. 1. Firefighter’s Helmet by Rosenbauer International AG (A) [https://www.rosenbauer.com]; Mobile device for information representation in the peripheral field of view by Poppinga et al. [2] (B)

Poppinga et al. [2] also used a mobile device for the peripheral display of information, consisting of a pair of glasses with twelve LEDs that illuminate the user’s peripheral field of vision (Fig. 1, B). The light points were distributed equally spaced on the frame of the glasses to cover a large part of the user’s potential field of view and to investigate the ability to distinguish between the individual LEDs. For presenting information in the peripheral field of vision, Costanza et al. [3] developed a wearable peripheral display for subtle and discreet information transmission. The display, which is attached to the frame of a pair of eyeglasses, presents information in the peripheral field of vision of the user without interfering with his or her immediate environment. Nakao et al. [4] developed smart glasses with a peripheral visual display. The researchers implemented a novel display design with an 8 × 8 dot matrix with a 20° tilt. The display was mounted, similarly to [3], at the end of the spectacle arms. The smart firefighter’s helmet design concept Helon 360 developed by Held et al. [5] is equipped with a 360° thermal imaging camera. This device streams the data to the incident commander and allows the firefighters inside the hazard zone to be warned by radio of potential problems that they may not be able to see.


The following UCD study on a peripheral feedback display for firefighters builds upon the work of Held et al. [5] on their Helon 360 regarding the design of the helmet-integrated display, as well as upon the methodology presented by Sanseverino et al. [6] regarding a Kano analysis of cycling gloves.

3 Peripheral Thermal Radiation Display (PTRD) A basic functional model of a peripheral thermal radiation display (PTRD) that is integrated into a firefighter’s helmet was developed. The prototype of the helmet was created to demonstrate the basic functionality. It could be worn by subjects during the study, so that the participants got a first impression of the concept. The schematic hardware overview of the PTRD is illustrated in Fig. 2 (A) and consists of only a few electronic components. Thermal radiation from a hot target is captured by an infrared sensor array, processed by the microcontroller and finally presented by the RGB display. This type of sensor array has a much lower resolution compared to standard infrared cameras but, as the spatial resolution of the human eye in peripheral vision is also relatively small, this is not a disadvantage. The benefit of the array, however, is that it is small, inexpensive and easy to use. An exemplary thermal radiation pattern gathered from the IR sensor, with its schematic representation on the RGB LED display, is shown in Fig. 2 (B).
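To make the processing chain from sensor frame to color-coded LED signal concrete, the following is a minimal host-side sketch (in Python rather than the microcontroller firmware); the temperature thresholds, the 8 × 8 frame size and the blue-green-red coding are illustrative assumptions, not the project's actual calibration.

```python
import numpy as np

def temps_to_rgb(frame, t_min=20.0, t_max=120.0):
    """Map a 2-D array of temperatures (degrees C) to one (R, G, B) byte triple per LED."""
    norm = np.clip((frame - t_min) / (t_max - t_min), 0.0, 1.0)
    r = (255 * norm).astype(np.uint8)                              # hotter -> more red
    b = (255 * (1.0 - norm)).astype(np.uint8)                      # colder -> more blue
    g = (255 * (1.0 - np.abs(2.0 * norm - 1.0))).astype(np.uint8)  # mid-range -> green
    return np.stack([r, g, b], axis=-1)

frame = np.random.uniform(18, 140, size=(8, 8))   # fake low-resolution IR sensor frame
led_colors = temps_to_rgb(frame)                   # shape (8, 8, 3): one color per LED
```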

Fig. 2. System overview: Main components of the hardware for the functional demonstration device (A); Illustration of display and sensor (B)

4 Methodology The integrated PTRD described in Sect. 3 was introduced to the firefighters. The participants were naive to the peripheral display of data, as no such application is known in the fire service; however, they are experienced in this field. Following the introduction, the participants completed a survey which was divided into three parts. The first part serves to collect statistical data on the target group. General questions on age, gender, years of service in the fire brigade, experience with thermal imaging cameras and the number of missions accomplished were asked.


Based on the KE approach, part two focuses on the semantic description, where firefighters were asked to describe the thermal imaging camera system in a few words. Kansei elements are derived from these descriptions. On the basis of the functional model shown, the participants were furthermore asked to consider for which other data the display could be used (apart from the display of IR data). The third part explores the characteristics of the system under development using Kano Analysis. Overall, 24 questions were asked on 12 features, one functional question and one dysfunctional question each. The survey was conducted in German. For this paper, the results of the study, including figures and tables, were translated into English.
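As an illustration of how a pair of functional/dysfunctional answers maps to a Kano category, the following is a minimal sketch based on the standard Kano evaluation table (cf. [9]); the answer encoding, the per-feature aggregation by majority vote and the example data are assumptions for illustration, not the authors' exact evaluation procedure.

```python
from collections import Counter

# Answers are indexed 0..4: "like", "expect (must-be)", "neutral", "can live with", "dislike".
KANO_TABLE = [
    # dysfunctional:  like  must-be  neutral  live-with  dislike
    ["Q", "A", "A", "A", "O"],   # functional answer: like
    ["R", "I", "I", "I", "M"],   # functional answer: must-be
    ["R", "I", "I", "I", "M"],   # functional answer: neutral
    ["R", "I", "I", "I", "M"],   # functional answer: can live with
    ["R", "R", "R", "R", "Q"],   # functional answer: dislike
]

def classify_feature(answer_pairs):
    """answer_pairs: list of (functional, dysfunctional) answers, one pair per respondent."""
    votes = Counter(KANO_TABLE[f][d] for f, d in answer_pairs)
    return votes.most_common(1)[0][0], dict(votes)

category, votes = classify_feature([(0, 4), (1, 4), (0, 3), (2, 4), (0, 4)])
print(category, votes)   # the majority category and the full vote distribution
```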

5 Results A total of 44 firefighters from four different departments participated in the survey, of which 37 were male and 7 female. 80% of the respondents were between 20 and 50 years old, with the majority of participants being between 31–40 years old (29.5%). 39 of the respondents were volunteers only, while 5 have been both volunteers and professionals in the fire service. Almost 40% had over 20 years’ experience in the fire service, with around 50% having between 6–20 years of experience. The average number of operations per week was between 2 and 5 for almost 60%. For 40 participants, at least one thermal imaging camera is available on site. Among the firefighters interviewed, 88.6% have experience with thermal imaging cameras, and all respondents felt that it is important to use a system to display thermal imaging data during fire operations. Furthermore, as many as 75% find it questionable not to use a thermal imaging system. Based on the age structure, the experience in the field and the extensive experience with thermal imaging cameras of the participating firefighters, it is assumed that the results of the survey are trustworthy. In summary of the second part, Fig. 3 (A) shows an example of the assignment of the attributes named by the participants to the Kansei elements: Performance, Ergonomics, Economy. Thermal imaging cameras were described as a very useful, efficient and sensible tool that gives firefighters a feeling of safety during fire operations. These cameras are accurate and can be used for various temperature ranges in different environments and for several purposes. In contrast, however, they have been described as costly, heavy, bulky and cumbersome. They represent an additional burden that is also perceived as annoying. 43 respondents can also imagine displaying other data using a peripheral display. One of the most frequently mentioned points refers to the filling level of the breathing air cylinders or the residual pressure in the cylinders. The display of various critical gases in the ambient air, such as CO, and the display of vital parameters of the firefighter were also functions mentioned several times. Figure 3 (B) summarizes the possibilities that participants referred to.


Fig. 3. Example from the semantic description of Thermal Imaging Cameras with assignment to Kansei elements (A), summary of the described possibilities of peripheral data (B)

The aim of this survey in the last part of the UCD study is to find out which of the listed functionalities the firefighters find most useful, based on their experience. Based on this result, the functional model will be further developed into a prototype. The results for all characteristics are summarized in Table 1. Overall, the survey revealed two Reverse (R) features that are rejected by the users and seven One-dimensional (O) features or performance characteristics. The implementation of these features increases customer satisfaction proportionally with the degree of implementation, and they tend to be the features that users explicitly request [9]. Furthermore, the survey revealed two Attractive (A) features, which have the greatest impact on user satisfaction, and one Indifferent (I) feature. Table 1. Results of Kano analysis (A: Attractive, O: One-dimensional, M: Must-Be, I: Indifferent, R: Reverse, Q: Questionable). One feature that participants reject is the additional flashing of the display: 56.8% would find it annoying and 36.4% see no added value in this feature. Nevertheless, at


least in the case of an acute immediate danger, the flashing could help to attract the user’s attention and alert them to the danger. One of the attractive features that would increase satisfaction disproportionately is the implementation of individual setting options. With this feature, the user is given the possibility to adjust the colours, brightness, displayed temperature ranges and possibly the flashing frequencies. This feature allows the user to set these different options individually. 14 respondents find this attractive, 13 would even demand it, but also 7 respondents reject this feature. For 20 participants of the survey, an automatic brightness adjustment of the display to the ambient lighting should be implemented, 11 find this option attractive. For about 30% of the respondents, brightness adjustment should be manual (one-dimensional feature), 20.5% find this attractive, but just as many do not see sufficient added value and as many as 16% reject this feature. In summary, however, brightness adjustment must be implemented, most favourably as a compromise of both options.

6 Limitations of the Study For the first functional device to demonstrate the basic functionality, the safety considerations were relaxed as we do not yet know the final expectations and requirements. Because of limitations with the hardware, the display cannot be used in a real fire scenario. Using a functional demonstrator creates a bias among the participants. However, as this is a novel application, an introduction without a functional system is difficult to realize. Only a small number of volunteer firefighters attended the survey so far and thus, more participants, particularly professionals need to be interviewed.

7 Conclusion and Outlook In this paper, the Kano method is used to determine the important requirements of the user. From the results (Table 1) it can be concluded that features 1 (display in the field of view) and 4 (additional flashing) will not be implemented. The other features should be implemented, whereas the implementation of features 6 and 7 will make the display even more attractive to firefighters. The following features are prioritized for the refinement of the prototype. The colour scheme will be implemented as intended; an additional flashing of the display will be omitted. Possibly, it can attract the user’s attention as a very last indication of acute danger. Different display modes will be defined, e.g., the mode “Search for persons” or “Firefighting”, which can be switched between. These different modes display different temperature ranges and can thus be used for different firefighting tasks (searching for people, searching for sources of fire, etc.). Different brightness levels are implemented and the operability is realized by means of simple elements. For the hardware development, the next step will be to build the prototype in such a way that it can withstand at least firefighting training exercises. The survey will be continued with other fire brigades, especially professional fire departments, in order to incorporate any deviating needs into the further development.


Acknowledgements. Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 416228727 – SFB 1410.

Annex 1
Questionnaire: Peripheral display integrated into a firefighter’s helmet

Part 1:
1. Which gender do you identify with?
• male • female • other
2. How old are you?
• … • … • … • … • > 50
3. Which group of fire service personnel do you belong to?
• professional • volunteer
4. How long have you been serving in the fire service?
• … • … • … • … • > 20 years
5. How many operations do you have per week on average?
• ~1 • 2–5 • 6–10 • > 10
6. Do you have experience with thermal imaging cameras?
• yes • no
7. Do you consider it important to use a thermal imaging camera, or another display of thermal imaging data, during firefighting?
• yes • no
8. Do you consider it questionable not to use a corresponding system for displaying temperature information during firefighting?
• yes • no
9. How many thermal imaging cameras are available at your station?
• none • 1 • 2 • 3 • > 3
10. Can you list five words that describe thermal imaging cameras?
11. Can you imagine that a peripheral display could also be helpful in other applications (besides the one shown here for displaying infrared sensor data)?
• yes • no
12. In your opinion, which additional data could be displayed with such a display?

Part 2:


References 1. Bakker, S.S.: Design for peripheral interaction (2013). https://doi.org/10.6100/IR754544 2. Poppinga, B., Henze, N., Fortmann, J., Heuten, W.: AmbiGlasses – information in the periphery of the visual field, vol. 10 (2012) 3. Costanza, E., Inverso, S.A., Pavlov, E., Allen, R., Maes, P.: Eye-q: eyeglass peripheral display for subtle intimate notifications. In: Proceedings of the 8th Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI 2006, Helsinki, Finland, p. 211. ACM Press (2006). https://doi.org/10.1145/1152215.1152261 4. Nakao, T., Nakatani, M., Chan, L., Kunze, K.: Smart glasses with a peripheral vision display. In: Proceedings of the 2016 Virtual Reality International Conference, Laval, France, pp. 1–3. ACM (2016). https://doi.org/10.1145/2927929.2927938 5. Held, B., Aljuneidi, S., Pham, V.T.-V., Joseph, A.: Helon 360: a smart firefighters’ helmet integrated augmented reality and 360° thermal image data streaming (2019). https://doi.org/ 10.13140/RG.2.2.28158.41287 6. Sanseverino, G., Schwanitz, S., Krumm, D., Odenwald, S., Lanzotti, A.: Towards innovative road cycle gloves for low vibration transmission. Int. J. Interact. Des. Manuf. (IJIDeM) 15(1), 155–158 (2021). https://doi.org/10.1007/s12008-020-00748-8


7. Lanzotti, A., Tarantino, P.: Kansei engineering approach for total quality design and continuous innovation. TQM J. 20, 324–337 (2008). https://doi.org/10.1108/17542730810881311 8. Sauerwein, E.: Das Kano-Modell der Kundenzufriedenheit: Reliabilität und Validität einer Methode zur Klassifizierung von Produkteigenschaften (2014) 9. Berger, C., Blauth, R.E., Boger, D.: Kano’s methods for understanding customer-defined quality (1993)

HRV Analysis Through the Sphygmic Wave Pulsation of the Femoral Artery Luca Salvati1 , Nicola Cappetti1 , Matteo d’Amore2 , Anita Fiorentino3 , Arcangelo Pellegrino1 , Pasquale Sena2 , and Francesco Villecco1(B) 1 Department of Industrial Engineering, University of Salerno, Via Giovanni Paolo II 132,

84084 Fisciano, Italy [email protected] 2 Department of Pharmacy, University of Salerno, Via Giovanni Paolo II 132, 84084 Fisciano, Italy 3 Advanced Technology and Pre-Development Programs, Stellantis, Via Ex Aeroporto, 80038 Pomigliano d’Arco, NA, Italy

Abstract. The need, for road safety reasons, to constantly monitor the physiological conditions of a driver (stress, concentration, fatigue), as well as the alterations to the comfort state of the driving position on long-haul journeys, has led many car manufacturers to focus attention on the development of innovative technologies and methods that include the monitoring of biosignals that can be acquired from the driver himself. The possibility of detecting and processing such a signal in real time was the object of this research, which led to a technology that exploits the analysis of Heart Rate Variability (HRV), a method widely used in the clinical field for the analysis of the autonomic nervous system (ANS) with respect to daily biological rhythm, stress, comfort or vigilance. While the conventional cardiac signal acquisition system uses clinical instruments such as the electrocardiograph (ECG), in this study a technique is developed that allows the detection and processing of the pulsation signal at the level of the femoral artery through a sensor placed on the driver’s seat. The main advantage consists in having real-time monitoring while avoiding the application of sensors on the skin. Three fundamental quantities, namely the heart rate (HR), the interval between beats (RRI) and the typical HRV indices, are determined by means of a suitable signal analysis algorithm. The comparison with equivalent values obtained by a conventional ECG device shows a Pearson correlation between 0.35 and 0.94. This may be of great help in evaluating the sympathovagal balance. Keywords: HCD · RRI · HRV

1 Introduction The spread of the concept of human-centered design has made it possible to satisfy the user’s needs in terms of performance, usability and comfort through the study of the interaction between the user and the product to be made. The search for objective parameters that could encode the response to stimuli or user sensations has paved the way for the


development of increasingly less invasive and more affordable sensor systems: in this sense, the development of the product has been oriented towards the study of a design that takes into account both ergonomic [1, 2] and physiological [3] requirements. In particular, the investigation of a physiological response through electrocardiac sensors [4] represents an objective method to analyze a connection between posture, seat design and level of comfort, and to provide an interpretation of the perceptual states related to feelings of fatigue, malaise or discomfort. The study of the variation of the RRI intervals between two consecutive heartbeats is known as heart rate variability (HRV) and it is caused by the activity of the Autonomic Nervous System (ANS): currently this parameter is considered the most suitable choice for performing the non-invasive evaluation of ANS dynamics. Moreover, the HRV describes the influence of cardiovascular activity and the cardiovascular regulatory system [5]. The HRV parameters directly obtained by an electrocardiograph will be compared with data obtained by a non-invasive system consisting of a sensor, incorporated in a cushion on which the subject is seated, able to detect the pulsations produced by the sphygmic wave (SWP) also through clothing.

2 Methods of HRV Analysis A number of parameters are required to analyze the psychophysical conditions of a driver. Among them, attention has to be mainly devoted to Heart Rate Variability in both the time and frequency domains [5–8]. The former is usually split into measures directly derived from the RRI series (SDNN being their standard deviation) and measures evaluated from adjacent RRI values, i.e. RMSSD, NN50, and pNN50. As a matter of fact, low values of these indices can be related to poor parasympathetic activation or to significant physical or mental stress [9–11]. The Poincaré plot is the scatter plot of the intervals between two successive RR peaks. It is well known that the use of the Fast Fourier Transform allows the 0.15–0.4 Hz band to be assigned to vagal activity, whereas frequencies in the range 0.04–0.15 Hz and ≤ 0.04 Hz are mainly linked to sympathetic activity [12]. 2.1 Detection System The vibrational signal is recorded through a capacitive microphone sensor embedded in a specific support made of 3D polyester fabric and inserted into a cushion made of the same material (Fig. 1). This fabric approximates the elastic characteristics of muscles in a relaxed state, having almost the same load-displacement characteristics as human skin [13]. The sensor represented by the material above is centered at 20 Hz. By using stochastic resonance, artifacts can be excluded by a simple filtering step [14–16]. The sensor is placed slightly to the left, halfway between the spinal termination of the coccyx and the transit point closest to the femoral artery. The developed algorithm is able to identify the peaks of the wave generated by the sphygmic pulsation through the processing of the vibrational signal coming from the seated subject. Heart data are collected at 1000 Hz for both the ECG and the sensor.
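As an illustration of the time-domain indices introduced above (SDNN, RMSSD, NN50, pNN50), the following is a minimal sketch computing them from a series of RR intervals in milliseconds; the sample RRI values are hypothetical and the 50 ms threshold follows the usual convention, not a value stated by the authors.

```python
import numpy as np

def time_domain_hrv(rri_ms):
    """Compute common time-domain HRV indices from RR intervals given in milliseconds."""
    rri = np.asarray(rri_ms, dtype=float)
    diffs = np.diff(rri)                                    # successive RR differences
    return {
        "mean_RR_ms": rri.mean(),
        "mean_HR_bpm": 60000.0 / rri.mean(),
        "SDNN_ms": rri.std(ddof=1),                         # std of all RR intervals
        "RMSSD_ms": np.sqrt(np.mean(diffs ** 2)),           # root mean square of successive differences
        "NN50": int(np.sum(np.abs(diffs) > 50.0)),          # successive differences larger than 50 ms
        "pNN50_pct": 100.0 * np.mean(np.abs(diffs) > 50.0), # NN50 as a percentage of all differences
    }

rri = [870, 910, 845, 880, 940, 905, 860]                   # hypothetical RR series (ms)
print(time_domain_hrv(rri))
```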


Fig. 1. Signal processing and postural scheme

2.2 Signal Processing As motion artifacts may be responsible for noise, three bandpass filters were tested to identify possible similarities with the ECGs. The filters operated at 10–30 Hz, 15–80 Hz and 4–50 Hz, and the signals were also over-filtered in the range between 0.8 and 1.8 Hz to make them resemble 1 Hz ECGs. The time intervals obtained from these waveforms provided the required parameters for the Heart Rate Variability analysis (Fig. 2).
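To make one of the tested pre-processing chains concrete, the following is a minimal sketch of a 4–50 Hz band-pass on the 1000 Hz seat-sensor signal followed by simple peak picking to obtain beat-to-beat intervals; the filter order, the envelope-based threshold and the refractory distance are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 1000.0                                          # sampling rate (Hz), as stated in Sect. 2.1

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    b, a = butter(order, [low_hz / (fs / 2), high_hz / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)                    # zero-phase filtering

def beat_intervals_ms(signal, fs=FS):
    filtered = bandpass(signal, 4.0, 50.0, fs)
    envelope = np.abs(filtered)
    # impose a refractory period of ~0.4 s between detected beats (max ~150 bpm)
    peaks, _ = find_peaks(envelope, distance=int(0.4 * fs),
                          height=envelope.mean() + envelope.std())
    return np.diff(peaks) / fs * 1000.0              # RR-like intervals in milliseconds

# hypothetical usage: rri = beat_intervals_ms(np.loadtxt("seat_sensor.txt"))
```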

Fig. 2. Signal processing and postural scheme


2.3 Experiment Protocol Ten volunteers, in the range 25–54 years (mean age: 35.6 ± 9.4 years), agreed to participate in the tests. They were seated for 5 min on a static car seat, whose mat encased the sensor. Subjects were asked to take part in the experiment after a night of at least 7 h of sleep, having neither smoked nor drunk substances containing caffeine or alcohol in the previous hours. The ECG electrodes were placed in the right arm (RA), left arm (LA) and left leg (LL) positions.

3 Results As outlined above, the data acquired from the ECG and the SWP sensor were juxtaposed and compared by means of Pearson’s r. By averaging the results obtained with each B.P.F., the 4–50 Hz filter was selected as the best performing one (see Table 1). Table 1. Comparison of B.P.F. filters.

r values (Pearson)
Subject    10–30 Hz    15–80 Hz    4–50 Hz
1          0.62        0.58        0.65
2          0.21        0.04        0.45
3          0.33        0.82        0.54
4          0.48        0.21        0.46
5          0.31        0.51        0.38
6          0.53        0.90        0.55
7          0.69        0.93        0.94
8          0.35        0.62        0.35
9          0.28        0.50        0.42
10         0.20       -0.21        0.56
Mean       0.40        0.49        0.53

In order to estimate the accuracy of the measurements and, at the same time, the limits of such processing, it was decided to compare the acquisitions that best and worst reproduce the correspondence with the respective ECG measurements (see #7 and #8). Figure 3 shows the satisfactory agreement between the SWP signal (red line, r = 0.94) and the RRI derived from the ECG. The HR, RRI and SDNN parameters in the time domain are very close to the actual ones (Table 2, Fig. 4), and the ‘Mean’ curve reports the average of the RRI values over time windows of 16 s.


Fig. 3. # 7: Comparison between ECG (HR) and sensor (SWP) signals. Table 2. Heart monitoring data in the time domain (# 7)

                 ECG                   SWP
Mean RR | HR     877 ms | 68           876 ms | 69
SDNN             86.3 ms               83.9 ms
RMSSD            39.8 ms               52.3 ms
pNN50            17.4%                 28.1%
SD1 | SD2        28.1 ms | 118.8 ms    37.0 ms | 112.9 ms
SD1/SD2 ratio    0.24                  0.33

Fig. 4. mRRI, RMSSD, pNN50 variations in the time domain (Subject 7) from ECG (a) and SWP (b)


By examining the Poincaré diagram, relative to both the ECG signal and the SWP signal, it is observed that the detection did not show any signal loss (Fig. 5).

Fig. 5. # 7: (a) RRI Poincaré Plot; (b) SWP peaks Poincaré Plot
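The SD1 and SD2 descriptors reported in Tables 2 and 3 follow from the standard Poincaré-plot relations; the sketch below uses those textbook formulas and is not code from the study.

```python
import numpy as np

def poincare_sd1_sd2(rri_ms):
    """SD1 and SD2 of the Poincaré plot built from successive RR interval pairs."""
    rri = np.asarray(rri_ms, dtype=float)
    diff = np.diff(rri)
    sd1 = np.sqrt(np.var(diff, ddof=1) / 2.0)                              # dispersion across the identity line
    sd2 = np.sqrt(2.0 * np.var(rri, ddof=1) - np.var(diff, ddof=1) / 2.0)  # dispersion along the identity line
    return sd1, sd2, sd1 / sd2
```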

The frequency domain appears to be the proper choice for comparing the results. HF and the LF/HF ratio are reported in Fig. 6 for both sets of acquired data. As shown in the figure, there is good agreement between the HF signals, which becomes fairly good for the LF/HF values. Figure 7 also shows the efficacy of the proposed model.

Fig. 6. # 7’s HRV Frequency Analysis: comparison between ECG (top) and SWP (bottom) results

Fig. 7. # 7: Spectral analysis of the RR data from ECG and SWP.
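For the frequency-domain comparison, a common implementation (assumed here, since the paper does not detail how its spectra were computed) is to interpolate the RR series at a fixed rate, estimate the power spectral density with Welch's method, and integrate it over the LF (0.04–0.15 Hz) and HF (0.15–0.4 Hz) bands quoted in Sect. 2.

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def lf_hf(peak_times_s, rri_ms, fs_interp=4.0):
    """LF and HF band powers, and their ratio, from an RR interval series."""
    grid = np.arange(peak_times_s[0], peak_times_s[-1], 1.0 / fs_interp)
    rri_even = interp1d(peak_times_s, rri_ms, kind="cubic")(grid)   # evenly resampled tachogram
    f, psd = welch(rri_even - rri_even.mean(), fs=fs_interp, nperseg=min(256, len(grid)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f <= 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])
    hf = np.trapz(psd[hf_band], f[hf_band])
    return lf, hf, lf / hf
```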

It is fair to point out that the algorithm, in its present formulation, can easily evaluate the trend of the heart signal, while it appears not as precise as desired in its interpretation (Fig. 8).


Fig. 8. # 8: Superimposition of data from ECG (HR) and sensor (SWP)

Table 3 and Fig. 9 show how significant the impact of the information loss is on the evaluation of the HRV values.

Table 3. Heart rate data in the time domain (# 8)

Parameter        ECG                   SWP
Mean RR | HR     921 ms | 65           903 ms | 66
SDNN             93.0 ms               139.4 ms
RMSSD            42.7 ms               121.5 ms
pNN50            14.7%                 64.0%
SD1 | SD2        30.2 ms | 128.0 ms    85.9 ms | 165.4 ms
SD1/SD2 ratio    0.24                  0.52

Fig. 9. mRRI, RMSSD, pNN50 variations in the time domain (# 8)

Moreover, it is noteworthy that even a rough comparison of the Poincaré plots clearly shows that no data are lost in the algorithm's processing of the detected ECG and SWP signals, as shown in Fig. 10.

Fig. 10. # 8: Poincaré Plots for (a) RRI; (b) SWP peaks


The frequency analysis also shows a similarity in the trend of the LF/HF ratio throughout the survey, and of the HF parameter, albeit only in the second half (Figs. 11 and 12).

Fig. 11. Subject 8, HRV Frequency Analysis: ECG (top) and SWP (bottom) results

Fig. 12. Subject 8: spectral analysis of RR from ECG and SWP data

4 Conclusions The research proposes the development of a non-skin-contact heart rate detection system, validated against a clinical ECG device. Although the susceptibility of this system to disturbances still represents a limit, it certainly offers, in addition to the practicality of installation, the possibility of accessing rather accurate information on the physiological state of a subject. The opportunities that may arise from its use in the fields of ergonomics and driving safety appear very intriguing: for instance, continuous real-time monitoring of driver conditions and their relevant changes, of states of stress, fatigue and drowsiness, and even the validation of comfort indices in ergonomic and postural studies [17–19] by HRV analysis. The device is completely non-invasive. The accuracy of the results obtained is a crucial issue, and it may require increasing the number of tests. It is therefore necessary to make progress on the sensors and on the filtering of vibrations, which represent a significant obstacle in the processing of a biosignal in the chosen frequency band, easily overlapped by ectopic biosignals of other origin and location.


References 1. Fasulo, L., Naddeo, A., Cappetti, N.: A study of classroom seat (dis)comfort: relationships between body movements, center of pressure on the seat, and lower limbs’ sensations. Appl. Ergon. 74, 233–240 (2019) 2. Smulders, M., Naddeo, A., Cappetti, N., Van Grondelle, E.D., Schultheis, U., Vink, P.: Neck posture and muscle activity in a reclined business class aircraft seat watching IFE with and without head support. Appl. Ergon. 79, 25–37 (2019) 3. Cappetti, N., Naddeo, A., Soldovieri, V.M., Vitillo, I., Fiorillo, I.: Perceived comfort and muscular activity: a virtual assessment of possible correlations. In: Rizzi, C., Andrisano, A.O., Leali, F., Gherardini, F., Pini, F., Vergnano, A. (eds.) ADM 2019. LNME, pp. 59–70. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-31154-4_6 4. Watanabe, K., Watanabe, T., Matsui, Y.: Health monitoring system at home using an air mattress. J. Jpn. Soci. Appli. Sci. 14(2), 4–8 (2000) 5. Task force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology: Heart rate variability. standards of measurement, physiological interpretation, and clinical use. Eur. Heart J. 17(3), 354–381 (1996) 6. Cerutti, S., Bianchi, A.M., Mainardi, L.T.: Spectral analysis of the heart rate varibility signal. In: Malik, M., Camm, A.J. (eds.) Heart Rate Variability, pp. 63–74. Futura Publ. Comp. Inc., Armonk (1995) 7. Kleiger, R.E., Stein, P.K., Bosner, M.S., Rottman, J.N.: Time domain measurements of heart rate variability. In: Malik, M., Camm, A.J. (eds.) Heart Rate Variability, pp. 33–45. Futura Publ. Comp. Inc., Armonk (1995) 8. Litvack, D.A., Oberlander, T.F., Carney, L.H., Saul, J.P.: Time and frequency domain methods for heart rate variability analysis: a methodological comparison. Psychophysiology 32, 492– 504 (1995) 9. von Borell, E., Langbein, J., Després, G., Hansen, S., Leterrier, C.: Heart rate variability as a measure of autonomic regulation of cardiac activity for assessing stress and welfare in farm animals – a review. Physiol. Behav. 92(3), 293–316 (2007) 10. Salvati, L., d’Amore, M., Fiorentino, A., Pellegrino, A., Sena, P., Villecco, F.: Development and testing of a methodology for the assessment of acceptability of LKA systems. Machines 8, 47 (2020) 11. Shaffer, F., Ginsberg, J.P.: An overview of heart rate variability metrics and norms. Front. Public Health 5, 258 (2017). https://doi.org/10.3389/fpubh.2017.00258 12. Malliani, A., Lombardi, F., Pagani, M., Cerutti, S.: Power spectral analysis of cardiovascular variability in patients at risk for sudden cardiac death. J. Cardiovasc. Electrophysiol. 5, 274– 286 (1994) 13. Kojima, S., et al.: Development of a simple system to sense vital signs from the back. Japan Soc. Des. Eng. SO(2), 78–88 (2015) 14. Hagiyama, N., Hirano, H., Mito, A.: Unconstrained vital sign monitoring system using an Aortic Pulse wave sensor. Sci. Rep. 9, 17475 (2019) 15. Salvati, L., d’Amore, M., Fiorentino, A., Pellegrino, A., Sena, P., Villecco, F.: On-road detection of driver Fatigue and Drowsiness during medium-distance journeys. Entropy (Basel) 23(2), 135 (2021) 16. Salvati, L., et al.: Heart sound processing model for a mat-shaped device. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 809–817. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_82 17. Naddeo, A., Cappetti, N.: Comfort driven design of innovative products: A personalized mattress case study. Work (Reading, Mass.) 
68(s1), S139–S150 (2021)


18. Fasulo, L., Naddeo, A., Cappetti, N.: A study of classroom seat (dis)comfort: Relationships between body movements, center of pressure on the seat, and lower limbs’ sensations. Appl Ergon. 74, 233–240 (2019) 19. Naddeo, A., Cappetti, N., Vallone, M.: Range of rest posture of human lower limbs. In: Bagnara, S., Tartaglia, R., Albolino, S., Alexander, T., Fujita, Y. (eds.) IEA 2018. AISC, vol. 824, pp. 33–46. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-96071-5_4

A Mixed Reality Application for Collaborative and Interactive Design Review and Usability Studies Sandhya Santhosh(B)

and Francesca De Crescenzio

University of Bologna, Forli, Italy [email protected]

Abstract. Nowadays, product design and review cycles are subject to many constraints and requirements. One of the main requirements concerns the usability of solutions and the impact that design choices have on final users. Therefore, it is becoming crucial to anticipate human factors concerns in the preliminary phases of the product development process. Moreover, since human factors affect users subconsciously and significantly influence decision making, human factors experts need to be able to observe users while they interact with the product in real time. In this framework, the platform that the human factors expert and the users share plays a vital role in shaping the human factors assessments and evaluations. The latest developments in Extended Reality have opened the way to new simulation platforms that allow experts to explore products in depth and in real time, through collaborative and interactive environments, before a physical mock-up of the product exists. The purpose of this paper is to outline a Mixed Reality (MR) tool in the field of aircraft interior design and to demonstrate it as a potential co-creative platform for keeping human factors experts in the loop while the task is ongoing. An MR multi-user, co-located, collaborative and interactive environment of an aircraft galley is developed, in which the HF specialist and a flight crew member can co-exist to visualize the real-scale model of the galley and perform an operational task. Such a co-creative tool is expected to support usability tests during the design review phase and to reduce the time and costs of the product development cycle while meeting user requirements. Keywords: Mixed reality · Human factors and ergonomics · Co-creation · Design review

1 Introduction Engineering design advancement demands a problem-solving approach despite its constraints, especially with regard to human factors [1]. A successful product design usually depends on combining information from different areas and should meet functional and aesthetic requirements while being manufactured by the right process at a good price [2]. Design review is a challenging phase for all the actors in the process, from engineers and manufacturers to human factors experts and customers. Over the last few decades,


there is a lot of heed towards Human Factors and Ergonomics (HF/E) discipline which focuses on how people interact with tasks, equipment, technologies and environment from a unified context of science, engineering, design, technologies and management. Industrial design is indeed concerned with all the human aspects of machine-made products and their relationship to people and environment [3]. Thus, the aim of Human Factors is to understand and evaluate these interactions and optimize the human and system efficiency, effectiveness, safety, health, comfort and quality of life. Ultimately, the human factors methods are designed to improve overall human well-being and system performance [4] promoting human-centered approach. The adoption of such human-centered approach is a major challenge for companies since the product development process is an important and complex business process. In order to compete in technologically advanced markets, designers have to consider a lot of factors such as strict schedules, budgets, lifecycle concerns, inclusiveness and social impact [5]. Human factors input is ultimately seen as giving products a competitive advantage by helping to develop products in short time periods without compromising the quality level [6,7]. Usually, the HF/E aspect arrives after the physical mock-up of the product however, with communication difficulties. Design review is an important step for any product development to move from one stage to the next [8]. It is incorporated to evaluate the factors designers have to consider. Typically, reviewers are from diverse disciplines acting as key users, customers, engineers and experts who need to collaborate in order to optimize the process. Thus, the environment in which they collaborate may also have an impact on people’s ability to interact and provide feedback. Users of a product often alter along aspects such as vision, hearing, strength, reach etc. These features are to be considered highly when reviewing a product or design. When HF/E are considered during the design process, it is to improve the usability and reliability of the product. Design Reviews have nowadays evolved to be designed and carried out in simulated environments rather than real settings with prototypes in order to enhance and anticipate product development in terms of time and cost. Such simulated environments can be computer generated real or imaginary system that can enable a user experience it and perform operations by showing effects in real time [9]. Rapid advances in Computer Science and new devices have paved a way towards Extended Reality (XR) to be an enabling technology to build realistic simulated environments for users to collaborate and interact. XR serves as an umbrella term for Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR) and act as a platform to co-create for generating ideas and directly building solutions [10]. These technologies have proven to help users to collaborate, interact, visualize and co-create on a single platform to perform tasks such as design reviews, pre-design analysis, task analysis, usability testing and product evaluation. Thus, here a possibility can be recognized for integrating a HF expert and an end user on a single platform to study the human views during the design review prior to manufacturing of a product. 
In this context, we describe the design and development of a multi-user co-located, collaborative and interactive Mixed Reality (MR) software tool in the aircraft cabin interior design review domain. This tool allows the potential final user acting as a crew member to be co-located with Human Factor expert performing tasks of visualizing,


interacting with galley and observer respectively. In this type of collaborative environment, the expert is expected to evaluate metrics such as task analysis, visual comfort and usability. This research is part of a greater project aiming at supporting the hypothesis that Mixed Reality improves the efficiency and efficacy of user centered design through the creation of co creation platforms (H1). In particular, in this paper the authors describe a simulation platform developed to test the specific hypothesis that “Collaboration and co-location in MR can favor the evaluation of Human Factors in interactive environment in real time” (H2). It has been observed that the product evaluation sessions in VR are usually performed by experimental sessions in which potential users are immersed in Virtual Environments representing the future system of product to be manufactured and are asked to interact with it. The human factors expert intervenes after the simulated interaction session to collect the information about the usability and to highlight the issues with the designed components or subsystems. The main idea presented in this paper is to develop a platform to co-locate an observer of the scene directly in the virtual scenario providing him with the same perceptual capabilities of the user herself or himself in real time. In this view a platform has been conceived and a scenario of a specific product in an industrial field characterized by a long development cycle have been proposed to answer the following questions: RQ1 Which system architecture and which scenario can be used to implement the collaborative and co-located features? RQ2 Which parameters could be used to evaluate the effectiveness of the platform against the hypothesis? As part of the preliminary study of the main research project, in this paper the RQ1 is answered while the RQ2 is under development. Therefore, in the following paragraphs the technical set up and interaction design will be described.

2 The Context of Study Product development cycle of a user driven products design is referred as a process that begins with analysis of potential user groups and collection of user’s requirements [11]. Different methods are then used to review the pre-design solutions and to estimate if these actually meet the intended user’s needs. On one hand, design validations aim to ensure that products conform to defined user needs and intended uses. On the other hand, human factors validation is considered as a part of design validation specifically evaluating the product’s user-interface to confirm that the product can be used effectively by intended users. Design reviewing is an iterative process and includes time-taking and expensive prototypes to validate by several stakeholders, including the human factors. Traditionally, physical mock-ups are used to associate a HF expert in evaluating the tasks which is accumulated at the final stages of development process risking delays in unveiling design problems for designers and human factors, which is also not cost and time-effective. Hence, it is crucial to involve HF/E through the design review procedure to help and augment the process. Human factors practitioners follow user-centered design approach as their design philosophy in evaluating the interactions between the users and the tasks and environments. [12–14]. An example of integrating human factor in to design process in the field


of aeronautics concerning the design process of head mounted displays has been made by Dominique et al. [15]. An aircraft passenger cabin represents a significant human factors challenge related to both passengers and cabin crew. Human factors principles usually associated with the flight deck are now being applied to examine human performance functions and ensure that cabin crews and passengers are able to do efficiently what they need or want to do. In general, HFE input to product development has intended to be focused on methods such as direct observations, task analysis, usability issues etc. [16]. Task analysis is used in evaluation of existing systems in identifying design problems by defining the task or activities a user will be performing with the product [17]. The International Standards Organization (ISO) defines usability as being: “the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in particular environments” [18]. Usability issues are nowadays considered crucial in the product creation process with many organizations. User, Task, System and environment are the main components of usability and a good design often is defined in terms of interaction between these four factors [19]. The challenge of integrating human factors during the design process of a new product development is to enhance usability and reliability of the product. Eventually, this integration enables developers to perceive and understand what end users can and cannot do compared to the tasks they will be required to perform once the product is manufactured and operational. A number of studies have been published distinctly on integrating HF/E during design review process and proposing Virtual Reality for multiuser environments. But, seldom have proposed an innovative co-creative platform for involving HF/E with final users/designers during the design review process to validate the product from human perspective and evaluate aspects such as usability and task analysis. 2.1 System Design and Simulation Tools for Assessment In project CASTLE, a VR solution has been adopted in validation of cabin design solutions through human-centered approach. Each cabin item design like aircraft galley, lavatory, cabin interior etc. is simulated to a real scale 3D immersive environment and is presented to potential users to visualize the design and provide their feedback in order to improve the design [21]. While single person perspective has been generally adopted, in this paper we elaborate a case study for a multi-user co-located, collaborative and interactive cabin item design review. The XR technology with its hardware has made promising evolution towards achieving these elements to a simulated ambiance. Especially, MR which allows real and virtual elements to interact is capable to manipulate both physical and virtual objects [10]. [10] has proposed that a MR device such as a See-through display for example, Microsoft HoloLens can be suitable for developing elements to collaborate, interact and improve user experience. However, there are few attempts to build a co-located and collaborative MR environment between two MR devices, Microsoft HoloLens2, presenting as a platform for design review solution. The following section hence illustrates such a study in the field of aircraft cabin item design validation.


3 Field Study Application: Aircraft Galley The application study is built upon a scenario representing a design review setting. Such scenario is displayed in a MR environment which is populated with the 3D interactive model of a new conceived Galley of a Regional Aircraft. The Galley is the part of the Aircraft where food and drink are stored and where the crew members operate to prepare the in-flight service. Two users with different roles are enabled to share the scenario. One user act as a crew member/flight attendant and the other as an observing human factors expert. The observable factors are related to usability and ergonomics including interaction with the objects and reachability of the surfaces. These functions are to be examined by the human factors expert in order to evaluate tasks and usability in the environment. As described in detail in Sect. 3.2, software application is built which allows the crew member to perform a task in Mixed Reality naturally interacting with selected parts of the model and the human factors expert to observe the scene and evaluate the quality of interaction in real time Hence, two users can share the scene in a collaborative and co-located approach while wearing two Mixed Reality head mounted displays. 3.1 Mixed Reality Platform Development The whole scenario is built in a 3D software, UNITY version 2019.4 LTS for Microsoft HoloLens2. The workflow begins with the 3D model acquisition from the designers. This 3D model of the aircraft galley is collected and analyzed in a CAD modelling software, Rhinoceros 3D for aspects such as surfaces and normals so as to export to desired file formats to Unity 3D. On Unity, all the necessary features like hand interaction, grabbable feature, co-located collaboration are developed. Thanks to the Mixed Reality Tool kit (MRTK), a cross- platform tool kit developed by Microsoft to help developers for easy app modelling. MRTK features range from eye-tracking, hand tracking to UI controls and can be integrated to Unity to build the application effortlessly. Three main functions are designed to simulate the interaction with the water jar: a) pointer selection from MRTK configuration profile, b) collider and c) attaching the necessary interaction scripts for performance. In HoloLens2, interaction with game objects happens with pointers which can be near or far pointers. As the name suggests, near pointers are used for objects very close to the user like in the case of the water jar. For grabbing and placing the water jar, firstly the water jar has been separately imported to Unity 3D scene as a child of the main galley model and a box collider has been adopted to define the shape of it. An inbuilt Object manipulator and Near Interaction grabbable scripts are added to the water jar game object so as to trigger the interaction with it. Finally, to make the user recognize the gesture they make in a Mixed Reality environment, hand tracking has been enabled. A hand mesh displays the rendered mesh of the hand overlapped on the user’s hand as shown in the Fig. 1 Other parameters such as Spatial Awareness, Target Scale (Room scale), Display Option (Occlusion), Pulse Shader Spatial Mesh Prefab have been adjusted to make the Mixed Reality application performance higher. In order to develop shared experiences for both users, the target scenario has been studied to define the features required in the


environment. Hence, a 3D environment using a MR device, Microsoft HoloLens2, with one-to-one sharing experiences has been determined according to the aircraft galley scenario. The collaboration will be synchronous and the physical environments are similar (Figs. 2 and 3).

Fig. 1. Hand interaction with water jar

Fig. 2. Hand mesh profile as visible in the application

Fig. 3. Interaction function.

Multi-user operation is essentially the exchange of messages between computers or devices. For example, when user1 in the aircraft galley moves the water jar, this command, or the position of the water jar, is sent from user1 through the server to user2's view (Fig. 4).
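Purely as a language-agnostic illustration of this message flow (the actual application relies on Photon Unity Networking and Azure Spatial Anchors, described below, not on the code here), the update travelling from user1 to user2 can be thought of as a small serialized state packet relayed by a server; every name, field and transport detail in this Python sketch is an assumption made only for illustration.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PoseUpdate:
    object_id: str   # e.g. "water_jar" (illustrative identifier)
    position: tuple  # (x, y, z) in the shared anchor's reference frame
    rotation: tuple  # quaternion (x, y, z, w)
    sender: str      # "user1" or "user2"

def encode(update: PoseUpdate) -> bytes:
    """Serialize a pose update before handing it to the relay server."""
    return json.dumps(asdict(update)).encode("utf-8")

def decode(payload: bytes) -> PoseUpdate:
    """Rebuild the pose update on the receiving headset."""
    return PoseUpdate(**json.loads(payload.decode("utf-8")))

# user1 moves the jar -> the relay forwards the packet -> user2 applies it locally
packet = encode(PoseUpdate("water_jar", (0.42, 1.10, 0.05), (0, 0, 0, 1), "user1"))
print(decode(packet))
```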


Fig. 4. Multi-user architecture

According to the above description, a networking (for server) and anchor sharing (for position) procedure has been adopted to develop the application. Firstly, networking solution is connected to the existing UNITY project. Photon Unity Networking (PUN) has been chosen for its fast, reliable and easy multiplayer framework. Azure Spatial Anchors are considered for the project which are updated to the project through Software development kits (SDK’s). Azure Spatial Anchors are points of interests which are used to attach the content to physical locations. They represent a physical point in this world that persists in the cloud network. Thus, PUN connects the two HoloLens to share the local position of the virtual content in 3D space whereas Azure updates the position of the users in real time in the shared space. C# scripts are attached to add respective features to the game objects. A debug window is placed in the same scene beside the galley which logs all the behaviors and shows real time update of the program. In order to represent the users, a sphere prefab has been instantiated to record the movements and position of the other user in the MR environment. The galley model prefab is instantiated on both HoloLens through a shared plane where the model is placed. A table anchor designed will align the plane for both users in their physical environment. Finally, 4 button prefabs are placed in the scene below the debug window which configures the azure session as shown in Fig. 5 Both Azure and PUN are configured through Account ID’s and Access Keys specially dedicated for this project (Fig. 6).

Fig. 5. The Aircraft galley scene with debug window and buttons.


Fig. 6. Multi-user framework

3.2 Task Execution Firstly, User1 enacting as crew member wears the Microsoft HoloLens2 and launches the galley application. Second user enacting as the HF expert, wears a second Microsoft HoloLens2 and starts the galley application. As primary user, the user enacting as crew member starts the Azure session by clicking on the button ‘Start Azure Session’ and the debug window logs these steps to show errors, if any. Then, the crew member creates and shares the azure anchor by clicking on the respective buttons which create the anchor at the location of the table anchor and store the information and shares the anchor ID with other users in real-time respectively. Once the anchor is shared, the HF expert starts the azure session to connect to the resource and downloads the anchor information from the shared ID by clicking on ‘Get Azure Anchor’. The crew member will be able to interact with the water jar and other parts of the galley with the hand mesh prefab displaying information of the hand tracking. Crew member picks up the water jar and places it on the galley platform and repeats this task.

Fig. 7. Two users collaborating by wearing Microsoft HoloLens and interacting with virtual water jar.


The human factor expert in the same environment can visualize the entire scenario to evaluate the task performance, reachability and usability functions of the application (Fig. 7).

4 Conclusion and Future Works The illustrated study elaborates an Extended Reality (XR) environment as a co-creative platform for collaboration and interaction between users, promoting collaborative decision making during a design review. A product development process is usually a lengthy and convoluted procedure, with numerous disciplines involved in making the end result a success. Thus, there is a need to verify the designs at every step of the process before the product is physically manufactured. In particular, Human Factors and Ergonomics (HFE) evaluations play a vital role in studying the behavior of human beings in the product environment with its tasks and tools. In this process, there is a need to involve a Human Factors (HF) expert in the design review well before the physical mock-up of the product, in order to derive the task analysis, evaluate the interactions for overall human well-being, and reduce time and costs. Virtual environments serve as a substitute for physical mock-ups with a notable reduction of costs and time. In recent years, simulated 3D environments in the form of digital mock-ups have increasingly been used to provide visualization solutions for the product in the concept design phase. In particular, XR technologies are making their way as tools to co-create value experiences with the product. These technologies offer high potential to virtually visualize, collaborate and interact for a user-centric experience. In addition, with how the COVID-19 situation is playing out, there is more emphasis than ever on finding efficient ways to work and collaborate, making the push for suitable virtual replacements of the environment just as strong as for the real ones. To this extent, this paper outlines a software application built with Mixed Reality (MR) technologies for Microsoft HoloLens2 for aircraft interior design validation. An aircraft galley item has been taken into account to design a multi-user, co-located collaborative environment involving two users acting as a crew member and a human factors expert. Both users access the MR environment of the aircraft galley through Microsoft HoloLens2, and the crew member performs the task of picking the water jar from the shelf of the galley and placing it on the platform. The HF expert visualizes the entire scenario and analyses the task and usability factors of the galley and of the human. The features of visualization, task analysis, reachability and usability are to be studied in such an environment. This kind of MR application, which provides a means to work synchronously in both the physical and the virtual domain, reducing domain transition costs and bringing new aspects to design review solutions, can bring substantial emergent benefits to the design process. For future work, the main project will continue with the selection of the parameters to evaluate the effectiveness of the platform. Firstly, the platform will need to be refined, and the data about its usability will be based on subjective evaluations collected from participants in experiments and through standard tools such as, for example, the 5-point Likert scale-based SUS (System Usability Scale). In this phase mainly subjective parameters


can be used even if objective parameters such as the time needed to perform a task, e.g., grabbing the jar and positioning it on the shelf of the galley, can be used as a reference objective measure. Moreover, the impact of these systems on the overall efficiency and efficacy on the product development phase should be evaluated in order to evaluate the actual benefits brought by the validated technology. Dealing with a new design, it is not easy to gather such data. Nevertheless, discussion with manufacturers and designers will be at the base of such evaluation phase. Acknowledgements. The work presented in this paper has received funding from the Clean Sky 2 Joint Undertaking under the European Union’s Horizon 2020 research and innovation program under grant agreement No. 807083.

References 1. Holt, J.E., Radcliffe, D.F., Schoorl, D.: Design or problem solving—a critical choice for the engineering profession. Des. Stud. 6(2), 107–110 (1985) 2. Green, W. and Jordan, P.W.: Human factors in product design: current practice and future trends. CRC Press (1999) 3. Salvendy, G.: Handbook of human factors and ergonomics, 4th edn. John Wiley & Sons, New Jersey (2012) 4. Caple, D.C.: The IEA contribution to the transition of Ergonomics from research to practice. Appl. Ergon. 41(6), 731–737 (2010) 5. Barclay, I., Dann, Z., Holroyd, P.: New product development. Routledge, London (2010) 6. Dul, J., et al.: A strategy for human factors/ergonomics: developing the discipline and profession. Ergonomics 55(4), 377–395 (2012) 7. Brecher, C. (ed.): Advances in Production Technology. LNPE. Springer, Cham (2015). https:// doi.org/10.1007/978-3-319-12304-2 8. Huang, G.Q.: Web-based support for collaborative product design review. Comput. Ind. 48(1), 71–88 (2002) 9. Dieker, L. A., Rodriguez, J.A., Lignugaris/Kraft, B., Hynes, M. C., Hughes, C.E.: The potential of simulated environments in teacher education: current and future possibilities. Teacher Educ. Special Educ. 37(1), 21–33 (2014) 10. Santhosh, S., De Crescenzio, F., Vitolo, B.: Defining the potential of extended reality tools for implementing co-creation of user oriented products and systems. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) ADM 2021. LNME, pp. 165–174. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_17 11. Ulrich, K.I., Eppinger, S.D.: Product Design and Development 3rd edn. Tata Mcgraw Hill (2004) 12. Norman, D. A., & Draper, S. W.: User centered system design: New perspectives on humancomputer interaction (1986) 13. Chialastri, A.: Human-centered design in aviation. In: CEUR proceedings 4th Workshop HCP Human Centered Processes, Aviationlab, Rome, pp. 43–47 (2011) 14. De Crescenzio, F., Bagassi, S., Asfaux, S., Lawson, N.: Human centred design and evaluation of cabin interiors for business jet aircraft in virtual reality. Int. J. Inter. Des. Manuf. (IJIDeM) 13(2), 761–772 (2019). https://doi.org/10.1007/s12008-019-00565-8 15. Scaravetti, D., Montagnier, E.: Human Factor Early Involvement in Product Design Processes. In: DS 58–6: Proceedings of ICED 2009, the 17th International Conference on Engineering Design, Design Methods and Tools (pt. 2), Palo Alto, CA, USA, vol. 6, pp. 135–146 (2009)


16. National Research Council.: Health care comes home: the human factors. National Academies Press, Washington, D.C. (2011) 17. Chapanis, A.: Human factors in systems engineering. John Wiley & Sons, Inc. (1996) 18. Bevan, N., Carter, J., Harker, S.: ISO 9241-11 revised: What have we learnt about usability since 1998? In: Kurosu, M. (ed.) HCI 2015. LNCS, vol. 9169, pp. 143–151. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-20901-2_13 19. Shackel, B.: Human Factors and Usability-Whence and Whither?. Software-Erqonomie’85: Mensch-Computer-lnteraktion (1985) 20. Hettinger, L.J., Haas, M.W. (eds.) Virtual and adaptive environments: Applications, implications, and human performance issues. CRC Press, New Jersey (2003) 21. De Crescenzio, F., Bagassi, S., Starita, F.: Preliminary user centred evaluation of regional aircraft cabin interiors in virtual reality. Sci. Rep. 11(1), 1–10 (2021) 22. Lee, G. A., Teo, T., Kim, S., Billinghurst, M.: Mixed reality collaboration through sharing a live panorama. In: SIGGRAPH Asia 2017 Mobile Graphics & Interactive Applications, pp. 1–4 (2017) 23. El-Jarn, H., Southern, G.: Can co-creation in extended reality technologies facilitate the design process?. J. Work-Appl. Manage. (2020) 24. Bennett, J. L.: Managing to meet usability requirements: establishing and meeting software development goals. Visual Display Terminals, pp. 161–184. Prentice-Hall (1984)

Posture Interactive Self Evaluation Algorithm Based on Computer Vision Emmanuele Barberi1(B) , Massimiliano Chillemi1 , Filippo Cucinotta1 , Demetrio Milardi2 , Marcello Raffaele1 , Fabio Salmeri1 , and Felice Sfravara1 1 Department of Engineering, University of Messina, Messina, Italy

[email protected] 2 Department of Biomedical, Dental, Morphological and Functional Imaging Sciences,

University of Messina, Messina, Italy

Abstract. Many workers and citizens have been forced to make a lifestyle change in the past two years due to the pandemic emergency. In order to keep a high level of personal health, doctors suggest doing fitness exercises. Before the pandemic it was possible to do these exercises at the gym or during dedicated sessions in the office supervised by professional trainers. During the pandemic emergency the gyms were closed, workers were forced to stay home and people started to do gym exercises by themselves without the supervision of a professional figure. This situation could lead to several diseases associated with musculoskeletal disorders if the exercises are performed incorrectly. In this work, an approach based on the pose-estimator application OpenPose is developed. The reference exercise is an isometric squat performed by a professional trainer. During the exercise, thanks to a deep neural network, the pose estimator extracts a series of key-points and vectors which represent the user's pose. A dataset of videos (for both correct and incorrect postures) has been used to train several machine learning algorithms. The result is an automatic tool that recognizes incorrect poses during the exercise and helps the performer to correct them. Keywords: Machine learning · Computer vision · Ergonomics · Physiotherapy · Posture assessment

1 Introduction The lifestyle change imposed by the COVID-19 pandemic has pushed many people to spend much of their time at home. In fact, during the emergency, several activities were carried out at home, such as those related to working and gymnastics, which previously took place elsewhere and in most cases under the supervision of a professional trainer. An incorrect posture maintained for several hours during smart working or during gymnastic activities may lead to the onset of several disabling diseases related to the musculoskeletal apparatus, due to the lack of supervision by a personal trainer. In the scientific literature it is possible to see how the topic of the posture assumed during work activity, either in standard or smart-working mode, or during physical exercise has been widely


addressed. As far as the working posture is concerned, there are some examples in the literature of the application of the Rapid Upper Limb Assessment (RULA) and Rapid Entire Body Assessment (REBA) methods [1,2]. These methods assess the workers’ risk of developing musculoskeletal disorders based on their position during work. The RULA and REBA methods assign a level of risk to the job carried out by the worker through the evaluation of different scores calculated according to the posture, the weight sustained and the time duration of the position, or the number of repetitions. Both methods evaluate the posture according to certain ergonomic angles. The evaluation of these angles is often automated through computer vision applications like OpenPose. The use of OpenPose for similar purposes has gradually grown. It is an opensource library, developed by Cao et al. [3], based on AI that allows the recognition, within an image, of the human body. In more detail, through a Convolutional Neural Network (CNN), it can recognize the human body by identifying keypoints on it. In the scientific literature there are numerous works that exploit the potential of this tool for the most diverse applications. One of the main advantages of this method is to obtain information on the posture taken by a person, without the need to apply any instrumentation on it [4]. This has led to a wide use of OpenPose in areas where it would be inconvenient to use sensors, such as sports. Carputo et al. [5] assess the posture taken by a motorcyclist studying his center of gravity. Jafarzadeh et al. [6] use this method to estimate the pose of hurdles athletes. Li et al. [7] exploit the capabilities of OpenPose to implement a system capable of evaluating in real time the bat of a baseball player in order to correct its pose through a scoring system. There are also examples of medical use, for example in Stenum et al. [8] where the estimation of the laying is used for the analysis of human gait. As for the physical activity performed at home, in the scientific literature it is possible to find numerous works concerning the different postures of yoga. Narayanan et al. [9] and Rishan et al. [10] use OpenPose to train a classification algorithm to recognize different positions characteristic of yoga. Thar et al. [11] present a work that detects the posture during the execution of a certain yoga position and, through the evaluation of the angles, suggests visual feedback for the correction of the posture. In addition to the above mentioned RULA and REBA methods, the assessment of posture is carried out in the workplace also through OpenPose, as reported by the work of Arora et al. [12] in which an assessment is made of the risk to which workers in the textile industry are exposed, based on the acquisition of the posture to which follows an application of Machine Learning (ML) to carry out a binary classification. In this last case, the classification is performed after the acquisition of the pose by means a ML approach. ML is a branch of AI that studies systems that can learn through experience. This prerogative makes it different from classic computer programming. In fact, computer programming can operate only in the conditions with which it was designed, instead the ML can work even in conditions that it has never observed. This means that ML algorithms are used when it is not possible to know in advance all the conditions that the system will encounter during its operation. 
One of the techniques with which ML can be implemented is called Deep Learning (DL) [13], which is based on particular Artificial Neural Networks (ANN) called Deep Neural Networks (DNN). The aim of this work is to combine these two useful methods (OpenPose and ML) in order to implement a user-friendly system that helps users keep the correct posture while performing an exercise. The reference exercise studied


in this paper is an isometric squat. A good evaluation of the posture assumed during the isometric squat was carried out in the work of Yang et al. [14], which propose a method of self-evaluation with feedback, based on the identification of characteristic angles. Another important study on the posture assumed during the isometric squat execution is that of Xie et al. [15] where the 2D coordinates are converted into 3D coordinates (through Human Mesh Recovery tool). As a result, a real-time 3D model of the user is reconstructed in order to give an intuitive feedback. Other works involve the evaluation of the posture through the 3D coordinates obtained through the use of more cameras [16] or also through the combined use of cameras and Kinect sensor [17]. The method proposed in this paper takes advantage of the OpenPose application. The coordinates of these points are used to train ML algorithms through supervised learning. This training allows the system to distinguish the correct positions from the incorrect ones and it suggests the modifications to be made to the laying for a proper exercise. The innovations introduced with the method presented in this work concern the use of a single camera, however positioned (in a range of 180° thus in front of the user), the use of 2D coordinates (raw data, not processed) provided by OpenPose, and a feedback system based on ML, which uses only the coordinates of the keypoints.

2 Material and Methods A brief description of the steps involved in the method proposed in this work is shown in Fig. 1. The first step of the procedure is to record a video dataset of the exercise (isometric squat). The dataset consists of a part of videos where a professional trainer performs the exercise with the correct posture, and another part where the trainer performs the exercise with bad posture and typical errors. All the videos are the inputs of the pose estimator, which returns, for each video, a set of information about the trainer's pose during the performance (keypoints, angles, vectors).

Fig. 1. General flow of the method

A part of the dataset (90%) has been used to train and validate the ML algorithm. The aim of this training is to help the algorithm distinguish the videos with bad posture from the ones with good posture. The remaining part of the videos (10%) has been used for testing the function obtained by the ML algorithm. The output of the test is the confusion matrix with the confidence parameters of the trained machine learning model. 2.1 Reference Exercise – Isometric Squat The reference exercise is the isometric squat. This type of exercise increases the strength of the quadriceps. The isometric position allows the exercise to be performed without dynamic stress on the knees. The muscles are contracted during the entire duration of the exercise; a correct position is shown in Fig. 2.

Fig. 2. Correct position of the isometric squat – Left front view, Right lateral view

In the position of the isometric squat, the feet are slightly wider than the shoulders, the direction of the knees is parallel to the direction of the toes, which must be slightly extra-rotated. In the lateral view, the tip of the knees must be aligned with the tip of the toes. The thighs must be almost parallel to the floor (the angle between thigh and shin must be 90°). The head is turned slightly downwards staring a fixed point on the ground 2–3 m ahead. During the isometric position, the heels must be kept firmly on the ground. Generally, the main errors during the execution of this exercise are: rotation of the heels (mainly in lateral and horizontal plane) during the exercise, rotation of the feet (horizontal plane), incorrect position of the head (extra-rotation or intra-rotation in sagittal plane), incorrect direction of knees or toes, incorrect position of the shoulders. In this phase, the incorrect positions investigated (Fig. 3) are: the tip of the knees further forward than the tip of the toes, the incorrect alignment of the knees (valgization of the knees) compared to the toes, and the raised heels.


Fig. 3. Incorrect positions evaluated in this paper – Left Knees further forward than toes, Centre - Incorrect alignment between knee and toes (valgization), Right Lift of the heels

2.2 Video Dataset The videos can be captured with a normal smartphone camera without particular attention to the distance or relative position between the camera and the performer. The format of the video is not important because the pose estimator accepts all the main video formats. Each video has been recorded with a complete rotation of 180° around the professional trainer. The total number of videos in the dataset is 12. Inside the dataset there are four different positions of the exercise and for each position there are three different videos. The group of videos with the correct position is labeled “Correct Posture”, the group with the knees further forward than the toes is labeled “Bad Posture 1”, the group with the raised heels is labeled “Bad Posture 2” and the group with the valgized knees is labeled “Bad Posture 3”. From each video, 84 frames have been extracted, which are the input for the OpenPose application. Table 1 shows a summary of the number of videos for each group and the number of frames extracted for the post-processing in OpenPose.

Table 1. Total number of videos with description of the groups involved (No. of elements).

Classes           In the paper   Video   Frames   Description
Correct Posture   CP             3       254      Correct posture during the exercise
Bad Posture 1     BP1            3       254      Knees further forward than toes
Bad Posture 2     BP2            3       254      Lift of the heels
Bad Posture 3     BP3            3       254      Valgization of the knees
Tot.                             12      1,016

2.3 The Pose Estimator and the Use of Keypoints OpenPose is the pose estimator used in this paper. There are several types of applications for pose estimation, but it was decided to use OpenPose because it is an open-source application. During the estimation, OpenPose detects a series of key-points from which it is possible to create a skeleton, as shown in Fig. 4. The table in Fig. 4 shows the names of the keypoints with the respective numbers defined by the application; the same figure also shows a skeleton mapped directly onto the professional trainer. The output of OpenPose used for training the Machine Learning algorithm is a matrix with the coordinates of the key-points. Each frame is described by a matrix with 25 rows (number of keypoints) and 2 columns, the X and Y coordinates.

Fig. 4. Numbered key-points detected by OpenPose on left, Skeleton mapped on real image on right

2.4 Machine Learning and Used Approach The coordinates of the keypoints found through OpenPose are stored in a JSON file. First of all, the coordinates of each single frame are stored in a (1, 50) vector. The single frame is the sample of the ML problem and the 50 coordinates are the features. The matrix obtained in this way has 1,016 rows (total number of frames extracted) and 50 columns (X and Y points flattened into a single row), as shown in Fig. 5.

Fig. 5. Matrix of the dataset
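A minimal sketch of this flattening step is given below, assuming each frame has been exported by OpenPose as a JSON file whose layout follows the usual `people[0]["pose_keypoints_2d"]` list of (x, y, confidence) triplets; the file-naming pattern, the directory layout and the choice of dropping the confidence value are illustrative assumptions.

```python
import json
from pathlib import Path
import numpy as np

def frame_to_features(json_path):
    """Read one OpenPose output file and return the 50 flattened (x, y) coordinates."""
    with open(json_path) as f:
        data = json.load(f)
    kp = np.array(data["people"][0]["pose_keypoints_2d"], dtype=float).reshape(-1, 3)
    return kp[:, :2].ravel()  # drop the confidence column, keep 25 * 2 = 50 values

def build_dataset(frames_dir):
    """Stack every frame of a recording into an (n_frames, 50) feature matrix."""
    files = sorted(Path(frames_dir).glob("*_keypoints.json"))  # assumed naming pattern
    return np.vstack([frame_to_features(p) for p in files])

# Illustrative usage: X_cp = build_dataset("openpose_output/correct_posture/")
```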


The algorithms used in this work are intended to carry out a classification operation. In particular, they take the coordinates as input and return as output the class to which these coordinates belong. The classes within which the inputs can be placed are four: “CP”, “BP1”, “BP2” and “BP3” (as explained and summarized in Table 1). The above classification was implemented using supervised learning [18]. With supervised learning, ML algorithms are trained using examples. The algorithm, once trained with samples obtained by OpenPose, will be able to classify new sets of coordinates that it has never seen in the examples. Since the classification was implemented by means of supervised learning, it was necessary to carry out a labelling operation. This consists of applying a label to each sample. The labels are nothing more than strings bearing the names of the four classes considered. All the labels are stored in a vector of 1,016 rows (the number of samples). As suggested by Ripley [19], in each ML procedure it is necessary to implement three distinct phases called “Train”, “Validation” and “Test”. In the first phase, the algorithm learns using a first set of data (train dataset) and looks for a correlation between input data (features) and output data (labels). A second dataset (validation dataset), different from the previous one, allows the algorithm to evaluate its training state. At this stage the algorithm tries to classify these new data and compares the result with the associated labels, assessing any mistakes made. These first two phases are usually repeated several times with the aim of improving performance and reducing the classification error. Once the train and validation phases are completed, the algorithm interrupts learning and is ready to be used for the testing phase. To verify the learning, it is good practice to perform the test phase using a third dataset (test dataset), labelled and containing data never seen by the algorithm during the previous steps. During this last phase, the algorithm then classifies the new data. As for the evaluation of the performances, both the validation phase and the test phase generate the so-called confusion matrices, in which the answers given by the algorithm (predicted classes) are shown against the associated labels (true classes). In order to respect these phases, the matrix of the dataset (Fig. 5) has been divided into three distinct parts: the “train dataset”, the “validation dataset” and the “test dataset”. They contain respectively 70%, 20% and 10% of the main dataset elements. Table 2 shows the subdivision and the number of samples used for each phase.

Table 2. Datasets subdivision for the ML algorithm: number of samples contained in each dataset for each class.

Classes   Train (~70%)   Validation (~20%)   Test (~10%)   Main (100%)
CP        171            57                  26            254
BP1       171            57                  26            254
BP2       171            57                  26            254
BP3       171            57                  26            254
Tot.      684            228                 104           1,016
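A sketch of the labelling and 70/20/10 subdivision described above, using scikit-learn, is shown below; the file name, the class-ordered layout of the feature matrix and the use of `train_test_split` (rather than whatever tool the authors actually used) are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: (1016, 50) matrix of flattened keypoint coordinates, one row per frame
X = np.load("keypoints_matrix.npy")                    # illustrative file name
# y: 1,016 labels, assuming the rows of X are grouped by class in this order
y = np.repeat(["CP", "BP1", "BP2", "BP3"], 254)

# First carve out the ~10% test set, then split the rest into ~70% train / ~20% validation
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=2 / 9, stratify=y_rest, random_state=0)
```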


In order to choose the best algorithm for the case under study, different ML algorithms have been evaluated and their performances compared. The models used are Support Vector Machine (SVM), ANN, K-Nearest Neighbors (KNN), Decision Trees, Naive Bayes, and Discriminant Analysis. A particular index, called accuracy, is used to compare the model performance: the accuracy is the ratio between the number of correct predictions and the total number of predictions. This index has been evaluated in both the validation and the test phase.
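As a sketch of this comparison, the classifiers can be fitted and scored with scikit-learn. The cubic-kernel SVM below approximately mirrors the "Cubic SVM" preset reported in Table 4 (polynomial kernel of degree 3, box constraint 1, one-vs-one), while the other models use default settings as an assumption, since the paper does not list their hyperparameters.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

models = {
    "SVM (cubic)": SVC(kernel="poly", degree=3, C=1.0, decision_function_shape="ovo"),
    "Discriminant analysis": LinearDiscriminantAnalysis(),
    "ANN": MLPClassifier(max_iter=2000),
    "KNN": KNeighborsClassifier(),
    "Decision tree": DecisionTreeClassifier(),
    "Naive Bayes": GaussianNB(),
}

# X_train, y_train, X_val, y_val, X_test, y_test come from the split sketched after Table 2
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), model)   # scale features before fitting
    clf.fit(X_train, y_train)
    val_acc = accuracy_score(y_val, clf.predict(X_val))
    test_acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: validation {val_acc:.3f}, test {test_acc:.3f}")
```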

3 Results and Discussion All the models mentioned above have been trained, validated, and tested, and their performances, in terms of accuracy, are shown in Table 3.

Table 3. Used ML algorithms and their performances.

Model type              Validation Accuracy (%)   Test Accuracy (%)
SVM                     100.0                     100.0
Discriminant analysis   99.6                      99.0
ANN                     99.6                      99.0
KNN                     99.1                      98.1
Decision trees          93.4                      84.6
Naive Bayes             71.1                      70.2

The model with the highest level of accuracy in both the validation and the test phase is the SVM. The final algorithm chosen for the classification is therefore the SVM, with the main characteristics shown in Table 4.

Table 4. SVM main characteristics.

Preset                 Cubic SVM
Kernel function        Cubic
Kernel scale           Automatic
Box constraint level   1
Multiclass method      One-vs-One

The confusion matrices coming from the validation and test phases are shown in Fig. 6. As noted above, they show the predicted classes as a function of the true ones. On the main diagonal it is possible to see the correct predictions; elsewhere there are the wrong ones. Since the accuracy is equal to 100% in both cases, all the elements that are not on the main diagonal are empty. In fact, all the dataset elements labelled “CP” have been classified as “CP”, and so on for the other classes. With these results, the trained model has a high ability to classify the postures assumed during the execution of the isometric squat and, in particular, to identify the type of error committed. This method could therefore be used for the self-evaluation of physical exercise, because it can analyze each frame in real time, extracting the keypoints and feeding them to the ML model, which provides feedback to the performer. In this way the performer will know whether the exercise is carried out correctly and, if not, will be warned by the application to correct the posture and will be advised on how to do it.

Fig. 6. Confusion matrix of validation phase on left, and of test phase on right.
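A high-level sketch of the real-time feedback loop outlined above is given below. The keypoint-extraction call is a placeholder (OpenPose can be invoked through its own Python bindings or as an external process; the function shown here is purely hypothetical), and the feedback messages are illustrative, not the wording used by the authors' tool.

```python
FEEDBACK = {
    "CP":  "Posture correct - hold the position.",
    "BP1": "Knees are further forward than the toes - shift your weight back.",
    "BP2": "Heels are lifting - keep them on the ground.",
    "BP3": "Knees are collapsing inward - push them out over the toes.",
}

def extract_keypoints(frame):
    """Placeholder for the OpenPose call returning the 50 flattened (x, y) coordinates."""
    raise NotImplementedError("wrap pyopenpose or the OpenPose CLI here")

def feedback_loop(video_frames, classifier):
    """Classify each frame and yield the corresponding advice to the performer."""
    for frame in video_frames:
        features = extract_keypoints(frame).reshape(1, -1)
        label = classifier.predict(features)[0]
        yield FEEDBACK[label]
```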

4 Conclusions

Based on two technologies (OpenPose and Machine Learning), the study proposes a new method for evaluating the correctness of posture during a gymnastic exercise. The method has been tested on a classic and widespread exercise: the isometric squat. A brief description of the correct posture during this exercise and of the common errors made by performers has been reported. In this phase, the exercise was carried out by a professional trainer. Several videos were recorded with a smartphone: three videos for the correct posture and nine videos for three different incorrect postures. From each set of videos, 254 frames were extracted and used for the pose estimation. The number of postures defines the number of classes for the supervised machine learning. Thanks to OpenPose, the main keypoints, and therefore a vector of X,Y coordinates of the performer's skeleton, were extracted from these frames and used to train the Machine Learning algorithm. The objective of the machine learning is to classify the exercise among the four defined classes using the vectors obtained from OpenPose. In an early stage, several machine learning algorithms were used and compared. The SVM algorithm was the most accurate one, and the proposed method is based on this type of machine learning technique. The algorithm recognizes with a high value of accuracy all the proposed classes in the validation and test phases. The result is a function that can recognize whether the isometric squat is performed in a correct way or with one of the incorrect postures defined by the classes. Thanks to the classification, the function can give feedback to the performer in order to correct the exercise, without markers or devices attached to the body and without the presence of a professional trainer.


A Review of Benchmarks for Visual Defect Detection in the Manufacturing Industry

Philippe Carvalho1(B), Alexandre Durupt1, and Yves Grandvalet2

1 Roberval, Université de technologie de Compiègne, Compiègne, France
{philippe.carvalho,alexandre.durupt}@utc.fr
2 Heudiasyc, UMR CNRS 7253, Université de technologie de Compiègne, Compiègne, France
[email protected]

Abstract. The field of industrial defect detection using machine learning and deep learning is a subject of active research. Datasets, also called benchmarks, are used to compare and assess research results. There are a number of datasets in industrial visual inspection, of varying quality. Thus, it is a difficult task to determine which dataset to use. Generally speaking, datasets which include a testing set, with precise labeling and made in real-world conditions should be preferred. We propose a study of existing benchmarks to compare and expose their characteristics and their use-cases. A study of industrial metrics requirements, as well as testing procedures, will be presented and applied to the studied benchmarks. We discuss our findings by examining the current state of benchmarks for industrial visual inspection, and by exposing guidelines on the usage of benchmarks.

Keywords: Defect detection · Visual inspection · Machine learning

1 Introduction

The field of industrial defect detection is a subject of intense research. Most publications present new datasets and new algorithms evaluated using industrial defect datasets. Gao et al. [12] and Chen et al. [7] present two reviews on recent advances in industrial defect detection, which show that a large number of algorithms have been tested on a large number of datasets. This process of benchmarking, which consists of comparing a certain number of algorithms on a common dataset, using the same metrics, is important because it gives a fair estimate of the performance of an algorithm, and allows comparison with other algorithms. Several kinds of benchmarks can be made: defect detection (detecting whether an image contains a defect), defect classification (identifying which type of defect it is) and defect segmentation (identifying the boundaries of the defect on the picture) (cf. Fig. 1). In our work, we focus on defect detection in the production of metallic and plastic parts.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1527–1538, 2023. https://doi.org/10.1007/978-3-031-15928-2_133


Fig. 1. Defect detection, classification and segmentation examples. Sample taken from the KolektorSDD dataset [19]

In Sect. 2, we will study how performance is assessed for benchmarking and what the reported results depict. In Sect. 3, we will review and compare the datasets most commonly used in visual defect detection benchmarks. Section 4 will discuss the main findings of this article and present our guidelines regarding the current state of the benchmarks in industrial defect detection.

2 Industrial Needs for Visual Defect Detection

The detection of defects is essential in the manufacturing industry, since the presence or absence of a defect determines the validity of the analyzed item. We will refer to this task as defect detection. It may also be useful to determine the type of defect detected, which we will refer to as defect classification. This can be useful for gathering statistics on the types of defects most often encountered, and for performing a diagnosis on the manufacturing process. Finally, it is also useful to retrieve the location of the defect in the image, which we will refer to as defect segmentation. This location makes it easy to check the result of the defect detection.

2.1 Metrics for Classification

The metrics used in visual defect detection and classification often originate from the field of machine learning, namely precision, recall and F-score. Other measures can be derived from a confusion matrix but are seldom used. In industrial visual inspection, defect detection algorithms must be able to detect any kind of defect, including defects that have not been seen by the classifier during training, and do so on a real-time production line. Other measures can be introduced here to better represent the performance of these algorithms in this setting: the false positive rate (FPR) and the false negative rate (FNR). The accuracy measure, which is sometimes used to describe the performance of a machine learning algorithm, is not relevant in industrial defect detection due to the strong imbalance between classes. The aforementioned statistics are calculated as follows:

$\text{False Positive Rate (FPR)} = \frac{FP}{FP + TN}$  (1)

$\text{False Negative Rate (FNR)} = \frac{FN}{TP + FN}$  (2)

$\text{Precision} = \frac{TP}{TP + FP}$  (3)

$\text{Recall} = \frac{TP}{TP + FN} = 1 - \text{FNR}$  (4)

$\text{F-score} = \frac{2 \, Pr \, Re}{Pr + Re} = \frac{2TP}{2TP + FP + FN}$  (5)

$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$  (6)

Table 1. Confusion matrix used to gather evaluation statistics.

 | Ground truth positive | Ground truth negative
Prediction positive | True positives | False positives
Prediction negative | False negatives | True negatives

In this context, positives refer to the rare events, that is, defective items, and negatives refer to the majority class, that is, non-defective items. This is the vocabulary used in most articles as well as in the field of statistics. Measures such as the average precision (AP) or the Area Under the Receiver-Operator Curve (AUROC, or AUC), although very popular, do not give a proper estimate of the performance of a decision rule. Rather, these measures estimate the performance of a decision rule when modifying its decision threshold, which makes them a measure of a family of decision rules. Furthermore, the AUROC is an inadequate estimator in samples that contain very few positive examples. Nevertheless, these metrics can still be used to choose the best classifier threshold.
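As a compact illustration (not code from any of the reviewed works), the counts of the confusion matrix in Table 1 map to metrics (1)-(6) as follows:

```python
def classification_metrics(tp, fp, fn, tn):
    """Metrics (1)-(6) from confusion-matrix counts (assumes non-zero denominators)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # equals 1 - FNR
    return {
        "FPR": fp / (fp + tn),
        "FNR": fn / (tp + fn),
        "precision": precision,
        "recall": recall,
        "F-score": 2 * precision * recall / (precision + recall),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```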

2.2 Metrics for Segmentation

Segmentation performance is not as simple to evaluate as classification performance: a classification is either correct or incorrect, while a segmentation that is not pixel-perfect can still be considered satisfactory. Many measures can be defined to evaluate a segmentation. As in the classification case, the accuracy, defined as the ratio of correctly classified positive and negative pixels over all pixels, can be computed; however, this estimator is heavily biased because of class imbalance and should never be used to measure segmentation performance. Zavrtanik et al. [22] use the average pixel-wise AUROC and AP to estimate segmentation performance on the MVTec dataset [3]. Tabernik et al. [19] again use pixel-wise average precision. Lei et al. [13] use Intersection over Union (IoU) to determine the segmentation performance.


The metrics based on AUROC and AP are, again, measures of a family of predictors, that can help choose a specific threshold for the predictor. Many other metrics may be used for segmentation, such as the Dice index, popular in the medical image segmentation field [11]. The metrics are computed as follows, using G as the label ground truth and P as the predicted pixels:

$\text{Dice} = \frac{2\,|G \cap P|}{|G| + |P|}$  (7)

$\text{IoU} = \frac{|G \cap P|}{|G \cup P|}$  (8)

$\text{Boundary IoU} = \frac{|(G_d \cap G) \cap (P_d \cap P)|}{|(G_d \cap G) \cup (P_d \cap P)|}$  (9)

where G_d is the set of pixels located within a distance d from the border of the ground truth segmentation, and P_d is the set of pixels located within a distance d from the border of the predicted segmentation.
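A minimal NumPy/SciPy sketch of these segmentation metrics on boolean masks is given below; approximating the boundary band within distance d by subtracting a d-pixel erosion of the mask is our assumption, one possible reading of the definition above.

```python
import numpy as np
from scipy import ndimage

def dice(g, p):
    """Dice index (7) for boolean masks g (ground truth) and p (prediction)."""
    return 2 * np.logical_and(g, p).sum() / (g.sum() + p.sum())

def iou(g, p):
    """Intersection over Union (8)."""
    return np.logical_and(g, p).sum() / np.logical_or(g, p).sum()

def boundary_iou(g, p, d=2):
    """Boundary IoU (9), using an inner band of width d around each mask border."""
    g_band = np.logical_and(g, ~ndimage.binary_erosion(g, iterations=d))
    p_band = np.logical_and(p, ~ndimage.binary_erosion(p, iterations=d))
    return iou(g_band, p_band)
```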

2.3 Supervision Level

Another fact to take into account is the supervision level during training. Some algorithms, both in segmentation and classification, need stronger supervision signals for training, that is, more comprehensive labels. In the manufacturing defect detection community, as discussed in Bozic et al. [4], the supervision levels are defined as follows:

– Strong supervision: refers to training protocols using positive and negative samples, including the locations of the defects in positive samples. The location is usually given in a separate image file as a mask over the defect in the original image. This mask can be very precise (a pixel-wise overlay over the defect) or more weakly defined (a bounding box around the defect, such as the ellipses given in the DAGM dataset [21]).
– Mixed supervision: refers to training protocols using positive and negative samples. Some of the positive samples give the precise defect location, others do not; the latter are only used with a boolean label indicating whether the image is defective or not.
– Weak supervision: refers to training protocols using positive and negative samples. No defect location is given here: all labels are boolean (defective/non-defective).
– No supervision: refers to training protocols using only negative samples. The objective is to be able to detect defects during testing, even though none have ever been seen in training.

2.4 Training and Evaluation Procedure

Finally, an important part of performance assessment lies in the procedure used for training the model and evaluating it. If the procedures are not harmonized between different articles, results cannot be compared and the benchmarks comparing several algorithms on a given dataset will be flawed. A valid evaluation procedure, in an industrial context, requires data that has never been seen during training, in order to estimate the performance of the model on the data that it will see in the future. A training procedure usually consists of splitting a dataset in three parts, referred to as the training set (usually between 50% and 70% of the data), the validation set (15% to 30% of the data) and the testing set (15% to 30% of the data). These percentages may vary beyond these bounds; for instance, the DAGM testing set contains 50% of the data.

– The training set is used to directly train the model.
– The validation set is used for a first estimation of the performance of the model. It is mostly used for comparing different options and selecting one.
– The test set is used only once, when all the options of the algorithm have been selected. It is used to evaluate the model globally (including option selection) on a never-before-seen data set.

Such a procedure ensures that the published results provide an unbiased estimate of the performance of the model on test data. This test data is analogous to the data on which the model will have to detect defects on production lines. The defects that will then be observed will all be previously unseen by the classifier: that is why it is particularly important to test performance on unseen data. Published datasets should provide distinct training and testing sets, to ensure the repeatability and comparability of the results obtained when testing an algorithm.
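For instance, a stratified 70/15/15 split, with proportions chosen within the ranges above, could be implemented as follows (an illustrative sketch, not a prescription from any specific benchmark):

```python
from sklearn.model_selection import train_test_split

def three_way_split(X, y, val=0.15, test=0.15, seed=0):
    """Stratified train/validation/test split (default 70/15/15)."""
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=val + test, stratify=y, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=test / (val + test), stratify=y_rest,
        random_state=seed)
    return X_train, X_val, X_test, y_train, y_val, y_test
```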

3 Review of Existing Benchmarks

A number of datasets have been created in order to train and test algorithms in manufacturing defect detection and estimate their performance. It is important to have a great variety of datasets, because there are a lot of different industries where defects do not have the same properties at all. In order to compile the following list of datasets, we have decided to focus on industrial images showing metallic or plastic parts in a production line. We have started by analyzing the DAGM dataset, used for the thesis work of Dekhtiar, 2019 [10], and already presented as popular for industrial defect detection. We have found several dozens of articles using this dataset, and analyzed the other datasets that these articles were using, if they were compatible with our work. We have decided not to keep datasets which were referenced once or twice, which are often private datasets.


Fig. 2. Examples of items for each dataset

We have compared our findings with the work of Chen et al. [6]. While the list of datasets given there is much larger, most of those datasets are not compatible with our focus and did not achieve sufficient popularity to impact the publications of the field, and will thus not be analyzed here. We will present a succinct description of each dataset. Table 2 presents and compares the characteristics and use-cases of each dataset.

3.1 The DAGM Dataset

The DAGM dataset [21] was published as part of the 2007 DAGM symposium (Deutsche Arbeitsgemeinschaft für Mustererkennung e.V., the German chapter of the IAPR (International Association for Pattern Recognition)). The DAGM and GNNS (German chapter of the European Neural Network Society) proposed a competition on the subject of Weakly Supervised Learning for Industrial Optical Inspection, in order to help improve visual defect detection algorithms in industrial settings. The dataset is divided into two parts. The first part, containing 6 classes, is commonly referred to as the development dataset. The second part contains 4 classes and is referred to as the competition dataset. Each development class contains 1000 non-defective images and 150 defective images. Each competition class contains 2000 non-defective images and 300 defective images. All images are 512×512-pixel 8-bit grayscale. Defects are localized by an ellipse surrounding their precise location.


Each class is also divided into a training and a testing sub-dataset. They each contain half of the data, but the number of defective and non-defective items is not balanced between the two sets. The dataset was generated artificially, but was designed to resemble real defect detection problems in the manufacturing industry. Each image contains at most one defect, whereas real products can have several defects. The images (both texture and defect) are artificially generated using statistical methods. This dataset is among the most widely used datasets in industrial defect detection. However, it is an artificially generated dataset, and as such exhibits some statistical bias compared to real production data. For example, Bozic et al. [5] have proposed an algorithm that solves all 10 classes of the DAGM test set.

3.2 NEU Surface Defect Database

The Northeastern University Surface Defect Database was published in 2013 by Song et al. [18]. The dataset is constructed using greyscale pictures of defective hot-rolled steel sheets. The 1800 images are 200×200 pixels in size and are categorized into six defect types: crazing, inclusion, patches, pitted surfaces, rolled-in scales, and scratches. The defects are localized by a bounding box. This dataset accurately classifies defects according to their type, which makes it interesting for evaluating defect type classifiers. However, it is also one of the smallest datasets, both in terms of image size and sample size. Furthermore, this dataset does not contain dedicated testing data, which makes it impossible to compare the metrics of two algorithms tested on this dataset, since their testing sets will contain different items.

3.3 Severstal Steel Defect Dataset

The Severstal Steel Defect Dataset was published on Kaggle in 2019 [16] by PAO Severstal, a major Russian steel manufacturing company (https://www.severstal.com/eng/). This dataset contains 12,572 pictures of 512×256 pixels representing defective steel sheets and surfaces. The original Kaggle competition also featured a number of test images, but their labels have never been published. This dataset has the particularity of differentiating four types of defects, but the typology of these defects is not explained. In addition, every defect is located pixel-wise on the image. Very little detail was given about the creation of this dataset. Some images that do not have any defects show marks or peeling textures. Under these conditions, detecting defects can be a very difficult task, because even non-defective items can vary a lot in appearance. We can also ask how the choice between defective and non-defective items was made, and why this choice makes sense in industry. This ambiguity has already been raised in the literature [5].

3.4 KolektorSDD

KolektorSDD is a dataset published in 2020 by Tabernik et al. [19]. The dataset is constructed using images of defective electrical commutators. More specifically, the photographs depict defects of the plastic embedding of electrical commutators. The dataset contains 399 images, of which 52 contain a defect and 347 do not. Defects are precisely located by a pixel-wise mask that covers the exact shape of the crack. Images are grey-scale, 500 pixels wide and 1240 to 1270 pixels high. The authors recommend resizing the images to 512×1408 pixels (see http://www.vicos.si/Downloads/KolektorSDD). This dataset was collected using real defective items, but no precise information was given on the collection method (type of camera, environment, lighting, etc.). These parameters could have a significant influence on the quality of the photographs and the results of learning algorithms. For example, Bozic et al. [4] have solved the test set of this dataset.

3.5 KolektorSDD2

KolektorSDD2 is a dataset published in 2021 by Božič et al. [5]. The dataset is constructed using color photographs of defective production items: as such, it is a different dataset from KolektorSDD. The dataset contains 3335 images of approximately 230×630 pixels. Of these images, 356 contain visible defects and 2979 are not defective. The dataset is already split into a training set containing 246 defective images and 2085 non-defective images, and a test set containing 110 defective images and 894 non-defective images. Defects are localized using pixel-wise masks covering the exact shape of the defect. The defects vary in shape, color, size and type (scratches, spots, imperfections, etc.). Again, no information on the precise data collection conditions was given. However, this dataset is much more complex than its predecessor (more types of defects, addition of color channels, more types of objects). To this day, no method is able to successfully solve this dataset.

3.6 MVTec Anomaly Dataset

The MVTec Anomaly Dataset was published by MVTec Software in 2019 [2,3]. This dataset contains 5354 images of different types of objects and textures. The dataset presents multiple categories of objects: carpet, grid, leather, tile, wood, bottle, cable, capsule, hazelnut, metal nut, pill, screw, toothbrush, transistor, zipper. The pictures are color images, with sizes between 700×700 and 1024×1024 pixels. Each category is split into a testing set and a training set, like the DAGM dataset. The number of items in each group is different in each category. In total, there are 3629 training images and 1725 testing images. Moreover, all training images depict non-defective items. This dataset is particularly suitable for training non-supervised algorithms, because of the absence of defective items in the training dataset. It also takes a very different form than the previous datasets, because of the inclusion of non-textured objects such as screws, metal nuts, or even hazelnuts.

3.7 Dataset Comparison

This section collates the characteristics of each dataset and discusses their main advantages and limitations for industrial defect detection. – The DAGM Dataset is one of the most popular datasets available in industrial defect detection. Since 2007, it has been used in dozens of articles [5,17,22]. It contains 16,100 images in 10 classes which vary in terms of texture and defect type. The defects are annotated using bounding boxes, which can make segmentation harder since the bounding boxes contain non-defective areas. The dataset is artificially generated to resemble small metallic surfaces. – The NEU Surface Defect Database is a popular defect dataset [8,9]. It contains 1,800 images, which makes this dataset rather small, but has the advantage of categorizing the different defect types. This dataset, showing real production line protographs, depicts manufactured steel sheets. However, this dataset does not contains defect-free items, and its lack of a default testing/training makes objectively comparing performances on this dataset impossible. – The Severstal Steel Defect Dataset is one of the largest industrial defect detection datasets, with 26,664 images. It has quickly gained popularity since its publication in 2019 [1,5]. It shows large images of steel sheets with large variations in appearance, and classifies its defects into four classes. However, the typology of the defects are not given, which makes the classification task seem arbitrary and difficult to interpret, and might even be the result of annotation errors [5]. Furthermore, this dataset does not contain the ground truths of the testing data, which makes only the training data usable for both training and testing and makes objective performance comparison impossible. – The KolektorSDD dataset is a small dataset of 399 grayscale images showing plastic embeddings of electrical commutators. It was published in 2020 and has since gained some popularity [13,14]. This dataset has mostly been superseded by KolektorSDD2, which is very similar, although KolektorSDD2 does not contain the items of KolektorSDD. – The KolektorSDD2 dataset is a dataset of 3,335 color images showing defective production items. It was published in 2021 but has not yet reached widespread popularity, being used in less than a dozen articles to this day [5,13]. This dataset, although larger than its previous version, is still rather small, and shows very few examples of defective items (356 in both the training and testing sets). Furthermore, the nature of the depicted items is unknown, unlike KolektorSDD.


– The MVTec Anomaly Dataset is a dataset of 5,354 images in 15 classes. This dataset was published in 2019 and its popularity is growing rapidly, both in the industrial defect detection community [15,22] and in the wider computer vision community [20], due to its great diversity of classes. These classes represent varied objects, from carpet textures to photographs of manufactured screws and bolts and even toothbrushes and hazelnuts. Not all classes of objects depict metallic or plastic manufactured items; however, this is the only dataset that proposes photographs of discrete manufactured items as opposed to textured items. These categories represent a small fraction of the dataset, which means that it contains few images coming from industrial visual inspection. Furthermore, this dataset does not contain defective items in its training sets, meaning that only unsupervised training is possible.

Table 2 compares the use-cases of the datasets. This comparison was made by analyzing the properties of the datasets.

Table 2. Comparison of the described datasets: Detection indicates whether the dataset is suitable for defect detection (Y if true, N otherwise); Classification indicates whether it is suitable for defect classification (Y if true, N otherwise); Segmentation describes whether the defect boundaries are localized by a bounding box or a pixel-wise mask; Training defects describes the presence of defects in the training set (Y if present, N if not).

Dataset | Detection | Classification | Segmentation | Training defects
DAGM¹ [21] | Y | N | Bounding box | Y
NEU² [18] | N | Y | Bounding box | Y
Severstal Steel³ [16] | Y | Y | Pixel-wise mask | Y
KolektorSDD⁴ [19] | Y | N | Pixel-wise mask | Y
KolektorSDD2⁵ [5] | Y | N | Pixel-wise mask | Y
MVTec⁶ [3] | Y | Y | Pixel-wise mask | N

¹ https://hci.iwr.uni-heidelberg.de/content/weakly-supervised-learning-industrial-optical-inspection
² http://faculty.neu.edu.cn/songkc/en/zdylm/263265
³ https://www.kaggle.com/c/severstal-steel-defect-detection
⁴ https://www.vicos.si/resources/kolektorsdd/
⁵ https://www.vicos.si/resources/kolektorsdd2/
⁶ https://www.mvtec.com/company/research/datasets/mvtec-ad/

4 Proposed Guidelines and Conclusion

We reviewed the industrial needs for defect detection and defined a number of metrics that are used to evaluate an algorithm, both for segmentation and detection/classification. It is important that these metrics can be compared fairly between different articles when testing against the same benchmark. Therefore, we consider that it is important that a dataset be published with a default training/testing set split, so that tests can be run against the same items and are thus made comparable, and that algorithms be trained and tested using these datasets. Such datasets include DAGM or KolektorSDD.

Furthermore, it is important to consider the precision of the labeling. Labeling can indicate the typology of the defect by categorizing different defect types, and segment the defect using more or less precise indicators, namely bounding boxes and pixel-wise masks. Datasets which use more accurate labeling and display a clear defect typology, such as MVTec or Severstal Steel, with defect classification and pixel-wise segmentation, should be adopted. Real-world production datasets should also be preferred because they can exhibit some variance due to the production setting, such as ambient lighting, temperature, or camera settings. Testing algorithms against these real-world datasets gives results that are likely to better represent the capabilities of the algorithm on a production line, and their robustness to disturbances. Only DAGM is generated artificially.

However, Table 2 shows that no single benchmark, apart from MVTec, which can only be used with non-supervised training, satisfies all desiderata. Therefore, the adequate benchmark must be chosen depending on the task at hand: namely, the classification or segmentation task that we wish to solve, and the supervision level of the algorithm. We advocate the creation of a new benchmark which satisfies all criteria at once, namely the presence of defective and non-defective items in training and testing sets, the typology of defects for defect classification, and the segmentation of the defects using pixel-wise masks to allow training of supervised segmentation algorithms. The MVTec Dataset comes closest to these requirements, but does not contain defective items in the training set.

Acknowledgements. This work was carried out with the support of the Agence Nationale de la Recherche through the TEMIS ANR-20-CE10-0004 project.

References
1. Akhyar, F., Lin, C.Y., Kathiresan, G.S., Surianarayanan, B., Hsu, C.Y.: Detectors++: the robust baseline for a defect detection system. In: IEEE International Conference on Consumer Electronics (2021). https://doi.org/10.1109/ICCETW52618.2021.9603063
2. Bergmann, P., Batzner, K., Fauser, M., Sattlegger, D., Steger, C.: The MVTec anomaly detection dataset: a comprehensive real-world dataset for unsupervised anomaly detection. Int. J. Comput. Vision 129(4), 1038–1059 (2021). https://doi.org/10.1007/s11263-020-01400-4
3. Bergmann, P., Fauser, M., Sattlegger, D., Steger, C.: MVTec AD - a comprehensive real-world dataset for unsupervised anomaly detection. In: Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9584–9592. IEEE (2019). https://doi.org/10.1109/CVPR.2019.00982
4. Božič, J., Tabernik, D., Skočaj, D.: End-to-end training of a two-stage neural network for defect detection. In: International Conference on Pattern Recognition, pp. 5619–5626. IEEE (2021). https://doi.org/10.1109/icpr48806.2021.9412092
5. Božič, J., Tabernik, D., Skočaj, D.: Mixed supervision for surface-defect detection: from weakly to fully supervised learning. Comput. Industry 129, 103459 (2021). https://doi.org/10.1016/j.compind.2021.103459
6. Chen, H., Hu, Q., Zhai, B., Chen, H., Liu, K.: A robust weakly supervised learning of deep Conv-Nets for surface defect inspection. Neural Comput. Appl. 32(15), 11229–11244 (2020). https://doi.org/10.1007/s00521-020-04819-5
7. Chen, Y., Ding, Y., Zhao, F., Zhang, E., Wu, Z., Shao, L.: Surface defect detection methods for industrial products: a review. Appl. Sci. 11(16), 7657 (2021). https://doi.org/10.3390/app11167657
8. Cohn, R., Holm, E.: Unsupervised machine learning via transfer learning and k-means clustering to classify materials image data. Integ. Mater. Manuf. Innov. 10(2), 231–244 (2021). https://doi.org/10.1007/s40192-021-00205-8
9. Dai, W., Erdt, M., Sourin, A.: Self-supervised pairing image clustering for automated quality control. Vis. Comput. 38, 1181–1194 (2021). https://doi.org/10.1007/S00371-021-02137-Y
10. Dekhtiar, J.: Deep learning and unsupervised learning to automate visual inspection in the manufacturing industry. Ph.D. thesis, Université de Technologie de Compiègne (2019). http://www.theses.fr/2019COMP2513
11. Eelbode, T., et al.: Optimization for medical image segmentation: theory and practice when evaluating with Dice score or Jaccard index. IEEE Trans. Med. Imaging 39(11), 3679–3690 (2020). https://doi.org/10.1109/TMI.2020.3002417
12. Gao, Y., Li, X., Wang, X.V., Wang, L., Gao, L.: A review on recent advances in vision-based defect recognition towards industrial intelligence. J. Manuf. Syst. 62, 753–766 (2021). https://doi.org/10.1016/J.JMSY.2021.05.008
13. Lei, L., Sun, S., Zhang, Y., Liu, H., Xu, W.: PSIC-Net: pixel-wise segmentation and image-wise classification network for surface defects. Machines 9(10), 221 (2021). https://doi.org/10.3390/MACHINES9100221
14. Li, Y., Chen, Y., Gu, Y., Ouyang, J., Wang, J., Zeng, N.: A lightweight fully convolutional neural network of high accuracy surface defect detection. In: Farkaš, I., Masulli, P., Wermter, S. (eds.) ICANN 2020. LNCS, vol. 12397, pp. 15–26. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61616-8_2
15. Roth, K., Pemula, L., Zepeda, J., Schölkopf, B., Brox, T., Gehler, P.: Towards total recall in industrial anomaly detection (2021). https://arxiv.org/abs/2106.08265v1
16. Severstal: Severstal: steel defect detection (2019). https://www.kaggle.com/c/severstal-steel-defect-detection/overview/description
17. Siebel, N.T., Sommer, G.: Learning defect classifiers for visual inspection images by neuro-evolution using weakly labelled training data. In: IEEE Congress on Evolutionary Computation, pp. 3925–3931 (2008). https://doi.org/10.1109/CEC.2008.4631331
18. Song, K., Yan, Y.: A noise robust method based on completed local binary patterns for hot-rolled steel strip surface defects. Appl. Surf. Sci. 285, 858–864 (2013). https://doi.org/10.1016/j.apsusc.2013.09.002
19. Tabernik, D., Šela, S., Skvarč, J., Skočaj, D.: Segmentation-based deep-learning approach for surface-defect detection. J. Intell. Manuf. 31(3), 759–776 (2019). https://doi.org/10.1007/s10845-019-01476-x
20. Venkataramanan, S., Peng, K.-C., Singh, R.V., Mahalanobis, A.: Attention guided anomaly localization in images. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12362, pp. 485–503. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58520-4_29
21. Wieler, M., Hahn, T.: Weakly supervised learning for optical inspection. In: 29th Annual Symposium of the German Association for Pattern Recognition (2007)
22. Zavrtanik, V., Kristan, M., Skočaj, D.: DRÆM - a discriminatively trained reconstruction embedding for surface anomaly detection. In: International Conference on Computer Vision, pp. 8330–8339. IEEE (2021)

Education and Representation Techniques

Soft Skills Teaching Proposal for “Designers”

Edoardo Giovanni Maria Rovida1(B), Alberta Gianotti2, and Giulio Zafferri2

1 Politecnico di Milano, via la Masa 1, 20156 Milano, Italy
[email protected]
2 Officina della conoscenza, Milano, Italy

Abstract. The authors are defining and organizing the first specific teaching experience for industrial product designer education, in which the word “designer” strictly refers to the engineering field. The key steps concerning Soft Skills and the said program are listed and briefly commented on in the proposed paper as follows: (i) prerequisites: the initial knowledge of a common designer, in the Soft Skills field, is underscored as the foundation for the described program; (ii) objectives: the standard of knowledge to be achieved by participants is defined; and (iii) concepts: the standard of knowledge, between objectives and prerequisites, is established. It can be analyzed to acquire the concepts, i.e., the parts of knowledge linked to a name and a definition. The list of concepts can be identified with that of the main Soft Skills to be conveyed. The “top” five Soft Skills are: (1) appreciate cooperation and feedback: no design is perfect and improvement is always viable; (2) think with an open mind; (3) develop empathic behavior; (4) be skilled in developing genuine communication; and (5) acquire a problem-solving attitude. The aim of the paper is to encourage engineers to achieve Intellectual Fusion, the convergence of the left hemisphere of the brain (Hard Skills) and the right one (Soft Skills). This may turn engineers into people who can “make a difference”. It is essential to note that Hard Skills can be considered individually and undergo no changes, while Soft Skills must be viewed with a holistic approach, and may vary continuously, consistently with alterations in the person's role.

Keywords: Soft Skill · Teaching · Designer · Difference Maker Personas

1 Introduction

The authors proposed a paper [1] underscoring the important role of Soft Skills in the professional activity of designers. It won the second award as “best paper” in the “Education session”. This paper can be considered a sequel to the previous one. Indeed, it underscores the role of Soft Skills in the designer's activity, and provides some proposals for a Soft Skills teaching experiment for designers. In other words, it presents the project of a teaching experiment for designers. This paper indicates a state of the art after demonstrating the importance of Soft Skills for designers. Subsequently, after formalizing the objectives of a teaching experiment, they are expressed as the designer's professional profile. Once the Soft Skills considered here have been identified, they are related to the designer's behavior. The teaching experiment closes with a workshop where designers are invited to examine how the application of Soft Skills could improve certain cases they encountered. The main contribution of this paper is the presentation of a synthesis of the authors' teaching experiences in the specific area of Soft Skills. Feedback from colleagues interested in the topic will be much appreciated.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1541–1551, 2023. https://doi.org/10.1007/978-3-031-15928-2_134

2 State of the Art

Awareness is rising concerning the importance of using Soft Skills also in scientific-technical professions. An important survey [2] presents significant Soft Skills for general activities. Many books and links address the topic of Soft Skills: the authors have found, e.g., the following [3–6], but very few explore Soft Skills for designers, such as those cited herein [7–9]. Among them, the website [7] presenting and explaining 10 key Soft Skills for designers is particularly interesting. The presentation opens by stating that “design is rarely only about design”. This phrase confirms that the designer's job is not only to design industrial products but also to do many other things, and Soft Skills play an important role in this context. The Soft Skills listed on the above website include:

1. Communicating design to non-designers
2. Empathy
3. Time management
4. Listening
5. Giving and receiving feedback
6. Presentation
7. Teamwork
8. Question everything
9. Flexibility
10. Persistence

The top 5 Soft Skills for a designer listed on the website [8] include:

1. Overcommunication
2. Self-Confidence
3. Question everything
4. Ability to explain your thought process
5. Trust your intuition

The website [9] presents 20 Soft Skills, as the result of a deep survey that involved many teachers in 26 countries. Such Soft Skills are as follows: Communication, Critical thinking, Creativity, Problem solving, Curiosity, Research and exploration, Decision making, Open-Mindedness, Systematic thinking, Empathy, Collaboration, Participation, Flexibility, Learn to learn, Team-Working, Self direction, Ethic, Judgement, Leadership, Entrepreneurship. The experience of the authors relates to introducing some Soft Skills into Engineering programs, expanding on some experiences already carried out:


1. A seminar for Master's students in Mechanical Engineering, started in 2018 and covering some Soft Skills of particular interest in the technological field.
2. Cooperation with the Milan Order of Engineers, consisting of a 2-hour seminar on the development of Soft Skills in particular.
3. A course proposed to the Milan Order of Engineers, with the aim of upgrading “Personal Acumen” thanks to the application of Soft Skills in addition to Hard Skills.

The experiences and proposals of the authors are presented in [10, 11].

3 Purpose of the Proposed Teaching Experiment

The first consideration is that a designer has an excellent preparation in terms of Hard Skills (“esprit de géométrie”, as per Blaise Pascal, 1623), in other words the specific professional skills. However, competence concerning interpersonal relationships (typical of Soft Skills) is generally not as good, if not entirely lacking. In fact, the designer almost never works alone, and this enhances the importance of expertise in the field of Soft Skills. The purpose of the proposed teaching experiment is to fill this gap, precisely to create the new figure of a designer who can “make a difference” by applying all Soft Skills to his professional activity.

4 Prerequisites

The prerequisites considered are the designer's professional profile, as proposed by the authors [6] and expressed as follows:

1. identify the customer's needs in terms of product behavior through its life cycle phases;
2. with regard to the points identified in step 1, formalize the product's general function;
3. analyze the general function with reference to the highlighted needs and, if necessary (particularly, if the general function is characterized by a high level of complexity), specify component functions;
4. provide all necessary information concerning the specified functions, such as forms, dimensions, materials, tolerances; the above functions should be performed, respectively, by subassemblies of the product;
5. provide a congruent synthesis of all the above subassemblies to achieve the complete product;
6. verify that the product is congruent with the needs of the customer.

5 The Soft Skills Considered

The Soft Skills considered are based on the project Lever UP [12], conceived at the Politecnico di Milano Foundation. One of the authors was involved in the initiative. An example of such Soft Skills is given in Table 1.

Table 1. Soft Skills as per project Lever UP of the Politecnico di Milano Foundation.

No. | Soft Skills | Description
1 | Communication | Ability to talk to others and to express myself in a clear and complete way
2 | Result orientation | Ability to act persistently and to perform tasks effectively
3 | Digital Thinking | Ability to use digital media in an active and conscious way
4 | Teamwork | Ability to work actively and to cooperate towards common goals, respecting everyone's roles
5 | Leadership | Ability to lead a group of people, motivating everyone towards common goals and delegating tasks
6 | Planning | Ability to coordinate my work with that of others, using available resources to obtain effective results
7 | Responsibility | Awareness of the consequences of one's actions and the ability to ask for help
8 | Flexibility | Ability to adapt behaviors and actions to the changes that occur
9 | Problem-solving | Ability to identify problems and implement valid solutions
10 | Empathy | Pay attention to the feelings and inclinations of the people I work with
11 | Learning to learn | Ability to learn new things and to cope with the related difficulties
12 | Innovation and creativity | Ability to create new things, thinking outside the box and using specific opportunities

In addition, some other Soft Skills were also considered (Table 2).

Table 2. The additional Soft Skills considered.

No. | Soft Skills | Description
1 | Courtesy | Kindness in dealing with others
2 | Ethics | Behavior towards the concepts of good and evil
3 | Talent Development | A predominantly innate trait corresponding to an inclination to adopt a certain behavior in the face of problems and situations
4 | Human aspects of technology | How to interpret technology in favor of humans and not against them
5 | Awareness | Evaluation of the position between positive and negative behaviors


6 Soft Skills and the Designer's Professional Profile

Details are provided for the Soft Skills listed in Tables 1 and 2, with reference to the designer's professional profile (Table 3).

Table 3. Soft Skills and the designer's professional profile.

Soft Skills | Designer professional profile
Communication | Listen to the customers' needs, ensure they are correctly understood, and transmit them to collaborators, during all phases of the project. Cooperate with those who draw up the instructions for use of the product in order to minimize the risks associated with the use of the product. Furthermore, at any time, when communicating something to somebody, precisely identify the requirements (i.e., the skills) of the person addressed, what increase in knowledge it must achieve through communication and, in this regard, what to communicate (i.e., content) and how to communicate it (i.e., style). Make sure that professional competence is congruent with the tasks, and ensure updates
Courtesy | Behave in such a way with all the people you interact with (i.e., customers, collaborators, colleagues, suppliers) as to avoid what they would not like you to either do or say
Flexibility | Quickly perceive every change (of the market, the company, the design studio where the professional works, the needs of customers, the behavior of colleagues and collaborators) and adapt personal behavior to this
Ethics | Make sure that every statement is true, and that every action is right, improves interpersonal relationships and is an advantage for the people involved
Responsibility | Awareness of one's abilities and limitations. Ability to complete the tasks undertaken. Accountable for one's actions, accepting the consequences of any wrong choices, without blaming others
Teamwork | Ability to set up a design team and guide it, identifying individual talents and assigning congruent tasks to maximize the result
Empathy | Recognize the problems of each person you relate with (i.e., customer, collaborator, supplier), his or her way of seeing things, besides the mentality and, of course, behave accordingly
Leadership | If at the head of a design team, inspire trust in collaborators, clearly identify the objectives, clearly define the procedures, and clearly identify each one's talents. Be skilled in involving good people in a new, valid and ethical project and activities, always adopting critical analysis
Learning to Learn | Ensure continuous learning and stay updated


7 General Configuration of the Teaching Experiment

7.1 Initial Test

Two tests will be initially performed, and later repeated at the end of the teaching experiment for a self-assessment of the results. The first test can be found at the following link: https://www.arealme.com/left-right-brain/it/. The purpose of this test is to self-evaluate which one of the two brain hemispheres is prevalent, whether the right one (emotional side) or the left one (rational side). The second test foresees the analysis of each Soft Skill with the goal of highlighting its constituent concepts. For each concept, one or more questions in relation to personal behavior have been identified. The answers are expressed using an N.P.S. (Net Promoter Score) scale, modified by the authors in relation to the specific requirements. In addition, the answers are processed by a dedicated algorithm. The evaluation results are as follows (a small illustrative sketch of this score-to-profile mapping is given after the list):

1. 0–4: Job holder (Good)
2. Over 4–6: Actual version of yourself (Great)
3. Over 6–8: Unique Person (A puzzle composed of pieces of personality: life experience, knowledge and emotion)
4. Over 8–10: Difference Maker Personas (Attitude as greatest asset)
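For illustration only, the score bands above can be expressed as a simple mapping; the function and its name are ours and are not part of the authors' evaluation algorithm.

```python
def profile(score):
    """Map a 0-10 self-assessment score to the four profiles listed above."""
    if score <= 4:
        return "Job holder"
    if score <= 6:
        return "Actual best version of yourself"
    if score <= 8:
        return "Unique person"
    return "Difference maker personas"
```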

Table 4. Personal branding matrix.

Job holder | Actual best version of yourself
Unique person | Difference maker personas

Job Holder and Actual Best Version of Yourself profiles may be achieved by adopting, in addition to technical knowledge, behavior as per the following skills: 1) appreciate cooperation and feedback: no design is perfect and it can always be improved; 2) think with an open mind; 3) develop empathic behavior; 4) be able to develop real communication; 5) develop a problem-solving attitude. While some of the Soft Skills listed below are most directly related to the designer profession, all are significant for becoming a unique person or a difference maker, according to their different life roles: Communication, Courtesy, Flexibility, Ethics, Responsibility, Teamwork, Empathy, Leadership, Learning to Learn, Problem-solving, Digital Thinking, Planning, Result Orientation, Awareness, Imagination and Creativity, Human Aspects of Technology, Talent Development.

7.2 Preparatory Part

Since Soft Skills are a new topic for most designers, this part of the teaching experiment focuses on encouraging the designers to become familiar with the philosophy which underpins this theme.


The slides created by the authors aim to make designers “breathe” the atmosphere of Soft Skills. The authors deem that such slides must convey this objective starting from the cover (Fig. 1). It is represented here in order to emphasize how “mirror neurons” are one of the dominant motifs of this part of the teaching experiment described herein. The authors believe that the above concept could be very important also, and particularly, for a “Difference Maker” designer. This is especially important both in learning processes and in interpersonal relationships and, above all, in the development of empathy.

Fig. 1. The cover of the series of slides utilized in this part of the teaching experiment. The concept of “mirror neurons” is highlighted starting from the cover (Courtesy of Maestro Luciano Beretta, graphic designer).

Figure 2 presents, as an example, a slide used in this part of the teaching experiment. The title of the slide is “Development of ‘Personal Acumen’”, and the concept developed is that the integration between Intellectual Quotient (IQ) (left side of the brain) and Emotional Quotient (EQ) (right side of the brain) (in which Soft Skills play an essential part) can generate Intellectual Fusion (IF).


Fig. 2. Intellectual fusion as integration between Intellectual Quotient and Emotional Quotient.

The two hemispheres represented graphically express the concept effectively. Figure 3 highlights an impressive phrase by Albert Einstein.

Fig. 3. Compelling statement by Albert Einstein.

Figure 4, on the other hand, shows how, with Intellectual Fusion, a person (and therefore, in this case, a designer) can be a Transition Person capable of transforming Negative Patterns into Positive Patterns.


Fig. 4. Soft Skills can challenge a designer to become a Transition Person by using mirror neurons to transform negative patterns into positive ones.

7.3 Guided Examples

This paragraph applies the Soft Skills presented in Table 3, as an example, to certain professional situations linked to the designer's activity. Individual cases are discussed collectively. The following table provides some examples of guided cases that will be explained during the course.

Soft Skills | Example
Communication | A buys a machine built by B and accuses B of not having built it well, while he is accused by B of not knowing how to use it: B has built the machine well and A is able to use it, but A and B did not communicate well before the purchase. Example of poorly written instructions for use and an example of the same improved

Design office in which the “atmosphere of terror” is in force: arrogant boss, and subordinates terrified at the idea of making a mistake

Flexibility

Design office where the boss says “we have always done this and it has always gone well, so why change with all the risks involved?” Each technical system evolves through 3 stages: start-up, maturity and obsolescence. It is important for the designer to quickly perceive the stage reached A highly professional but not very flexible IT engineer called to give a simple seminar on CAD says, “I have a 700-slide presentation”. When requested to reduce the number, considering the target of the presentation, he replies once again, “I have a 700-slide presentation”. So it was, and the presentation was a disaster

Responsibility

Ability to admit one’s own design mistake and correct it (the best person is one who mistakes the least number of errors and knows how to correct them). Ability to recognize that, at times, it is very hard to get to the truth (continued)

1550

E. G. M. Rovida et al.

(continued) Soft Skills

Example

Teamwork

Design Method Exercises. The teacher proposes an exercise consisting of a diagram, inviting the students to identify one or more constructive solutions. The task is assessed by examining every solution proposed by each student, with an articulated discussion on the pros and cons of each option. All students participate enthusiastically in the discussion

Empathy

In the designer’s conversation with the client, empathy is precious to understand what the client wants, beyond words, and to help him express it

Leadership

The head of a design office issues a directive today, and tomorrow, perhaps after a rant, he establishes the reverse because, in the meantime, he has changed his mind and forgotten what he said before

7.4 Workshops This final part is devoted to the personal activity of participants. After learning the basic philosophy of Soft Skills, how the general definition of Soft Skills can be applied to the designer’s profession, and having followed some commented case studies, participants are invited to consider a case they have encountered in the profession, and how they could improve the solution by applying Soft Skills. The two tests described above are repeated after the workshops.

8 Conclusions Starting from the above proposal, the authors are actively involved in organizing one of the first Italian teaching experiments of “Soft Skills for designers”. The authors thus hope to contribute to the creation of the “Designer Difference Maker”. The results of the first experiment will be presented in another paper. The comparison between the above proposal and the results of the teaching experiment could be interesting for the upgrading process of this activity.

References 1. Rovida, E., Zafferri, G.: Proposal about the introduction of the soft skills in the teaching of product development. In: Cavas-Martínez, F., Eynard, B., Fernández Cañavate, F.J., Fernández-Pacheco, D.G., Morer, P., Nigrelli, V. (eds.) Advances on Mechanics, Design Engineering and Manufacturing II. LNME, pp. 683–695. Springer, Cham (2019). https://doi.org/ 10.1007/978-3-030-12346-8_66 2. Robles, M.M.: Executive perceptions of the top 10 soft skills needed in today’s workplace. Bus. Commun. Q. 75(4), 453–465 (2012) 3. Kahnemann, D.: Thinking Fast and Slow. Penguin Books (2011) 4. Owen, J.: Le qualità dei leader. 65 Soft Skills fondamentali per esercitare la leadership. ROI Edizioni (2018) 5. Vidotto, G., Ghisi, M., Sergotti, M.C.: Allenare le Soft Skills. Teoria e Pratica. Marsilio (2021)

Soft Skills Teaching Proposal for “Designers”

1551

6. https://positivepsychology.com/empathic-listening. Accessed 27 Mar 2022 7. https://uxdesign.cc/10-most-important-soft-skills-for-designers-66d17462d4b4. Accessed 27 Mar 2022 8. https://phase.com/magazine/5-soft-skills-every-product-desinger-should-master/. Accessed 27 Mar 2022 9. De Freitas, A.P.N., Almendra, R.A.: Soft skills in design education, identification, classification and relations: proposal of a concept map. In: Proceedings of the 23rd International Conference on Engineering and Product Design Education (2021) 10. Rovida, E., Zafferri, G.: The importance of Soft Skills in Engineering and Engineering Education. Springer, Cham (2021)https://doi.org/10.1007/978-3-030-77249-9 11. Rovida, E., Zafferri, G.: I progettisti e la creazione di valore nell’impresa 5.0. Il Progettista Industriale (2021) 12. https://www.fondazionepolitecnico.it/progetti/cultura-e-capitale-umano/lever-up/. Accessed 27 Mar 2022

A New Approach to Teach Conceptual Design Integrating Additive Manufacturing Constraints

Robin Kromer1(B) and Elise Gruhier2

1 University of Bordeaux, I2M Bordeaux, 351 cours de la Libération, Talence, France
[email protected]
2 ENSAM, CNRS, I2M Bordeaux, 351 cours de la Libération, Talence, France

Abstract. In the context of Industry 4.0, new processes such as additive manufacturing (AM) have appeared. New approaches to designing parts therefore have to be developed to integrate process constraints. It is very difficult for teachers to effectively guide students during conceptual design for AM, even though various idea-generation techniques and methods are available. AM requires substantial preparation and compromise in the design phases, and designs need to be generated in a digital environment. Among the various steps, part orientation has a critical impact on final part quality. This paper therefore focuses on the conceptual design phase to educate future technicians and engineers in design for additive manufacturing. A pilot study examines how the teacher interacts with students through an active pedagogical tool: students need to think in 3D and create directly in 3D. The proposed course development uses an immersive tool to take process constraints into account, so that students deal with a complete AM process chain. The new approaches are analyzed on the basis of design guidelines for additive manufacturing developed by the students themselves, and the students assessed the opportunities and limits of the product-process relationship. Finally, the success of the new course content and form is reviewed through a student evaluation.

Keywords: Immersive tool · Teaching · Additive manufacturing · Design · Game

1 Introduction

Students' learning needs to endure and to make sense of complexity, now and in the future, so students should go beyond learning facts and skills. Acquiring knowledge and skills is difficult despite the data and information available: it is easy to spot details but hard to see patterns. For students to grasp what Additive Manufacturing (AM) can offer, they need opportunities to step back, to generalize, summarize and draw conclusions by looking at their learning in a holistic way. Factual and conceptual levels of thinking are addressed by intentionally designing active learning about additive manufacturing, so that students can construct understanding and transfer it to the new era of Industry 4.0. Three questions can help students go from surface to deep learning:

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1552–1563, 2023. https://doi.org/10.1007/978-3-031-15928-2_135


• From the facts or skills being taught, which concepts are the students learning?
• Which connections between concepts should students be able to make?
• What opportunities for application and transfer can be designed to help students?

These questions define different levels of conceptual thinking. If students do not have a strong understanding of specific concepts, they will struggle to see patterns and make connections between them. Accordingly, active pedagogical methods and immersive tools are developed here to teach "Design for Additive Manufacturing" [1]. Current courses in engineering design are well adapted to conventional design and manufacturing processes. First, students have to identify customers' needs and define the functional analysis. The system is then divided into smaller parts, called functional elements, which describe what each part needs to achieve. However, the intention of the design or solution is not yet included. Nowadays, with Industry 4.0, new methods and tools to manufacture products, such as additive manufacturing, are used in companies. This process allows the production of complex products and makes it possible to generate meta-structures (lattices or skeletons) directly from functional elements. It opens up new opportunities for innovative and high-performance products, but the technology implies considering new manufacturing constraints as early as possible in the design phase. Moreover, complex geometries can hardly be represented using paper and 2D drawings; 3D sketching and sculpting are required to quickly create and visualize what students have in mind [2]. Today, it is often observed that additive manufacturing offers production engineering many possibilities and degrees of freedom of which design technicians and engineers are not aware. They therefore do not use its full potential, and industrial needs cannot be fulfilled. AM generative methods and the interaction of the individual parts of a system are associated with functionalities, and an in-depth understanding of how the systems and tools work is precisely what is needed in additive manufacturing, so that students can develop the right design approach up to the relevant production task. This paper hence describes a method for teaching design for AM, covering both design and practical use; students need experiments to critically evaluate designs made with these new technologies. The aim is to give students practical design skills and teach them about the options offered by AM at part and product level. First, a literature review about teaching methods and design for AM is presented. Second, the teaching sequences and tools are described. Third, a case study is presented and discussed.

2 Literature Review

2.1 Active Pedagogical Guidelines

Problem-Based Learning (PBL) is a student-driven pedagogy in which students learn about a topic through experience. Complex real-world problems are used, and students solve an open-ended problem related to them [3]. PBL promotes student learning of concepts and principles. The PBL process does not focus on problem solving alone, but allows the active development of other skills and attributes: knowledge acquisition, enhanced group collaboration and communication are skills that can be gained [4]. It was applied in the context of our industry-oriented Design for Additive Manufacturing courses.


Data-Driven Learning (DDL) has been recognized as one of the most important aspects of content and value generation in the 21st century. DDL is a good practice for aligning theories and practices: learners observe patterns, meanings or other aspects through data analysis. DDL changes the learning environment, in which the teacher is no longer the only authoritative owner of knowledge but rather a "consultant". Additive manufacturing belongs to Industry 4.0, and complex data and its analysis can contribute to understanding the geometry/process/material interaction.

2.2 Additive Manufacturing Process

AM technology has changed expert practice, and the knowledge and know-how related to this process are still evolving with it. AM experts use various strategies to design or manufacture parts precisely, but the knowledge of how the process unfolds is not well understood or formalized. Gibson et al. [5] see many steps in the CAD-to-part lifecycle, starting from STL formats up to part removal, clean-up and post-processing. AM is thus a long process: it runs from the design phase (part geometry and optimization) up to the quality control of the final part, and all the steps interact. For instance, support removal, "depowdering" and post-processing are activities that have to be considered because they are time consuming. There is also a preparation step clustered with the design phase, and it needs to be considered as early as possible. That is why a novel approach to aid students during conceptual design has been developed, tested with French engineering students, and is presented in this paper. The objective is to use serious games and immersive tools to increase interaction, immersion and imagination. Three game steps are defined and related to the students' skill level.

2.3 Immersive Tool for Teaching

The three properties of Immersive Tools (IMT) (virtual reality or augmented reality) are expressed by Burdea and Coiffet [6] as the "3 I's": Immersion, Interaction and Imagination. That definition sums up the interest of this technology well. For instance, the user can be fully immersed in a specific environment (as if it existed) without physically creating it. Moreover, the user can directly interact with the environment and can thus detect and solve issues more easily or more quickly. IMT is also quite close to the real world and hence does not constrain human imagination. IMT first appeared in the gaming domain, but is currently used in various domains, such as surgery [7], anatomy [8], music teaching [9] and so on. The technology is also widely used in engineering and training. The main research works focus on the validation of the design. Halabi [10] uses IMT in digital prototyping to evaluate the designs of his students, Abulrub [11] conducts project reviews with IMT, and Castronovo [12] detects mistakes and reviews students' skills in the form of a game in construction projects. Wolfartsberger [13] uses IMT to easily assemble and disassemble parts of a product during a project review, which enables every project stakeholder to participate in the meeting even if they are unfamiliar with CAD software. In the training domain, IMT is integrated into engineering students' courses through projects (e.g. at the Karlsruhe Institute of Technology in Germany). These projects enable them to enhance their IMT knowledge and re-use it on future projects [14]. IMT has many advantages:

• Enhance students' motivation and creativity [6];
• Prepare students for Industry 4.0 through acquaintance with IMT [4];
• Improve communication and interactions within a group of students [13];
• Explain complex, theoretical and hidden concepts [15];
• Train in a real environment at scale one [4].

IMT is also used to produce 3D sketches. Studies have been pursued to evaluate the impact of IMT on the early design stages, and concept generation has been improved with the use of these tools [16]. In sketch-based modelling, Wang et al. [17] have developed an algorithm to reconstruct the surfaces of 3D models, and De Klerk et al. [18] have built an IMT environment to explore and quickly create simplified models at different scales in the early design stages; these authors note that CAD software is unnecessary at this stage, as the required precision is not high. Interactive 3D model reconstruction has also been developed so that designers can evaluate the potential of numerous design variations in a short amount of time [20].

3 DfAM Courses

3.1 Structure

The presented organization has been set up at the University of Bordeaux (France) for third-year bachelor's degree students in Mechanical Design and Production. The objective of the whole course is to teach students the link between product, process and material. Concerning their background, they have solid knowledge of advanced design, such as surface generation and parametric design, and they have followed an introduction to the additive manufacturing process. The development was guided by the needs of Industry 4.0: new technicians will develop new products using advanced manufacturing technologies, so linking design and AM is essential. After the course, students can:

• Specify the capabilities and limitations of AM technologies
• Quantify and select AM technologies for specific design-manufacturing applications
• Define causes of errors and irregularities
• Apply DfAM to an innovative, challenging design and manufacturing application.

These objectives call for synthesis and knowledge of AM. Instructors provide a high-level introduction to AM technologies. A functional classification framework is used to present possibilities and limits based on benchmark analysis [18]. Case studies and commercial applications provide motivation within an industrial context. Different elements are presented:

• Identifying Opportunities: identify AM product development opportunities and customer needs.
• AM Project Planning and Economics: impact of the digital manufacturing paradigm.
• AM Concept Generation: customization, low-volume production, assembly reduction, and complex geometry.


• AM Embodiment Design: structural and topological optimization, plus AM tolerancing considerations for various part features (e.g., through holes, snap-fits, living hinges, etc.)
• AM Detailed Design: common AM build strategies and potential errors caused by part orientation, poor interlayer bonding, and resolution limitations

Then, active pedagogical methods are used in three steps (cf. Fig. 1): Benchmarking Part, Dissection/Selection and Design Problem. The course has been conceived as a learning game to motivate students. The first module (A) confronts students with questions such as: Is support necessary? What should the angle be during printing? What is the best part orientation? What is the impact of these constraints on manufacturing, precision, roughness and part properties? The second module (B) lets students evaluate design rules based on 3D printed parts with functional dimensioning and tolerancing (FD&T); the idea is to highlight cause and effect, linking each requirement to specific manufacturing constraints. Finally, the third module (C) introduces a 3D sketching approach as a competition between teams, in which DfAM is explored by creating a new product that is suitable for AM only.

Fig. 1. Course configuration
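As an illustration of the questions addressed in module A, the following minimal Python sketch checks whether a single facet needs support under a given build direction. The 45° threshold is a commonly quoted rule of thumb and is an assumption here, not a value taken from the course; the facet normals are hypothetical.

```python
import numpy as np

# Assumed rule of thumb: a downward-facing surface within 45 deg of horizontal
# (i.e. whose normal is within 45 deg of straight down) needs support.
OVERHANG_LIMIT_DEG = 45.0

def needs_support(face_normal, build_direction=(0.0, 0.0, 1.0),
                  limit_deg=OVERHANG_LIMIT_DEG):
    """True if the facet points downwards within `limit_deg` of straight down."""
    n = np.asarray(face_normal, dtype=float)
    n = n / np.linalg.norm(n)
    b = np.asarray(build_direction, dtype=float)
    b = b / np.linalg.norm(b)
    # dot(n, -b) is 1 for a facet pointing straight down, 0 for a vertical wall
    return float(np.dot(n, -b)) > np.cos(np.radians(limit_deg))

# Same feature, two candidate part orientations (normals expressed per orientation):
print(needs_support([0.0, 0.0, -1.0]))   # horizontal down-facing facet -> True
print(needs_support([0.0, 1.0, 0.0]))    # same facet once the part is re-oriented -> False
```

Re-running the check with the normal the facet would have after re-orienting the part shows directly how orientation changes the support decision, which is the trade-off module A asks students to reason about.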

3.2 Step A

As a global approach, Design for Manufacturing (DFM) encompasses a set of activities such as the choice of processes, the choice of materials and the evaluation of the manufacturability of a product. This approach is linked to the factors that influence the decision-making process. The paper [19] classified AM benchmarking into three types: (i) geometric, (ii) mechanical and (iii) process benchmarks. A geometric benchmark is used to measure the geometric features of a part (i.e. tolerances, accuracy, repeatability and surface finish); a mechanical benchmark is used to analyze the mechanical properties established with process-related parameters [20]; finally, since AM can lack precision and dimensional accuracy, a process benchmark is also required [21]. Precision and accuracy are critical to the fundamental layering mechanism and require error-correction methods. Different immersive tool examples provide information about the process, such as layer size, hot-end temperature and acceleration-deceleration effects (Fig. 2). This helps to understand cause and effect in the process, associated with the benchmark features.

Fig. 2. Modelling hot-end temperature of FDM process
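As an indication of the kind of process model that can sit behind Fig. 2, a first-order (lumped-capacitance) heating response is one simple possibility; the setpoint, ambient temperature and time constant below are assumed values for illustration only, not parameters from the course material.

```python
import numpy as np

# First-order step response of a hot-end towards its setpoint: all values assumed.
T_ambient = 25.0      # deg C, hypothetical ambient temperature
T_setpoint = 210.0    # deg C, hypothetical extrusion setpoint
tau = 12.0            # s, hypothetical thermal time constant of the hot-end

t = np.linspace(0.0, 60.0, 7)                              # sample times in seconds
T = T_setpoint + (T_ambient - T_setpoint) * np.exp(-t / tau)

for ti, Ti in zip(t, T):
    print(f"t = {ti:4.1f} s  ->  T = {Ti:6.1f} deg C")
```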

The first PBL activity of the course focuses on the capabilities and limitations of AM technologies. Students actively explore AM considerations by measuring features on a benchmark against one of three metrics: resolution, accuracy, or surface finish. A castle inspired by Vauban's architecture is used. Each branch is composed of six zones dedicated to the generation of a basic geometric model and the requirements to test accuracy and repeatability (Fig. 3). The fabricated benchmark artifact is easily measurable using a 3D scanner or a coordinate measuring machine. The designed artifact should, for example, be large enough to test the performance near the edges as well as near the center. A substantial number of features are integrated in the model, including holes, pockets and bosses, and almost all the other features mentioned in the previous section.

Fig. 3. Vauban’s architecture enabling the benchmarking of AM machines and component evaluations
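To suggest how scanner or CMM data from such a benchmark could be reduced to the accuracy and repeatability metrics mentioned above, a minimal sketch follows; the feature names and measured values are hypothetical stand-ins, not results from the course.

```python
import numpy as np

# Hypothetical nominal dimensions (mm) of a few benchmark features.
nominal = {"boss_diameter": 10.00, "pocket_width": 8.00, "hole_diameter": 5.00}

# Hypothetical repeated measurements of the same features across several builds (mm).
measured = {
    "boss_diameter": [10.12, 10.09, 10.15],
    "pocket_width":  [7.86, 7.91, 7.88],
    "hole_diameter": [4.78, 4.81, 4.75],
}

for feature, values in measured.items():
    v = np.array(values)
    accuracy = v.mean() - nominal[feature]   # mean deviation from nominal
    repeatability = v.std(ddof=1)            # spread across repeated builds
    print(f"{feature:15s} accuracy {accuracy:+.3f} mm, repeatability {repeatability:.3f} mm")
```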

The part enables students to observe the effects of potential sources of AM build errors (e.g., printers, materials, layer thickness, etc.) on the chosen FD&T metric. The factors related to the characteristics of the product combine with the possibilities of shaping the product, which together influence manufacturability. Finally, students prepare a "pitch" on four letters with characteristic geometric features to show their knowledge of AM possibilities; the students mark the other teams out of five points.

3.3 Step B

The definition of the shape of the product is closely linked to the choice of AM processes in interaction with the identified material. Thus, many criteria and design rules will influence the functionality and quality of the product. DFAM resides in mastering the simultaneous processing of several interconnected criteria within an integrated design approach. The use of 3D printing technologies requires good design and manufacturing practices that must be assimilated very early, in order to be considered at the product definition stage. The aim is also to minimize the difficulties and the manufacturing costs. The skills acquired through this step are:

• To understand the implications of additive manufacturing processes and materials in the process of defining and designing a product
• To manage several technological criteria simultaneously in order to control the design, quality and manufacturing costs
• To optimize the management of a design project for AM through the DFAM method

At this step, students imagine what can be done but they only have an overview of the problem. An introduction is then proposed about placement on a build platform, and simple shape examples are discussed to prompt interaction and imagination. Immersion is provided by the "architect features" cards: students have cards about causes and need to discuss the consequences in terms of process, cost, properties, etc. This introduction uses immersive tools to obtain a 3D representation of the build platform mechanisms. Students need to have understood the eight rules of Mbow et al. [23]. Afterwards, an activity is proposed to generate data for DDL. The idea is to give the students 3D printed parts made with three technologies (Fused Filament Fabrication, Direct Light Projection and Laser Powder Bed Fusion) with the same build-up orientation. First, they classify defects (observed visually) along the build-up direction; this operation could also be carried out with metrology tools or in-situ diagnostics. Second, they list features, referring to a previous benchmark analysis work (Table 1). Each group analyses three objects and their findings are shared.

Table 1. Pattern development (Douin et al. 2022)

Pattern names: extrusion (orthogonal or swept), variable section volume, hollow volume, shell, horizontal hole, vertical hole, overhang surface, bridge, rib and slot. The complements recorded for the patterns include: with support, without support, right side up, upside down. (In the original table each pattern is also illustrated by a scheme sketch.)
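To illustrate the kind of rule table the students are asked to build from these patterns and the observed defects, a minimal Python sketch is given below; every defect description and numerical limit in it is a hypothetical placeholder, not a value from the course or from the cited benchmark work.

```python
# Hypothetical pattern -> defect -> rule lookup; all limits are assumed placeholders.
AM_RULES = {
    "bridge": {
        "likely_defect": "collapse of the unsupported span",
        "rule": "limit the unsupported span length",
        "limit_mm": {"FFF": 5.0, "DLP": 8.0, "LPBF": 2.0},
    },
    "horizontal hole": {
        "likely_defect": "sagging of the upper half of the bore",
        "rule": "keep the diameter below the self-supporting limit or add support",
        "limit_mm": {"FFF": 8.0, "DLP": 10.0, "LPBF": 6.0},
    },
    "overhang surface": {
        "likely_defect": "dross and poor down-skin surface finish",
        "rule": "keep the overhang angle above the process limit",
        "limit_deg": {"FFF": 45.0, "DLP": 30.0, "LPBF": 45.0},
    },
}

def check_bridge(span_mm: float, process: str) -> bool:
    """Return True if a bridge span respects the (assumed) limit for the process."""
    return span_mm <= AM_RULES["bridge"]["limit_mm"][process]

print(check_bridge(4.0, "FFF"))   # True  -> acceptable without support
print(check_bridge(7.0, "FFF"))   # False -> redesign (e.g. a 'Gothic' arch) or add support
```

Organizing the rules this way also gives the side-by-side process comparison mentioned in the following activity.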

An initial discussion about AM rule development is launched with an example. Students try to generate relationships between a pattern and a defect library, and they define AM rules with a standardization map. The objective is to complete a table of AM rules.


For example, they can locate a defect within two branches with a half-sphere shape; it occurs before the branches merge. They can associate the defect with "bridge". This corresponds to a rule on overlap limits: it illustrates the collapsing of a surface and the idea of optimizing using "Gothic" arches, for instance. Students have data on the capabilities of leading AM process/material combinations for different patterns and organize them into a table that offers side-by-side comparisons of the alternatives. They present three AM rules (group by group) and win a point for each rule that is justified. A discussion is then held with the other groups. Whenever necessary, assistance is provided to refine their understanding using specific parts manufactured with the three technologies. Knowledge of manufacturing processes and materials is applied to identify the most likely material-health and geometry association. Many students find this task more difficult than expected: it requires considerable knowledge of AM processes, multi-scale causes can contribute to specific defects, microstructure or deviations, and sophisticated reasoning about strengths, capabilities and the most likely fabrication path is difficult and needs experience.

3.4 Step C

During the previous steps, attendees are exposed to the factors that affect print quality and economic viability. The theoretical topics covered are ordered to build problem-based exercises. The first part of the curriculum covers some of the theoretical aspects and design rules of printing. Then, students do a design exercise turning a block manifold into an AM specification. The block manifold is a block of metal with holes drilled into it, and its function is to allow hydraulic fluid to go from a source to several destinations. The requirements can concern technical, weight or functional aspects. The students start with an idea and draw it with the 3D sketch tool; the first concept considers only input/output locations and the general space definition. They address the requirements while considering the process constraints. One objective set in the PBL is to minimize weight (Fig. 4). The attendees are first shown what removing unrequired material from the block manifold through a simple 'shell' operation would result in: a 'minimal' set of pipes. Functional surfaces are defined and a global overview of DfAM is discussed. Finally, the objective is to make the manifold as light as possible while keeping it manufacturable with as little post-processing labor as possible. Figure 4 shows an example after use of the shell tool, redesign of the cylinder with fixture supports and overhang analysis. Different tools are proposed and the DfAM steps are guided.

Fig. 4. Analysis of two orientations with support structures
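A minimal numpy sketch of the orientation comparison illustrated in Fig. 4 is given below, assuming facet normals and areas have already been extracted from the STL with any mesh library; the toy normals, areas and the 45° overhang limit are illustrative assumptions, not the course's values.

```python
import numpy as np

def supported_area(normals, areas, build_dir=np.array([0.0, 0.0, 1.0]), limit_deg=45.0):
    """Total facet area whose downward-facing normal exceeds the (assumed) overhang limit."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cos_down = n @ (-build_dir)                 # > 0 for downward-facing facets
    flagged = cos_down > np.cos(np.radians(limit_deg))
    return float(areas[flagged].sum())

def rotate_x(normals, angle_deg):
    """Rotate facet normals about the X axis (candidate re-orientation of the part)."""
    a = np.radians(angle_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(a), -np.sin(a)],
                    [0.0, np.sin(a),  np.cos(a)]])
    return normals @ rot.T

# Hypothetical facets standing in for a real manifold mesh (unit normals, areas in cm^2).
normals = np.array([[0.0, 0.0, -1.0],    # horizontal downward-facing face
                    [0.0, 1.0,  0.0],    # vertical wall
                    [0.0, 0.6, -0.8]])   # steep downward-facing face
areas = np.array([4.0, 2.0, 1.0])

for angle in (0.0, 90.0):
    area = supported_area(rotate_x(normals, angle), areas)
    print(f"rotated {angle:5.1f} deg about X -> support-requiring area {area:.2f}")
```

For this toy data, rotating the part 90° about X removes all support-requiring area, which is exactly the kind of quantitative argument the teams are asked to bring to their pitch.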


The teams then start to finalize their design ideas. The first step is to discuss the print orientation for the manifold based on the functional surfaces and post-machining; they conclude that orientation contributes to both function and operation complexity. Whenever they are unsure, they are encouraged to save their work in STL format and to present their idea, with arguments, to an expert. They can also visualize, with the AM software, the support material and indicators such as proximity or thickness for their current design. This teaches them the importance of being able to switch quickly back and forth between the different tool applications, which is required for the different aspects of AM (digital twin development). They can, for instance, run a sensitivity analysis of thin walls or thin gaps, which lead to poor printing in metallic parts. After that, the part is saved in OBJ format and an IMT is used to evaluate post-processing. This immersive time is useful for imagining the manual operations required and for avoiding zones that are inaccessible for support removal or finishing operations (Fig. 5). Students can also criticize the support structure with respect to the support removal forces that will be applied.

Fig. 5. Thin wall, thin gap and overhang analysis to optimize manufacturability and support removal through IMT integration
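The thin-wall and thin-gap screening mentioned above can be pictured with the following sketch; the minimum printable values per technology are assumptions for illustration only, since real limits depend on machine, material and parameter set, and the feature names are hypothetical.

```python
# Assumed minimum printable feature sizes (mm) per technology, for illustration only.
MIN_FEATURE_MM = {
    "FFF":  {"wall": 0.8, "gap": 0.5},
    "DLP":  {"wall": 0.4, "gap": 0.3},
    "LPBF": {"wall": 0.4, "gap": 0.3},
}

def screen_features(features, process):
    """Flag walls and gaps below the (assumed) printable minimum for a process."""
    limits = MIN_FEATURE_MM[process]
    return [(name, kind, size) for name, kind, size in features if size < limits[kind]]

# Hypothetical features measured on the manifold redesign: (name, kind, size in mm).
manifold_features = [("outer shell", "wall", 1.2),
                     ("pipe junction web", "wall", 0.3),
                     ("support clearance", "gap", 0.2)]

for name, kind, size in screen_features(manifold_features, "LPBF"):
    print(f"{name}: {kind} of {size} mm is below the assumed LPBF minimum")
```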

Finally, each team pitches its concept and is evaluated by the other teams. The idea is that students must be critical about a concept by arguing the advantages and disadvantages of other ideas, and each team then needs to propose a solution that takes the comments into account. The design strategies are surprisingly varied, but they are mostly successful. There is not a single correct way of designing for AM, but all the teams can justify their choices with quantitative information and iteration. There are many different solutions, each of which has different implications for the quality and function of the part. Once the attendees have finished their designs, they are purposefully not corrected, as the remaining faults will help to promote further learning. The learning experience is driven, for instance, by the difficulty of removing support material: this truly is an eye-opener, as students immediately grasp why checking post-AM operations is so important the moment they attempt to remove supports. Geometric deviations or defects can also be observed. Supports can be very hard to remove and therefore add cost implications to the part. This hands-on experience is probably the element that conveys the usefulness of DfAM most significantly.

4 Difficulties and Validation of the Proposed Approach

The change in the designs that novices produced from step A to step B is drastic. Theoretical knowledge delivered through an active pedagogical strategy was worthwhile. However,


this suggests extending the development to specific AM processes. A significant decrease in poor designs was noticed, and the presentation of new concepts seems beneficial. One key aspect of this study was that the designs were evaluated and criticized by the students themselves, which effectively forced novices to evaluate their designs and iterate when they were not good enough. In the second validation step, no differences were seen between the group working from the data study and the group that did not manage to master the AM rules. Students described this result as 'time consuming': they admit that formalizing rules is hard and that they prefer transmissive interaction in this case. Nevertheless, very interesting questions and remarks emerged in this session; time for note taking is required, or a feedback document should be proposed. The extracted information also highlights the need for a "re-design" of the first attempt based on AM experience. As presented by Carfagni et al. (2020), the actual role of prototypes in design processes is very important. Step C enabled the students to work with DfAM thinking in order to better exploit the potential of AM. However, the problem appears to be quite complex because the students had no experience with the different technologies, and additional data are needed to comprehensively develop successful academic programs. All students wanted to experiment with the process; part examples were not enough, and they mixed their design method with conventional tools. Accordingly, besides the need for additional information for AM purposes, the experiment highlights that results from different technological backgrounds can be very different or even conflicting. Students do not design well if they have to use an unfamiliar technology, and they have difficulty obtaining a general overview. Finally, the challenge of step C changed everything: the possibility of checking their designs and 3D visualization helped them, for the most part, to find ways towards manufacturability. This leads us to believe that students can propose almost optimal designs, although references for post-processing or residual stress evaluation should be included. They gave good feedback on having 3D models with simple operations to generate functional surfaces and support structures. The teacher should be aware that starting from scratch is almost impossible: a guide is necessary, and CAD skills should not be a prerequisite.

5 Conclusion and Perspectives

This paper has presented a novel approach to integrating additive manufacturing constraints into the conceptual design phase. Game approaches enable interaction and teamwork, and the objective was to raise cross-disciplinary awareness around additive manufacturing. Design driven by manufacturing factors simultaneously considers design goals and manufacturing constraints to identify manufacturing issues and facilitate their resolution during design. During the course, from concept and design through data preparation and post-processing using IMT, the students experienced the whole AM process chain. The systematic approach of the course led the students through the team project, where they received expert feedback and real-life experience from various prototypes. By applying PBL and DDL, students successfully developed skills that are important for engineers, especially in the field of product development. The methods can be applied to different student cohorts and adapted to skill levels. They have been


tested for two years now and are still being developed. The key idea was to place the right material at the right place for the best reasons, and the students got it.

References

1. Yavari, M.R., Cole, K.D., Rao, P.K.: Design rules for additive manufacturing: understanding the fundamental thermal phenomena to reduce scrap. Mechanical & Materials Engineering Faculty Publications 416 (2019)
2. De Klerk, R., Mendes, D.A., Pires, M.D., Pinto, D.J., Jorge, J., Simoes, L.D.: Usability studies on building early stage architectural models in virtual reality. Autom. Constr. 103, 104–116 (2019)
3. Aalborg PBL. Aalborg Universitet (2015)
4. Pérez, L., Diez, E., Usamentiaga, R., Garcia, D.F.: Industrial robot control and operator training using virtual reality interfaces. Comput. Ind. 109, 114–120 (2019)
5. Gibson, I., Rosen, D.W., Stucker, B.: Design for additive manufacturing. In: Additive Manufacturing Technologies (2015). https://doi.org/10.1007/978-1-4939-2113-3_17
6. Burdea, G.C., Coiffet, P.: Virtual Reality Technology, 2nd edn., 464 pages. John Wiley & Sons (2017)
7. Moro, C., Stromberga, Z., Raikos, A., Stirling, A.: The effectiveness of virtual and augmented reality in health sciences and medical anatomy. Anat. Sci. Educ. 10(6), 549–559 (2017)
8. Mathur, A.S.: Low cost virtual reality for medical training. In: IEEE Virtual Reality, France, 23–27 March (2015)
9. Innocenti, E.D., et al.: Mobile virtual reality for musical genre learning in primary education. Comput. Educ. 139, 102–117 (2019)
10. Aebersold, M.: Simulation-based learning: no longer a novelty in undergraduate education. Online J. Issues Nurs. 23(2), 1 (2018)
11. Halabi, O.: Immersive virtual reality to enforce teaching in engineering education. Multimedia Tools Appl. 79(3–4), 2987–3004 (2019). https://doi.org/10.1007/s11042-019-08214-8
12. Abulrub, A.G., Attridge, A., Williams, M.A.: Virtual reality in engineering education: the future of creative learning. In: International Conference IEEE EDUCON, Princess Sumaya University of Technology, Amman, Jordan (2011)
13. Castronovo, F., et al.: Design and development of a virtual reality educational game for architectural and construction reviews. In: 126th American Society for Engineering Education Annual Conference and Exposition, Florida, 15–19 June (2019)
14. Carfagni, M., Fiorineschi, L., Furferi, R., Governi, L., Rotini, F.: Usefulness of prototypes in conceptual design: students' view. Int. J. Interact. Des. Manuf. 14(4), 1305–1319 (2020). ISSN 1955-2513
15. Rieuf, V., Bouchard, C., Meyrueis, V., Omhover, J.-F.: Emotional activity in early immersive design: sketches and moodboards in virtual reality. Des. Stud. 48, 43–75 (2017)
16. Wolfartsberger, J.: Analyzing the potential of virtual reality for engineering design review. Autom. Constr. 104, 27–37 (2019)
17. Wang, K., Zheng, J., Soon, S.H.: Progressive sketching with instant previewing. Comput. Graph. 81, 9–19 (2019)
18. Douin, C., Gruhier, E., Kromer, R., Christmann, O., Perry, N.: A method for design for additive manufacturing rules formulation through spatio-temporal process discretization. In: 32nd CIRP Design Conference. Elsevier (2022)
19. Rebaioli, Fassi: A review on benchmark artifacts for evaluating the geometrical performance of additive manufacturing processes. Int. J. Adv. Manuf. Technol. 93, 2571–2598 (2017)


20. Wong, Y.S., Fuh, Y.H., Loh, H.T., Mahesh, M.: Rapid prototyping and manufacturing benchmarking. In: Software Solutions for RP, 1st edn., pp. 57–94 (2002)
21. Cajal, C., Santolaria, J., Velazquez, J., Aguado, S., Albajez, J.: Volumetric error compensation technique for 3D printers. Proc. Eng. 63, 642–649 (2013)
22. Mbow, M.M., Vignat, F., Marin, P., Perry, N.: 16e Colloque National S-mart, Les Karellis, France (2019)
23. Häfner, P., Häfner, V., Ovtcharova, J.: Teaching methodology for virtual reality practical course in engineering education. Proc. Comput. Sci. 25, 251–260 (2013)

Project and Problem Based Learning for Students and Teachers Involvement in a Multidisciplinary Engineering Project

Camille Pédarriosse(B), Lionel Roucoules, Florian Huet, Esma Yahia, Arnaud Polette, and Mathias Kleiner

Arts et Métiers Institute of Technology, HESAM Université, LISPEN, 13617 Aix-en-Provence, France
[email protected]

Abstract. The aim of this work is to analyze the contributions and points of vigilance of the implementation of multidisciplinary project-based learning (PBL) at the Arts et Métiers engineering school. The choice of the PBL method stemmed from a desire to 1) recreate dialogue and collaborative work between teachers, and 2) make students eager to develop personally and to progress in their learning. Its implementation has shown the importance of organizing moments of exchange between teachers and of building a common vocabulary in a pedagogical project involving several disciplines. Without being a solution to all the teachers' problems, the implementation of PBL has allowed the authors to work in a very concrete way on writing lesson sequences jointly. The current goal of the pedagogical teams is to continue this work in an effort of continuous improvement by ensuring moments of regular dialogue between teachers and field surveys with students. This article therefore presents the context and the theoretical paths followed, which led us to choose the implementation of project-based learning (I. Context and Problem, II. State of the Art). It also presents the whole phase of choosing the subject, accompanying the teachers, scripting and implementing the project (III. The PBL Method and Choice, IV. Implementation). The first results and improvements appear next (V. First Results) and allow us to present the contributions of this experience. Finally, part VI presents conclusions and future work.

Keywords: Motivation · Project-based learning · Multidisciplinary learning

1 Context and Problematic 1.1 General Context Arts et Métiers is one of France’s oldest engineering schools. The main curriculum is based on three-year specializing in mechanical, industrial and energy engineering. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1564–1575, 2023. https://doi.org/10.1007/978-3-031-15928-2_136


The pedagogical experience explained below concerns the first and second years of the curriculum. The school welcomes about 150 students each year, and the permanent teaching staff is composed of 40 teachers. The collection of qualitative data, as well as the first conclusions and the roadmap for improvement, are mainly focused on a second-year group of students who followed the energy engineering course, and especially mechatronic prototyping and decision making.

1.2 The Issues of Student Engagement

The lack of engagement of students in their learning is a recurrent point of discussion among teachers. This lack of engagement seems to be multifactorial, based on the profile of our students, on the admission process at Arts et Métiers and on a necessary reflection on the way students understand learning.

The New Generation Z Profile of Learners. Generation Z, without being strictly identified temporally, is the name given to people born after 1993. According to many studies [1, 2] on this generation of digital natives, they have the ability to:

– Navigate between the real and virtual worlds without this border being a problem
– Quickly search, find and share information
– Create content, especially on the Internet
– Multitask, which can have positive effects but negatively impacts their ability to concentrate.

For these students, doing is more important than theoretical knowing. Learning can take place in any situation (in and out of school). They also have a vision of co-construction of knowledge and skills and a natural impulse to collaborate [2]. This generation is looking for feedback in the context of work and studies, a need that can be explained by the use of communication tools and networks on which feedback is permanent [1]. This leads to a gap between what students expect from their studies and the traditional way of transmitting knowledge. Students operate in a logic of content sharing, group work and permanent exchange. Indeed, information (including teachers' lessons) is accessible everywhere, at any time, and students no longer feel the need to attend lessons. Accessing information is not learning, so the question of the value of teaching is raised: how can value be given back to learning in the eyes and minds of students?

A Regulated and Conventional Admission Process. Ninety percent of the students who pass the entrance competition to the school come out of a two-year post-baccalaureate program that is very demanding in terms of the quantity and pace of work. During these two years, the work required is very closely supervised. Preparing for the competitive examination also requires significant revision of 'theoretical' learning outcomes. The teaching scenarios, when they exist, are there to tell a story and not to highlight transversal skills on a real industrial engineering problem. So far, students have been in a 'scholar' context: the rhythm (i.e. the agenda) is controlled by the


schedule of the educational sessions, which are, moreover, classic (lessons, exercises, practical work). As soon as students are admitted to Arts et Métiers, they are placed in an environment that the authors qualify as 'professional'. The objective is to train students to address industrial issues, supported by both industrial projects and fundamental skills.

Synthesis. Even though most of our students match the characteristics of Generation Z, they represent a public with varied expectations and work habits: from the very academic students who need strong supervision, to students who expect more autonomy [3, 4]. The challenge of our pedagogy is therefore to change the context by creating a progressive overlap to accompany the students and to remove the confusion between the 'scholar' and 'professional' contexts. The learning evolutions proposed in this paper aim at changing the following observations:

– Students seem to be working towards a diploma and not towards the acquisition of the knowledge necessary for a good engineer ('scholar' context).
– Evaluation as practiced today does not contribute to a better understanding and acquisition of learning, but looks like a meaningless race for points ('scholar' context).
– Students seek autonomy in their work but are not very well organized when it is given (failure to integrate a 'professional' context).
– The most theoretical lessons would benefit from being contextualized (overlap between 'scholar' and 'professional' contexts).
– The links between the disciplines are not visible to the students, which degrades the coherence of their studies (need for a 'professional' context).

1.3 The Issues of Teacher Engagement

The Vicious Circle of the Carrot and the Stick. The teachers observe that many students work at minimax [5], doing a minimum of work for a maximum of points. This dynamic is indicative of a loss of epistemological consistency in teaching [6]. The implicit contract established here between students and teachers consists of limiting the student's work and reflection to giving an academic answer; students therefore do not engage in any work of appropriation or personal reflection. This lack of student engagement leads to a lack of motivation on the teachers' side, which impacts teaching, evaluation and formative feedback practices. This paper proposes a possible way to get out of the vicious circle of the carrot and the stick [7] and give meaning back to learning.

Disciplinary Silos and Lack of Dialogue Among Teachers. So far, teaching practices have mainly been individual. The authors wanted to implement a pedagogical method that would bring a maximum number of teachers to work together and exchange ideas among disciplines. The search for a collaborative project had the objective of creating a dynamic of work on a common subject. The authors were also looking for a method that would integrate a competency-based approach, and wanted the teachers to find value and meaning in their teaching activities.


The work carried out in this paper explains why the authors chose the project-based learning (PBL) approach and how this pedagogical method answered our two questions: 1) How to engage our students in their learning? 2) How to create a dialogue between teachers of different disciplines to give meaning to their teaching and evaluation work?

2 State of the Art Related to Student and Teacher Issues

2.1 Theoretical Studies on Students' Engagement

In order to counter this unproductive dynamic and to give meaning to learning, it is necessary to understand the motivational dynamics of students [8]. This motivational dynamic, which engages students in their learning, is made up of three elements: the perceived value of the proposed activity, the student's perception of his/her competence and the feeling of control over the learning process [9, 10].

Fig. 1. Motivational dynamic model [9].

It is indeed possible to give choices to students (choice of subject, resources, classroom or distance learning, group or individual work) to reinforce the perception of controllability. The perception of competence is partly based on the teacher's choice of the right level of difficulty for a subject or activity; it also depends on the form of feedback and evaluation proposed. However, it is much more complex to work on the perception of value, which requires an understanding of the expectations of the students. If we refer to the studies conducted on the profiles of Generation Z learners, we can identify four points that are sought after in their learning:

– Practical learning, short lectures and regular feedback [10]
– Collaborative activities [7] and openness to external actors or missions
– Regular use of new technologies [7]
– A need to learn how to learn [2]

The notions of epistemological consistency [6] and enaction [11] also provide key elements to guide pedagogical practices and feed the motivational dynamic:

– Taking into account the students' previous knowledge and targeting the points of difficulty of the task to come (perception of competence) [6, 7, 12].
– Targeting the essential points of the discipline that must be transmitted, which the authors can call content mourning (perception of value) [6].
– Allowing and valuing error, which implies regular feedback on the student's work (perception of value and competence) [6].
– Proposing authentic situations, close to professional situations (perception of value) [6, 11].


2.2 Discussion of the State of the Art: The Choice of the Learning Method

Our methodological choice was problem-based and project-based learning (PBL) as described and taught by the Catholic University of Louvain [13]. The authors indeed hypothesized that this method would make it possible to answer the questions identified in Sects. 1.2 and 1.3 [14], in particular through:

– A problem situation proposed to the students which contextualizes the lessons [14]
– The change in posture of the teacher, who becomes a tutor, and of the students, who necessarily become active learners [4, 15]
– Autonomy given to the student and scripted during sessions that alternate between tutoring and group work [16]
– A change of vision and practice of evaluation, which becomes largely formative and grants the student the right to error and continuous improvement in his work [17]
– Scripting work between teachers that creates a link between the disciplines

Figure 2 presents the concepts of the proposal and the links between the initial issues (cf. Sects. 1.2 and 1.3) and the PBL approach.

Fig. 2. Main concepts and relations with the issues and the state of the art.

3 The Project and Problem-Based Approach: Implementation

3.1 Implementation of Concepts for the Teaching Practice

Build a Common Vision and Vocabulary: Teaching the Teachers. Regular meetings were held between January and September 2021. The authors wanted to build a teaching team willing to work on scripting class sessions that would operate on a project-based approach. In parallel with this communication work, a number of training sessions were offered throughout the spring 2021 semester. These trainings contributed to the development of a common vision of the project and to the use of the same vocabulary.

Selection of the Engineering Case Study. In March 2021, a small team of teachers and a pedagogical engineer was formed in order


to give momentum and to support the choice of a multidisciplinary pedagogical subject [7]. The majority of the teachers chose to work on the theme of the wind turbine. This topical subject allowed us to bring together the expertise in electronics, computing and mechanics present on the campus through a concrete and authentic situation related to a current theme [16].

Scripting the Engineering Situation, Problem and Project. After the choice of the subject, the authors started to work on the script. Each teacher wishing to participate in the project was asked to propose a 'brick' to be integrated into the script. Each brick was itself made up of the name of the discipline concerned, the targeted hourly volume, the theoretical and/or practical content and the targeted learning outcomes.

Fig. 3. Wind turbine project schedule
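Purely as an illustration of how such a 'brick' could be recorded, the sketch below uses a small Python dataclass; the field names follow the description above, while the example values are hypothetical and not taken from the Eolis project.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Brick:
    """One teacher's contribution to the project script."""
    discipline: str
    hours: float                              # targeted hourly volume
    content: str                              # theoretical and/or practical content
    learning_outcomes: List[str] = field(default_factory=list)

# Hypothetical example brick for illustration only.
brick = Brick(
    discipline="Embedded computing",
    hours=8.0,
    content="Acquisition of wind-turbine sensor data on an Arduino board",
    learning_outcomes=["Read an analog sensor", "Log data for later analysis"],
)
print(f"{brick.discipline}: {brick.hours} h, {len(brick.learning_outcomes)} outcomes")
```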

For the second-year students, the project comprised sixty-five hours on electrical and electronic subjects and ten hours on the decision-making process. This co-scripting work made it possible to establish a dialogue between fifteen teachers. From June to September, the pedagogical resources were created: the scenario, the follow-up documents, the evaluation criterion grids and the chronograms were compiled in the student booklets. All these documents were given to the students during the first sessions of the project.

4 Scenario Execution

The project, named Eolis, was presented to the students at the beginning of September 2021. These presentations allowed us to explain the issues that triggered the project, the philosophy and functioning of project-based learning, the different basic documents of the distributed booklet as well as all the members of the teaching team involved. The topic was presented in the form of a newspaper article about a wind turbine crash and the expectation of a report from experts (the students) on this disaster (cf. Fig. 4). During the first class, students were asked to form groups of six. Each of these groups followed the sequence of lectures shown in Fig. 3. In order for students to be able to work independently, each entry into a new discipline began with the delivery of a booklet in which the students found the sub-script of the discipline and the chronogram, with the following information (Fig. 5):

– Deliverables expected in each session
– Associated resources
– Learning outcomes related to each of the deliverables
– Criterion grid for each of the deliverables

Fig. 4. Wind turbine dislocation article as presented to the students

Each deliverable of a discipline has been designed to become an input to the next one. Teachers intervened as tutors to guide students in their reflections without giving answers. Students could use a logbook shared with the teachers to provide evidence of their research, ask questions, or request expert intervention on particular points.

Fig. 5. Example of a part of a chronogram as presented in the booklet


5 Results: First Qualitative Analysis

5.1 Students Survey

Content and Organization of the Survey. A first survey was conducted in the middle of the project and a second at the end of the semester. All of the data presented are qualitative and were collected through interviews with all twelve groups of six students working on the project. The objective of the questions asked was twofold: on the one hand, to note the evidence of the students' commitment to their work and, on the other hand, to target the points of improvement in the project's scenarization. As shown in Fig. 1, the motivational dynamic is mainly based on three perceptions: value, competence and controllability. We directed our questions to gather evidence of improvement on these three points through the implementation of PBL.

Questions asked in the first session (week 43, Fig. 3), with their objectives:
– Were the documents provided to you and the objectives clear? (Proof of coherence and clarity of the project: value)
– How do you feel about working in a group? (Positive or negative effect of working together: value)
– About working in a group: do you manage to share work, information and knowledge? (Sense of ability to work in a group: competence)
– Does the logbook help you in your work? (Clarity in the use of the tool and assistance provided: value and competence)
– Do the interventions of your tutors (feedback, provision of resources, expert interventions) suit you? (Competence and controllability)
– Does this project allow you to make connections between disciplines? (Coherence of the project and association with a complex situation: value)
– Negative points in general (Open question: value, controllability, competence)
– Positive points in general (Open question: value, controllability, competence)

Questions asked in the second session (week 50, Fig. 3), with their objectives:
– Do you find coherence between the different disciplines you have crossed in the quantity, quality and form of the resources provided, the autonomy given in the work, and the tutors' interventions? (Proof of coherence and clarity of the project: value)
– Does the logbook help you in your work? (Clarity in the use of the tool and assistance provided: value and competence)
– Does this project still allow you to make connections between disciplines? (Coherence of the project and association with a complex situation: value)
– Negative points in general (Open question: value, controllability, competence)
– Positive points in general (Open question: value, controllability, competence)


Results of the Survey. From our surveys, it appears that students appreciated the choice made by the teachers to work on the subject of the wind turbine. The subject seemed current and concrete to them. The wind turbine allowed them to make the link between the different disciplines and gave them the feeling of working on a "complete" project.

"It's transversal, it gives the feeling of doing a real engineer's job by working on the same project, we can see that we can cross many subjects"

"We learn more and it is more interesting. It's a understandable and stimulating topic. The teachers are always available and the subject of the project is nice and relevant"

While the sequence and coherence of the sessions appeared clear to them, they underlined a lack of vision of the project's purpose. This point of communication on our expectations needs to be clarified:

"The organization is a bit strange: we sometimes feel lost in the work and we don't have a vision of the project's purpose because there doesn't seem to be a common goal between the different parts apart from the subject of the wind turbine."

Working in groups and sharing information is a stimulating element of the project. Students appreciated being able to call on the teacher's help and to communicate between groups. They felt able to divide the tasks between themselves, but most of the students found that the groups of six students were too large for the work required. The autonomy given to them in their work was a very positive point, often underlined. The working method seemed to them a good balance between a project carried out in total autonomy and a directed exercise (DE).

"Group work is constructive and it allows autonomy, it is close to a real situation"

"We are used to working in groups. Depending on the session, sometimes the group is too large. Until then, a group of 4 would be more relevant and it would be easier to divide the work."

"We're fairly supervised without being on our own. It's a good mix between project and DE. We can also help each other between groups. The in-house group also helps because they all have different backgrounds and knowledge."

The quantity and quality of the resources provided and of the tutors' interventions over the whole project were considered sufficient by the students. The feeling of not knowing exactly how to progress in their work, especially at the beginning of the project, and of losing time in their research, disturbed some groups. The collective feedback at the end of the session proposed by some tutors was very much appreciated.

"We felt like we were let loose in nature. We understand the formative side of this approach but we sometimes lose a lot of time."


However, all the students pointed out that the work required on the Arduino boards caused them problems. This subject was new to them and did not push them to be proactive in their research or to formulate their questions through the logbook or orally. It seems that the encounter with new knowledge content requires more support than the tutored sessions provide.

"In the course on embedded computing, we did not progress because we were missing information on the arduino, we were stuck on the code. As a result, we had the impression of not doing much during the session. We need more support on the coding part because we lacked the basics."

"The method is interesting, we like it more than classical courses but be careful because we lack basic knowledge on the arduino. We are not used to work like that."

The usefulness of the logbook was not understood. The students did not see the interest of leaving traces that would allow them to understand their way of thinking, the problems they encountered and the solutions they built. Giving form and value to the whole process of reflection is not a habit for our students [18].

"We haven't used it at all: we need to clarify its purpose. At the end of the session we don't usually take the time for these little tasks and when things are clear to us we don't feel the need to keep a written record."

This survey allows us to confirm that the project was appreciated by the students and that this form of teaching corresponds to their expectations. The interactions between students and with the tutors were perceived as constructive [18]. The multidisciplinarity, as well as the choice of the project's subject, seems to have a strong impact on the perception of the value of the activity. We also noted that the students, while they are looking for autonomy, appreciate the fact that their work is punctuated by regular assessments and frequent feedback [18].

5.2 Results of PBL Implementation for Teachers

The fact that PBL requires a lot of scripting work upstream leads to a real dialogue between teachers, particularly on the coherence of the sequence of their content, their posture and their evaluation methods. The implementation of interdisciplinary project-based learning has thus made it possible to:

– Decompartmentalize the work of scriptwriting
– Discuss the posture of the teacher and the autonomy of the student
– Enter into a process of continuous improvement through moments of collective dialogue
– Integrate two new teachers into the scenario for the second semester


The conclusions at the end of the first semester are positive. The posture of tutor is well appreciated by the teachers; it was a role already adopted by some of them, which they were able to take up by being less intrusive in the work of the groups. The teachers were also able to observe a higher motivation, on average, among the students due to the autonomy and confidence given to them. While some parts of the project (especially the Arduino) led to frustration, they did not lead to boredom. Working on models was much appreciated by the students and gave clear objectives through manipulation. However, the teachers underline the lack of proactivity of the students. Some areas of concern were identified, notably the communication of the overall context of the project, which sometimes left the students lost. A change in the size of the groups, from six to four students, was decided for the second semester to ensure better individual commitment. Work to reinforce the coherence between the control-system/on-board-computing part and the aero/dimensionless-analysis part must also be considered so that the project appears coherent in the eyes of the students. Finally, a change of scenario was made in order to support the students in the Arduino part of the course.

6 Conclusion and Roadmap for Improvement in Future Work

Engagement and Collaborative Work of the Teachers
Project-based learning has allowed us to establish this dialogue and to engage fifteen teachers in writing a common project. PBL is not a solution to be generalized, but it is an efficient methodology for group work because of the substantial scripting effort it requires. This project showed us that a very strong involvement of the teachers is necessary to ensure coherence between the course contents. Indeed, the students immediately felt the weak points of our scripting, which affected the coherence and the perceived value of the lessons. It is interesting to note that the courses included in the PBL, when led by teachers strongly involved in the scripting, make it possible to move beyond a fixed and static vision of teaching. The dialogue involved in scriptwriting and the consideration of student surveys have allowed teachers to enter into a more creative process of continuous improvement of the content and form of their courses.

Engagement and Perception of Value of the Students
The survey conducted with our students shows that the importance given to interactions, group work, autonomous research and experimentation through hands-on manipulation has helped to make the different sessions more challenging.

Work Projection for the Beginning of the Fall Semester in September 2022
Currently, two new teachers are interested in joining our exchanges to co-script their courses with other teachers already involved in the PBL. The authors will also continue the student surveys throughout the 2022 spring semester in order to verify whether the modifications made allow the students to progress better in the project. Finally, the authors are working on implementing clearer communication of the learning objectives. The goal is for students to understand the link between disciplines and to perceive their training as a coherent and systemic approach, as in a ‘professional’ context. The objective is not necessarily to generalize the implementation of project-based learning. This method must remain a way to conduct a constructive dialogue between teachers and to give epistemological consistency to the teaching.

References
1. Dolot, A.: The characteristics of Generation Z. E-Mentor 74, 44–50 (2018). https://doi.org/10.15219/em74.1351
2. Jayashree, M., Anil, K.B., Ramya, S.: Are Gen Z values the new disruptor for future educational institutions? J. High. Educ. Theor. Pract. 21(12), 1025–123 (2021)
3. https://ttuu.itu.dk/
4. Torrijo, F.J., Garzón-Roca, J., Cobos, G., Eguibar, M.Á.: Combining project based learning and cooperative learning strategies in a geotechnical engineering course. Educ. Sci. 11, 467 (2021)
5. Viau, R., Louis, R.: Vers une meilleure compréhension de la dynamique motivationnelle des étudiants en contexte scolaire. Can. J. Educ. 22, 144–157 (1997)
6. Pigeonnat, Y., Legrand, M.: Consistance épistémologique, saveur et enseignement… In: QPES (2021)
7. Williams, M.K.: John Dewey in the 21st century. J. Inq. Action Educ. 9(1), 91–102 (2017)
8. Viau, R.: La motivation dans l’apprentissage du français. Éditions du Renouveau Pédagogique, Saint-Laurent, Québec (1999)
9. Viau, R.: La motivation en contexte scolaire. Editions De Boeck, Bruxelles (1998)
10. Csíkszentmihályi, M.: Finding Flow: The Psychology of Engagement With Everyday Life. Basic Books (1998)
11. Varela, F.J.: Ethical Know-How: Action, Wisdom, and Cognition. Stanford University Press (1999)
12. Piaget, J.: La Naissance de l’intelligence chez l’enfant. Delachaux & Niestlé (1977)
13. Raucent, B., Milgrom, E., Bourret, B., Hernandez, A., Romano, C.: Guide pratique pour une pédagogie active: les APP (Apprentissages par Problèmes et par Projets). INSA Toulouse & Ecole Polytechnique de Louvain (2011)
14. Alves, A.C., et al.: Teacher’s experiences in PBL: implications for practice. Eur. J. Eng. Educ. 41, 123–141 (2015). https://doi.org/10.1080/03043797.2015.1023782
15. Klein, J.O., Raynaud, G., Serina-Karsky, F., Penard, D., Ruiz, P.: Adaptation de l’apprentissage par problème à l’IUT de Cachan. In: Questions de Pédagogies dans l’Enseignement Supérieur, ENSTA Bretagne, IMT-A, UBO, Brest, France (2019)
16. Dochy, F., Segers, M., Van den Bossche, P., Gijbels, D.: Effects of problem-based learning: a meta-analysis. Learn. Instr. 13, 533–568 (2003)
17. Viau, R.: L’évaluation comme source de motivation ou de démotivation? In: Erudit, n°127 (2002)
18. Fernandes, S., Conde, A., Abelha, M.: Student motivation in the first year of University: findings from the implementation of a PBL project for a real context. In: PAEE/ALE (2021)

Formative Assessment in Context of COVID-19 Pandemic
Ana Serrano(B), Anna Biedermann, José Luis Santolaya, Natalia Muñoz, and Inmaculada Plaza
Universidad de Zaragoza, Zaragoza, Spain
[email protected]

Abstract. The pandemic situation is contributing to the redesign of training models, promoting new scenarios or readjusting other already known pedagogical resources that help to deal with the uncertainty and doubts that have arisen. This context raises new requirements and solutions for the face-to-face, online and mixed models. The adaptation of spaces, compliance with prevention measures, interaction with students, methodologies and, especially, an assessment system that helps to keep track of the subject and fosters a more active attitude and commitment from the student are of great value. Based on reflection on the achievement of objectives, the follow-up of the subject, and self- and peer-assessment, an experience of formative assessment is presented in two environments, online and face-to-face. Both are supported by a process of self-assessment and peer-assessment, which has allowed students to successfully face the subject of Artistic Expression I, in the Degree in Engineering in Industrial Design and Product Development at the University of Zaragoza (Spain).

Keywords: Formative assessment · Online assessment · E-rubrics · COVID-19 · Higher education

1 Introduction

Over the last two decades, the countries belonging to the European Higher Education Area (EHEA) have incorporated significant changes into their pedagogical models, most notably the promotion of active teaching methodologies and continuous assessment strategies. After these changes, the outbreak of the COVID-19 pandemic put the world of higher education, in Europe and worldwide, under great pressure. This crisis has forced Higher Education Institutions (HEIs) to swiftly change their status quo, their ways of working and their environments, dramatically shaking up our communities [1]. The challenges most mentioned and highlighted by many authors during this period have been accessibility, affordability, online learning pedagogy, and basic requirements such as regulatory processes and communication platforms as means to provide teaching and online learning [2].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1576–1584, 2023. https://doi.org/10.1007/978-3-031-15928-2_137


Despite the preference of students for face-to-face teaching, dynamics had to change from mainly face-to-face training to online training environments [3]. In this context, the use of self-assessment tools can be useful in virtual environments as the basis of a formative and continuous online assessment system. The idea of assessment as a reflective process is becoming more and more present in the classroom. García-Peñalvo, Corell, Abella-García and Grande [4] explain that assessment constitutes a process of enormous complexity that requires continuous distribution throughout instruction rather than occurring only at scarce and specific moments, the worst scenario being when it is limited to the end of the process. Neus Sanmartí [5] delves into assessment as a useful and regulatory process from which to learn. At times, teachers pay more attention to the content transmitted than to its understanding by students; formative assessment is therefore proposed as one of the key aspects of continuous assessment. Thanks to this assessment, students know what they can improve and how to do it to correct possible errors. This reflective process leads to greater awareness in the student by providing activities aimed at reflecting on and assessing their own learning. In the context of engineering, experiences of self-assessment have been implemented to develop skills and attitudes such as commitment, responsibility, self-knowledge, or student autonomy [6–10]. A formative self-assessment/peer-assessment process [11] can contribute to highlighting strengths and weaknesses, facilitating a more complete vision of learning [12]. Hence the need to provide criteria and instruments, both to understand errors and overcome them and to recognize the successes obtained. The interesting question is: what aspects hinder or prevent the student from understanding a concept or carrying out a task? From this perspective, it becomes important to learn to evaluate oneself based on objectives, strategies and assessment criteria, as well as to have various assessment instruments that address not only the final results but also the process itself. García-Merino, Urionabarrenetxea and Bañales-Mallo [13] indicate that if students must be able to learn by themselves, it is not enough to evaluate only the final knowledge acquired; the process followed must be assessed as well. In this context, the students’ self-knowledge about their own reflection and understanding of their knowledge is essential for learning to take place [14], since it is the students themselves who can correct their mistakes, understand why they are wrong, and thus make appropriate changes. As students acquire the ability to carry out a critical and self-critical analysis, it will be easier for them to face their learning with autonomy.

2 Objectives

The main objective of this experience has been to apply an assessment process suited both to the online educational framework experienced in the first months of the pandemic, as a consequence of COVID-19, and to the current face-to-face environment, still subject to some restrictions. In both cases, the assessment process has been chosen to facilitate the follow-up of the Artistic Expression I subject, taught in the Degree in Engineering in Industrial Design and Product Development at the University of Zaragoza. More specifically, a formative assessment has been applied, supported by a process of self-assessment and formative peer-assessment.


3 Development of Innovation

3.1 Context of the Artistic Expression I Subject

Artistic Expression I is a compulsory 6-credit subject taught in the first semester of the first year of the Degree in Engineering in Industrial Design and Product Development of the University of Zaragoza. The main objective of the course is to provide students with a basis for applying different graphic techniques to the representation of industrial products. The representation of industrial products through different graphic techniques is needed when conceptually capturing the idea of a design; that is why developing the ability to sketch in an agile and efficient way is a fundamental aspect. Likewise, the course is intended to provide students with the principles and strategies of visual communication that allow them to carry out product presentations in a panel format that is effective and professional. Therefore, two main indicators of learning are considered: the ability to sketch in an agile and efficient way and the ability to produce effective and professional presentation panels. The content of the course is divided into three blocks, shown below. The self-assessment and peer-assessment surveys are designed and conducted after the end of each block. They seek to assess the process of adding new knowledge to that acquired previously.

Block I (weeks 1–4). Objectives: framing and proportion. Know how to see, understanding the product entirely from observation. Be able to identify the views of the product necessary to describe it. Apply an appropriate rendering scale for views of the whole product and details. Compose views in a balanced way. Initial distribution of axes in the composition phase. Maintain aspect ratios in all views of the product when rendering.

Block II (weeks 5–7). Objective: chiaroscuro. Synthetically project areas of light and shadow. Analyze and reflect on the shape of the light and shadow areas. Apply chiaroscuro through different graphic techniques. Use line modulation as a communication element. Create highlight backgrounds appropriate to the product. Compose with highlight backgrounds.

Block III (weeks 8–10). Objectives: design of presentation panels and formal exploration. Design presentation panels using grids. Establish color interactions adjusted to the product and communicative tone. Properly select typographic fonts. Describe a clear evolutionary sequence in the formal exploration of the product. Use a narrative and a tone of message adjusted to the characteristics of the product.

The experience presented was carried out in the subject of Artistic Expression I, taught online for the first time due to the pandemic situation in 2020 and face-to-face in 2021. It was carried out in order to find solutions for monitoring the subject in times of pandemic, without physical contact with the students, and to adjust the process to face-to-face teaching again. It was an opportunity to apply a formative assessment process in which the students’ self-assessment of the practical part of the subject is presented as a useful mechanism, due to its participatory and feedback components.


The Formative and Shared Assessment Network provides a definition of formative assessment [15], and the authors apply formative assessment taking this view into account. It is also in line with the view presented by Brown and Pickford, who define it as the procedure used to recognize and respond to student learning in order to reinforce it during the process [16]. In this sense, it is about promoting students’ awareness when it comes to achieving the learning objectives. The student carries out the self-assessment process based on his or her own analysis and reflection. In the peer-assessment process, the pair of students established by the teacher reciprocally evaluates their drawings, considering the degree of achievement of the objectives. The methodology is applied online, seeking the fulfilment of the objectives and the possibility of introducing interaction models in the assessment process (IMAP) in educational practice. The objectives to be achieved are discussed in detail and exemplified in each thematic/practical block by the teacher. These objectives are included at the beginning of the survey so that the student has them in mind before completing it, since the items revolve around them. It should also be noted that this type of formative assessment requires adaptations and is more complex to apply when the number of students is high.

3.2 Experience Schedule

The design and planning of the online experience was carried out over three months, from June to August 2020. The main phases of the experience time frame are listed below.
– June–July 2020. Detection of the needs of the online model, mainly methodological and assessment needs.
– August 2020. Design of the Google forms and of the Moodle course for the subject.
– September 2020. Information to students about the work process to follow.
– October–December 2020. Implementation of the experience.
A similar schedule was followed in 2021 to adapt the experience to the face-to-face situation.

3.3 Learning Assessment

The assessment of the training objectives is done through partial deliveries for each thematic unit, in which the acquisition of certain capacities and aptitudes is considered. These are distributed as follows: fit and proportion, techniques and chiaroscuro, and presentation panels and formal exploration. All of them are needed to train the student to sketch in an agile and efficient way, as well as to design effective and professional presentation panels. The students’ self- and peer-assessment weighs 10% of the final grade obtained. The results obtained in the online environment are very similar to those in the face-to-face one. They reveal that the aim of improving the follow-up and results in this subject, based on the students’ self- and peer-assessment together with the teacher’s feedback, has been achieved. Both environments allow students enrolled in the subject of Artistic Expression I, of the Degree in Engineering in Industrial Design and Product Development of the University of Zaragoza (Spain), to make visible and deepen their achievements, recognize weaknesses and strengths and give value to their learning process. Likewise, it is intended that the students develop a higher level of self-knowledge and commitment in following the subject. This aspect entails showing the possibilities offered for the student to be more aware when making attitudinal and procedural decisions, after recognizing aspects that can be improved. On the other hand, an earlier and more concrete detection of learning problems in the subject is expected, as well as the possibility of providing a flexible, inclusive response adapted to new teaching scenarios. This opens a space for reflection on whether the application of new assessment models, or the strategic adaptation of existing ones, leads to an improvement in the teaching-learning process, both at the level of academic results and in the perception of the process itself. The assessment indicators shown to the student reveal the level of achievement obtained with respect to the mastery of the competence. As the main advantages for teachers, we highlight that the indicators can be formulated in a collaborative and consensual way when designing the rubric, they trace levels of progress and observation of results of competences that can be measured, and they are transferable between subjects of the same degree. For the students, the rubric can be an efficient learning resource as well as a space for performance, due to its reflective potential, by clearly and synthetically collecting training objectives, levels of achievement and possible aspects to improve. On the other hand, it highlights the role of this resource in promoting students’ autonomy and responsibility. In both cases, the authors consider that this resource entails a greater awareness of the level of achievement, since the assessment criteria are shared.
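As an illustrative sketch only (not the authors' grading script), the weighted aggregation described above could be computed as follows. The 10% weight for self- and peer-assessment comes from the text; the even split of the remaining 90% across the three practice blocks is a hypothetical assumption.

```python
# Illustrative only: assembling a final grade when self-/peer-assessment weighs 10%.
# The split of the remaining 90% across the three practice blocks is an assumption.
def final_grade(block1: float, block2: float, block3: float, self_peer: float) -> float:
    """All inputs on a 0-10 scale; returns the weighted final grade."""
    practice_weight = 0.90 / 3  # assumed: remaining 90% shared evenly by the three blocks
    return practice_weight * (block1 + block2 + block3) + 0.10 * self_peer

# Example: practice marks of 8.0, 7.5 and 8.5 with a self/peer-assessment of 8.4.
print(round(final_grade(8.0, 7.5, 8.5, 8.4), 2))  # -> 8.04
```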

4 Results

The sample of students who participated in the experience was 81 out of a total of 89 enrolled in the 2020 course, and 81 out of 83 enrolled in the 2021 course. All of them signed the data processing consent expressing their willingness to participate in the self- and peer-assessment of the subject practices. No significant differences have been observed between the two courses. The table shown below includes the main contents of the questions in the self-assessment and peer-assessment survey for blocks 1–3 of the practices. In the first block of self-assessment and peer-assessment, students evaluated the aspects of their work presented in Table 1. The average score for this block was 8.36, the worst valued aspect (7.78) being the maintenance of the product proportions in all views and the best valued (9.0) the understanding of the block objectives.


Table 1. Self-assessment and peer-assessment questions that are included in 10% of the score.

Block 1:
– The drawing takes into account the global nature of the product
– The views describe the product adequately; the product is understood and recognized
– The proportions of the product are kept in all its views
– The composition that I apply is balanced
– The objectives sought in this block have been understood
– Reasonable time has been applied in the execution of the practices

Block 2:
– The modulation of the line has been used as a communication element
– The areas of light and shadow are projected synthetically
– Chiaroscuro has been applied with the 3 techniques efficiently
– The highlight backgrounds have been used to enhance the composition
– The objectives sought in this block have been understood
– Reasonable time has been applied in the execution of the practices

Block 3:
– The presentation panels are based on the use of the appropriate grid
– The fonts are properly selected
– The color interactions adjusted to the product and communicative tone are established
– The evolutionary sequence in the formal exploration of the product is described
– The objectives sought in this block have been understood
– Reasonable time has been applied in the execution of the practices

The average of the self-assessment in block 2 was 8.2. The best valued aspect was the understanding of the objectives (9.1) and the worst valued (7.6) the composition with highlight backgrounds. In the last block, the average grade was 8.61. The best valued aspect (9.37) was the understanding of the objectives and the worst valued (8.0) the evolutionary sequence in the formal exploration. In the peer-assessment of their classmates’ drawings, students gave an average grade of 8.68 in the first block, 8.46 in the second and 8.9 in the third. In view of the results, we can verify that the students were slightly more critical in the self-assessment, with an average across the three blocks of 8.35, than in the peer-assessment of their classmates, to whom they assigned an average grade of 8.66. The teachers’ assessment of these same works was more critical still, with an average score of 7.28. The vast majority of students (67.2% totally agree and 27.4% partially agree) perceive that the self-assessment process has allowed them to identify the aspects to improve (Fig. 1). Likewise, they are partially (46.4%) or totally (44.5%) satisfied with the results of their work, and they view positively the expansion of their learning with complementary content (63.5% totally agree and 30.8% partially agree).


Fig. 1. Reflection on the self-assessment process and satisfaction, expressed in percentage per block.
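As a minimal sketch of how such per-block and per-rater averages can be obtained (with toy data, not the study's real responses; the column names are illustrative), the comparison between self-, peer- and teacher assessment could be computed as follows.

```python
import pandas as pd

# Toy long-format scores (not the real data): one row per assessed work,
# with the practice block and who assigned the score (self, peer, or teacher).
scores = pd.DataFrame({
    "block": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "rater": ["self", "peer", "teacher"] * 3,
    "score": [8.4, 8.7, 7.2, 8.1, 8.5, 7.3, 8.6, 8.9, 7.4],
})

# Mean score per rater and block, mirroring the comparison reported in the text.
per_block = scores.groupby(["rater", "block"])["score"].mean().unstack()
overall = scores.groupby("rater")["score"].mean()
print(per_block.round(2))
print(overall.round(2))
```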

5 Conclusions

The experience developed describes the results of applying a formative assessment process based on self- and peer-assessment as an educational resource in times of pandemic. The main objective of this experience has been to keep track of the Artistic Expression I subject, taught online for the first time and subsequently in a face-to-face environment, in the Degree in Engineering in Industrial Design and Product Development at the University of Zaragoza (Spain). Formative assessment has been presented as a strength of the learning process by promoting cyclical and continuous reflection on the achievement of the objectives. However, it can also be a weakness if the assessment has not been strategically designed. In any case, it must be recognized as a key training process for lifelong learning, given its role as an integrative guide to knowledge. From the teacher’s point of view, the formative assessment carried out in this experience has applied reflective thinking in order to improve the student’s learning process. This reflection has entailed a cyclical design, providing feedback, commitment, responsibility and a higher level of awareness among the parties involved.


The application of this formative assessment model has been based on the participation and interaction of the students, through a process of self- and peer-assessment, which has allowed them to face the follow-up of the subject with positive results. The students were slightly more critical in their self-assessment than in the peer-assessment. On the other hand, the assessment made by the teachers of these same works was more critical, with close to one point of difference. This experience opens a space for reflection on whether the application of these opportunities translates into an improvement of the process, as a facilitator of learning, at the level of academic results and of the assessment system itself.

Acknowledgments. The authors thank the Department of Science, University and Knowledge Society of the Government of Aragon for funding this work through the EduQTech Reference research group, Ref. T49_20R.

References
1. UNESCO, IESALC: COVID-19 and higher education: Today and tomorrow. Impact analysis, policy responses and recommendations, p. 6, 9 April 2020. http://www.iesalc.unesco.org/en/wp-content/uploads/2020/04/COVID-19-EN-090420-2.pdf
2. Zuluaga, S.M.Y.: Teaching and online learning practices used in different universities during the Covid-19 crisis: findings and challenges. In: 40th International Conference of the Chilean Computer Science Society (SCCC), pp. 1–4 (2021)
3. Motyl, B., Filippi, S., Baronio, G., Villa, V.: Reasoning about technical drawing online teaching during COVID-19. In: Rizzi, C., Campana, F., Bici, M., Gherardini, F., Ingrassia, T., Cicconi, P. (eds.) Design Tools and Methods in Industrial Engineering II. ADM 2021. Lecture Notes in Mechanical Engineering. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91234-5_90
4. García-Peñalvo, F.J., Corell, A., Abella-García, V., Grande, M.: Online assessment in higher education in the time of COVID-19. Educ. Knowl. Soc. 21, 12 (2020)
5. Sanmartí, N.: 10 ideas clave. Evaluar para Aprender. Graó, Barcelona (2018)
6. Valero-García, M., de Cerio, L.M.D.: Autoevaluación y co-evaluación: estrategias para facilitar la evaluación continuada. In: Actas del Simposio Nacional de Docencia en Informática (SINDI), pp. 25–32, Granada (2005)
7. Pozo, M.: Experiencias en aplicación de actividades de coevaluación y autoevaluación en el aula. Innovación Docente en la Ingeniería, pp. 57–62. DICADE, Guatemala (2017)
8. Ramis, J., Payeras, M.M., Carrasco, L.: Experiencia de Implantación de Estrategias de Autoevaluación y Coevaluación en el Grado de Ingeniería Telemática. In: XIII Jornadas de Ingeniería Telemática (JITEL 2017). Libro de actas, pp. 326–333. Editorial Universitat Politècnica de València (2018)
9. Farrerons, Ó., Pujol, A.: Proyectos de Ingeniería Industrial, autoevaluación. Una experiencia en la Universidad Politécnica de Catalunya. In: VI International Symposium of University Teaching CINDU 2019, pp. 1–5, Barcelona (2019)
10. Pujol, A., Farrerons, Ó.: Autoevaluación en la ingeniería industrial: prueba piloto en Expresión Gráfica en la UPC-Terrassa. In: La Transformación digital en la Ingeniería Gráfica, p. 65 (2019)
11. Rodríguez Gómez, G., Ibarra Saiz, M., García Jiménez, E.: Autoevaluación, evaluación entre iguales y coevaluación: conceptualización y práctica en las universidades españolas. Revista de Investigación en Educación 2(11), 198–210 (2013)
12. Pastor, V.M.L., Pascual, M.G., Martín, J.B.: La participación del alumnado en la evaluación: la autoevaluación, la coevaluación y la evaluación compartida. Revista Tándem Didáctica Educación Física 17, 21–37 (2005)
13. García-Merino, J.D., Urionabarrenetxea, S., Bañales-Mallo, A.: Cambios en metodologías docentes y de evaluación: ¿Mejoran el rendimiento del alumnado universitario? Revista electrónica de investigación educativa 18(3), 1–18 (2016)
14. Forés, A., Subías, E.: Pedagogías emergentes: 14 preguntas para el debate. Octaedro Editorial, Barcelona (2018)
15. Pérez, A., Julián, J.A., López, V.M.: Evaluación formativa y compartida en el Espacio Europeo de Educación Superior (EEES). In: López Pastor, V.M. (coord.) Evaluación formativa y compartida en educación superior, pp. 19–43. Narcea, Madrid (2009)
16. Brown, S., Pickford, R.: Evaluación de habilidades y competencias en educación superior. Narcea, Madrid (2013)

Think-Sketch-Create: Improving Creative Expression Through Sketching
M. Belén Calavia1,2(B), Teresa Blanco2,3, Ana Serrano1, Anna Biedermann1, and Roberto Casas2,4
1 Department of Design and Manufacturing Engineering, University of Zaragoza, Zaragoza, Spain
[email protected]
2 HOWLab - Human OpenWare Research Group, Aragon Institute of Engineering Research (I3A), University of Zaragoza, Zaragoza, Spain
3 Department of Musical, Plastic and Body Language Expression, University of Zaragoza, Zaragoza, Spain
4 Department of Electronic Engineering, University of Zaragoza, Zaragoza, Spain

Abstract. Sketching skills are fundamental for engineers, especially in the field of design. However, developing this skill is not straightforward; most first-year students have only technical drawing training, and the attempt to achieve hyperrealistic finishes usually blocks them. Based on multidisciplinary work, this study presents a teaching intervention: the ‘Think-Sketch-Create’ (TSC) workshops, aimed at helping students improve their ability to express their thoughts and ideas through sketching. These workshops consist of two blocks: warm-up and experimentation. The TSC workshops are applied and assessed through qualitative and quantitative methods in a subject with 84 university students. The assessment of the TSC workshops shows their positive integration into the curriculum, their utility in establishing a satisfactory climate that avoids blockage and the fear of making mistakes, and their ability to broaden knowledge. In short, this paper evidences the interest of training future engineers in sketching and the need to support teachers with new methods that help them foster creative expression in their students.

Keywords: Design education · Teaching skills · Sketching · Workshop design · Creativity

1 Introduction

The need for professionals with boundary-crossing skills in different work sectors of today’s world requires training students through competency-based learning [1, 2]. Indeed, the concept of ‘skill’ is understood as the dynamic combination of knowledge, understanding, and abilities that prepare students for their future employment and role in society [3]. Thus, in the engineering context, and especially in the design and Design Thinking (DT) fields, one of the most relevant skills to be developed by the students is the expression of thoughts and ideas with sketches [4–6].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
S. Gerbino et al. (Eds.): JCM 2022, LNME, pp. 1585–1597, 2023. https://doi.org/10.1007/978-3-031-15928-2_138


DT is based on formulating and solving complex problems through an analytical and creative human-centred process [7–12]. This process generally consists of three main phases: inspiration, ideation, and implementation [13]. Sketches have an essential role during specific moments of these DT phases, such as ideation, conceptualisation, and presentation. Sketching allows designers to develop, externalise, record, and communicate the ideas they have in mind and to generate new ones [14]. Therefore, there is a strong relationship between sketching and DT. Likewise, sketching is related to creativity [15], one of the most important 21st-century thinking skills, understood as the ability to connect learned knowledge to solve problems and create new things [16, 17]. Expression through sketching favours creative problem solving through methods such as Visual Thinking (VT), which provides the analogies needed to develop new structures [18]. Therefore, most creative processes extensively use visual thinking, and thus sketching is considered a means of stimulating and supporting creative thinking. However, teaching and developing the ability to express thoughts with sketches is not straightforward. Children draw without caring whether they do it right or wrong; nevertheless, this language and spontaneity are lost when they are instructed. Most academic environments generally consider drawing and creativity a pastime in practice [19]. In this sense, most first-year engineering students have not used drawing as a means of communication for years, and their previous training is primarily technical. Likewise, the attempt to achieve perfection or hyperrealism in their drawings leads to a blockage that prevents them from developing this sketching skill. As a result, many students often use only verbal language to communicate their ideas during ideation, and 3D modelling and rendering software to conceptualise and present solutions. This effect is partly because, according to Norp and Van Hoek [20], many times during training programs students spend hours creating ‘perfect’ colour gradients or getting the right lines to produce the most excellent final drawing they can achieve. Nevertheless, this is not aligned with professional practice; it is more helpful for their future to express themselves through sketching quickly and collaboratively, fostering communication between group participants through a common ground for discussion: the sketches. In engineering training, some studies propose tools for teaching sketching skills. However, there are few interventions focused on reducing the blockage that hinders developing this skill [4]: Worinkeng, Summers, and Joshi [21] proposed a pre-sketching activity to foster concept generation, and Booth et al. [4] raised ‘freehand sketching’ and ‘sketch inhibition’ modules to reduce inhibition. Meanwhile, in the field of artistic education, activities to teach sketching and address the blockage are more common; these activities seek to use the right side of the brain, enhancing creativity [22]. Thus, practical solutions are needed to help teachers popularise creative expression through sketching in future engineers as a universal language, better aligning the relationship between training and the professional practice of design sketching. To address this situation, in the following sections of this paper, we present the workshops called Think-Sketch-Create (TSC). Section 2 shows the rationale, the application, and the assessment of the TSC workshops.
Section 3 collects the results and discussion of the TSC assessment, conducted in a real educational environment with end users (engineering students), through qualitative and quantitative methods. From this experimentation, the TSC workshops’ utility, the advantages, difficulties, and limitations derived from their implementation, and the feedback necessary for their improvement are discussed.

2 Materials and Methods

2.1 Workshop Rationale and Definition

A high level of x-disciplinarity is required to design and incorporate a new method into the educational environment. X-disciplinarity is a term assigned to the set of possibilities of collaborative work: multi-, cross-, inter-, trans- [23]. Thus, in this research, a multidisciplinary team composed of specialists in design, art, architecture, creativity, and technology was created. From the beginning, the five members of the team (the manuscript authors) worked together during several sessions, considering the visions and experiences of each discipline. The sessions focused on replacing the methodology based on hyperrealism or perfect finishes with activities that lead students to experiment and open new doors of discovery. Some verbatims about the team’s intention were collected during the sessions: ‘Sketching and not rendering’, ‘We want them to experiment’, ‘Dynamics to free them from the blockage’, ‘Foster creativity’. From the collaboration between disciplines, we propose the ‘Think-Sketch-Create’ (TSC) workshops, which aim to help students improve their skill of expression through sketching, reducing the blockage. These workshops consist of 3-h sessions structured into two blocks: warm-up and experimentation. The activities in the first block aim to get students to ‘let their hands loose’ through activities that lead them to sketch their thoughts directly, regardless of perspective, detail, or quality, just flowing. The activities in the second block seek to make the students experiment with different sketching techniques without fear of making mistakes, thinking and reflecting about the process and not just about the final result, and knowing that if one sketch does not achieve the desired result, they can do more.

2.2 Context and Participants

The TSC workshops were applied and assessed by the multidisciplinary team in an educational environment with end users, specifically in the ‘Artistic Expression’ subject of the Engineering in Industrial Design and Product Development degree at the School of Engineering and Architecture (EINA) of the University of Zaragoza, taught in the first year and compulsory for all students. The subject deals with different representation techniques for industrial products. One course group participated in the study; a total of 84 participants aged 18–22 years applied the TSC workshops to develop their sketching skills.

2.3 TSC Application

TSC was applied during the chiaroscuro practices, in three sessions of three hours each. To guide the students properly during the workshop, the 84 students were divided into four groups of 21 members. Each of these sessions (twelve in total) was guided by two multidisciplinary team members and followed the two general blocks mentioned above. The first block, called the warm-up block, lasts 90 min and consists of the following phases:
• Unblock activity: a straightforward and short workflow whose objective is for students to disconnect from the previous class and immerse themselves in the context of sketching.
• Presentation: the aim is to present and explain some theoretical aspects of the chiaroscuro drawing technique to be used.
• Warm-up activity: an activity whose objective is to begin to experiment and test the drawing technique.
• Initial sketch: aimed at fitting the shape of the industrial product chosen by the student by making initial sketches.
This is followed by a break for the students, while the teachers photocopy the initial sketches developed by the students. Then, the second block, called the experimentation block, begins. This block lasts 90 min and consists of the following phases:
• Ctrl-Z: an activity whose aim is that students can experiment without fear, trying and ‘dirtying’ the paper, with the possibility of ‘undoing’ their actions. For this, they use the photocopies of their initial sketches.
• Inspiration presentation: the aim is to show some examples of sketches made with the explained drawing technique, highlighting that ‘less is more’ and that they should not look for hyperrealism.
• Final sketch: during this time, students develop their final sketch from the initial sketch, synthesizing all the knowledge and experiences learned during the session. Meanwhile, the teacher provides feedback to help students reflect and discover themselves.
Table 1 shows the application of TSC in the three chiaroscuro sessions.

Table 1. Application of Think-Sketch-Create (TSC).

Unblock activity (10 min)
– First session: Thirty circle challenge: convert as many circles as possible into recognizable objects. Draw what you have in mind, regardless of perspective or quality, flowing.
– Second session: To warm the fingertips, the back as a canvas: draw with the finger on the back of a classmate while capturing what he/she perceives. Mute!
– Third session: Start smudging to colour a sketch quickly (drawing outside the lines is allowed). Then apply layers of markers and pencils to create shadows. Use pastels for the lights.

Presentation (15 min)
– First session: Explanation, tips, and short exercises. Chiaroscuro drawing technique: pencils.
– Second session: Explanation, tips, and short exercises. Chiaroscuro drawing technique: pastels.
– Third session: Explanation, tips, and short exercises. Chiaroscuro drawing technique: markers.

Warm-up activity (25 min)
– First session: Templates and canvas to make gradients, colour palettes, and pencil patterns.
– Second session: Templates and canvas to make spheres and cubes in gloss and matte finish; scratch and smudge with pastels.
– Third session: Group sketching: redesign a water bottle, salt shaker, vase, or air freshener in groups. Use drawings, titles, and callouts; speaking is not allowed.

Initial sketch (25 min)
– All sessions: Composition and shape sketches with pencils.

Ctrl-Z activity (20 min)
– First session: Experimentation of chiaroscuro with pencils on photocopies.
– Second session: Experimentation of chiaroscuro with pastels on photocopies.
– Third session: Experimentation of chiaroscuro with markers on photocopies.

Inspiration presentation (10 min)
– First session: Examples of chiaroscuro sketches using pencils.
– Second session: Examples of chiaroscuro sketches using pastels.
– Third session: Examples of chiaroscuro sketches using markers.

Final sketch (60 min)
– All sessions: From the initial sketch and experimentation, applying the corresponding technique.

Finally, a short closing session was conducted, in which a self-assessment, a survey, and an exercise of ‘internalise and simplify’ were developed. In the self-assessment, each student reflected on and assessed her/his own work and the work of another classmate. The survey was distributed among the students to find out their perceptions. During the internalise-and-simplify exercise, students were challenged with the following task: imagine you are in a meeting, and you have to show the rest of the team a product in 10 min. How would you do it? In this manner, the students reflected on the product sketches made in the previous weeks and the techniques and knowledge they had learned (Think). They synthesised and simplified this learning to make a sketch in just 10 min (Sketch). Thus, they developed a ‘solution’ to the challenge (Create).

2.4 Assessment

During the application, the multidisciplinary team acted as evaluators in the field (specifically, the three members who acted as lecturers). They assessed the use of TSC by students through a survey, as well as observation, field notes, and periodic internal discussions [24]. Xassess is the general framework used for the assessment; it is an evaluation methodology that merges qualitative and quantitative approaches and is centred on multidisciplinary teams [25].

Survey: Individual surveys were conducted using questionnaires with open and closed questions on the Google Forms platform. The purpose of this survey was to collect scores and feedback about TSC from the students. The survey was divided into five sections. The first, ‘control questions’, included questions about the participants’ previous drawing training and what blocked them before drawing. The second section, ‘development of the TSC workshops’, had questions related to the utility of structuring the classes in phases and the usefulness of these phases. The third, ‘workshop contributions’, contained questions about the contributions of each phase, the drawing techniques, and the skills developed. The fourth section, ‘workshop improvements’, included questions to discover improvements proposed by the participants. Finally, the fifth section, ‘sketching training’, had questions to gauge the participants’ interest in sketching training. The resulting data were exported to MS Excel. The closed questions (quantitative) were examined using descriptive statistics and visual graphics. These graphics were made with the Python 3.8 language and the Pandas, Matplotlib, and Seaborn libraries. The open questions (qualitative) were manually coded and grouped according to similarities in the reflections made by the students. Thus, Sect. 3 presents quantitative graphs as well as verbatim quotations that support the assessment.

Observation, field notes, and periodic internal discussions: During the observation, the professors collected field notes which were shared in periodic internal discussions of 25 min at the end of each session (twelve in total). These annotations made it possible to identify the key features and contributions of the TSC workshops, collected in the Discussion section. Thus, the assessment allows checking to what extent the TSC workshops are integrated into the curriculum (1), to what extent the workshop process is suitable for the students (2), and how it broadens their knowledge for their future (3). In this manner, the results and the main contributions of this study follow this three-item structure.
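The following is a minimal sketch (not the authors' actual script) of how the closed questions could be summarised with the Pandas, Matplotlib and Seaborn libraries mentioned above; the item names and toy scores are hypothetical placeholders standing in for the exported responses.

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Toy responses standing in for the Google Forms export (real data not available).
responses = pd.DataFrame({
    "phases_useful":         [7, 6, 7, 5, 6, 7],
    "experimentation_value": [6, 7, 7, 6, 5, 7],
    "less_fear_of_mistakes": [5, 6, 7, 6, 6, 7],
})

# Descriptive statistics for the closed (quantitative) questions.
print(responses.describe().T[["mean", "std", "min", "max"]])

# Bar chart of mean Likert scores per item as a quick visual summary.
long_form = responses.melt(var_name="item", value_name="score")
sns.set_theme(style="whitegrid")
ax = sns.barplot(data=long_form, x="item", y="score")
ax.set_ylabel("Likert score (0-7)")
ax.set_xlabel("")
plt.tight_layout()
plt.show()
```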

3 Results and Discussion

This section discusses the results obtained from the assessment and the lessons learned from applying the TSC workshops to improve students’ skill of expression through sketching.

3.1 Incorporation and Adaptation to the Curriculum

According to Calavia et al. [16], incorporation and adaptation to the curriculum are the first premises to consider when developing a material, to ensure its viability in the classroom. TSC is adapted to the subject in timing and scope for two reasons. The first and most straightforward reason is that, in line with Blanco et al. [23], creating a multidisciplinary team from the beginning makes it easier to adapt TSC to the environment. The second reason is that these workshops are justified within the general structure of the subject. The course begins with a first block on understanding reality (proportions, scales, etc.), then comes the chiaroscuro block, and finally it moves towards a third block of formal exploration, i.e. the subject moves from ‘what is’ to ‘what can be’. There is a pathway to move students from something tangible that can be represented towards divergent thinking that leads them to create and develop new forms. Thus, the second block of chiaroscuro, where the TSC workshops are framed, is in the middle and serves as a link between the other two blocks. Therefore, applying the approach of the TSC workshops in this block allows students to obtain small knowledge pills that ‘open’ their minds and help them assimilate new resources for action, facilitating the transition between the three blocks of the subject. Additionally, starting with a more convergent block and ending with a more divergent one makes sense because more than 90% of the students who participated in this study had only technical drawing training. In fact, most of them indicated that what they found most difficult when starting to draw freehand was ‘not using ruler and compass’.


It is important to note that these are students who start from convergent thinking, using the left brain to achieve the solution. They need to develop divergent thinking, using the right brain to foster creativity and generate many solutions, since this shapes them as designers [26]. As novice designers, students need to be trained with methods such as the TSC workshops that lead them to develop their own design patterns, accompanying them in this transition.

3.2 Valuation of the TSC Workshops

The students assessed and reflected on the contributions of each of the phases of the TSC workshops (Fig. 1).
• Students noted that the unblock activities helped them ‘to break the ice and relax’, ‘to gain confidence to face the following activities’, and ‘to stimulate their minds and feel more creative’.
• The presentations were appreciated as a way ‘to know and understand the techniques’, as well as ‘to be able to see other examples for inspiration’.
• The warm-up activities are seen as a ‘direct application of theory’ that allows ‘discovering and experimenting with the new technique to gain fluency’.
• The experimental activities allowed them ‘to discover what works and what does not’, ‘to try freely and without fear of damaging the end result’, and ‘to discover by trial and error’.
• Finally, the making of the final sketch was ‘the manner of reflecting all the previous phases’ while receiving ‘feedback and advice to open the mind and improve the work’.
Regarding the survey item about which phases were the most useful (Fig. 1), it can be seen that the experimental activities were the most highly valued. This is understandable because the students relaxed and openly experimented without fear of failure, under the slogan of ‘learning by doing and failing’ [27]. In contrast, the presentations were the least chosen by students. This is reasonable since, although the presentations were very visual, they are a typical resource in the classroom, and the student does not participate directly as in the other activities.

Fig. 1. Phases selected as the most useful by students, in percentage (multiple-choice question).

Similarly, in response to the question ‘Is it useful to structure the 3 h of class in phases?’, they answered positively almost unanimously (97.6%). Students indicated that it was useful for following and understanding the process: ‘it is more enjoyable and interesting; if not, it is tiring and monotonous’, ‘without this distribution, I might have completed the tasks faster, but I would not have known the purpose of the practices’.


However, some students indicated that they would have liked to have more time to complete their final sketch ‘because it is what counts towards our grade’, ‘so we do not have to finish this at home’. These reflections have to do with traditional systems based on extrinsic motivators: the students perceived the time given for the final sketch, which is assessed with a quantitative mark, as insufficient, sensing the evaluation as separate from learning. This is a tough matter, but we align with Sternberg and Lubart [28] on the necessity of devaluing the importance of final marks. We believe that the adoption of workshops such as the one proposed could mitigate this phenomenon. We focus on students assimilating the concept of thinking about the process and not only about the final result, in line with Kleon [29], and on supporting them with continuous feedback. This concept matches the type of drawing that is expected in the subject; they should not achieve hyperrealistic finishes but produce process sketches that include what is necessary for their interpretation (indications of use, materials, arrows, callouts, cuts, dimensions, etc.). In this manner, they can develop, externalise, record, and communicate their thoughts and ideas during the Design Thinking process, in accordance with Gryaditskaya et al. [14]. To enhance this approach in future applications, it is proposed that, in the next course, students document the entire process up to the final sketch (versions, experimentations, initial sketches, etc.), together with a reflection on decision making. Likewise, it is proposed to add a mandatory final reflection phase at the end of each session so that students can see and learn from the work of their colleagues.

Valuation of the Drawing Techniques. Regarding the three techniques used to work on chiaroscuro, 100% of the students admitted that they were suitable for ‘learning to add volume to our sketches’. Specifically, of the pencil technique they appreciated that ‘a lot can be done with a very simple material’ and its usefulness to ‘modulate the line’. Likewise, this technique led most of the students to reflect on the importance of ‘understanding that not everything should be painted’ and ‘not filling the paper’; this is essential to achieve fresh finishes (Fig. 2a). About the pastel technique, they appreciated that it allowed them ‘to give dynamism to my drawings’ as well as ‘to spot and enjoy’ and ‘to cover large surfaces in a short time’. However, some students pointed out that ‘it did not contribute anything to me’, ‘I think it is more focused on artistic drawing’, and some even commented on the possibility of removing it. Thus, it is necessary to work on getting all students to understand it as a technique for providing volume and colour quickly, emphasizing the use of the least amount of material and thinking about what is essential and what can be omitted. There is no interest in obtaining a perfect realistic render with pastels (see Fig. 2b). However, consideration could be given to changing the subject’s name from ‘Artistic Expression’ to (for example) ‘Expression through Sketching’ to avoid the association with art and preconceived ideas. Finally, regarding the marker technique, they appreciated ‘achieving more professional results’ as well as ‘surprisingly matte and glossy finishes of plastics’. Likewise, this technique led most of the students to reflect on the importance of ‘thinking well before painting because the marker cannot be removed’.
This is fundamental because before sketching a product, it is important to think and plan; for example, in this case (Fig. 2c), the direction of the light for highlights and shadows.


Fig. 2. Examples of sketches using (a) pencils, (b) pastels, and (c) markers.

During the sessions, we insisted on the idea that ‘where you cannot achieve something with one technique, apply another’. However, we consider that in future applications it would be interesting to design the last practice so as to combine the three techniques. In addition, the group sketching activity during the third session worked particularly well because the students used only the sketch as a means of communication (Fig. 3), so it would be interesting to implement more group activities. Training in sketching quickly and collaboratively is fundamental for their future because it will allow them to express themselves, foster new ways of communication, and promote creative proposals.

Fig. 3. Group sketching activity.

3.3 Broadening Knowledge

All these experiences led students to value the following statements positively, as shown in Fig. 4. These results support our hypothesis about the utility of the TSC workshops in helping students to unblock themselves and communicate through sketching. In addition, the students perceived that the workshops were useful for achieving the subject contents related to chiaroscuro and well-balanced compositions. They also highlighted the improvement in their ability to think more about the process and less about the end result. Therefore, according to the students’ view and considering that sketching stimulates creative thinking [15], the workshop contributes to fostering divergent thinking, enabling students to think creatively.


Fig. 4. Assessment of the statements. Likert scores range from 0 (minimum valuation) to 7 (maximum valuation).

This learning is visible in the internalise and synthesise exercise, in which they applied all the acquired knowledge in a condensed manner in just 10 min. Results are quick and fresh (Fig. 5), including elements learned during the subject such as hidden lines, line thickness, background, traces, arrows, annotations, or dimensions.

Fig. 5. Examples of the internalise and synthesise exercise.

Finally, to ascertain the satisfaction and the degree of interest awakened in the students, we posed the question of whether these types of workshops to foster sketching are interesting for their training and whether they would like to broaden their knowledge or information in the future. As shown in Fig. 6, the assessments are positive; the average is 6.4 ± 0.7 and 6.2 ± 0.9, respectively.


Fig. 6. Assessment of perceived interest and broadening of knowledge. Likert scores range from 0 (minimum valuation) to 7 (maximum valuation).

In order to inquire about the workshop contributions from the teaching perspective, five indicators related to learning achievements are identified in the subject’s teaching guide. First, according to the teachers’ perspective, the ‘agility’ of the students improved thanks to the fluency of the process and the deadlines established in each session. This agility connects with the second item, the increase in ‘collaboration’; all groups of students showed a unanimous willingness to participate in the activities, with an open and fresh initiative that uninhibited them, improving the third indicator, enjoyment. The atmosphere created in the workshops was favourable, and the ‘enjoyment’ during the activities was tangible among all the actors involved, improving teacher-student and student-student relations. This effect is also reflected in the fourth item, the ‘result of the evaluation of the teaching activity’, where the teachers received a ‘positive distinction’. Finally, the academic marks were positive; however, no substantial changes were observed compared with previous courses. Perhaps this is due to the fact that, as mentioned above, students had less time for the exercise being assessed. In sum, we can reaffirm that the skill of expressing oneself through sketching is fundamental in the training of engineers, especially in the case of design specialists, in line with Hilton et al. [6]. This skill requires further promotion and training throughout the degree to enable future designers to use sketches as a means of communication in their work teams, without the need to use 3D modelling or rendering programs until later or final stages. Thus, the TSC workshops are not a unique solution but a step towards improving these concepts in future designers’ training.

4 Conclusions

This paper proposes the Think-Sketch-Create (TSC) workshops to train future designers in sketching so that they are able to express their thoughts and ideas in work teams. For the workshop development, a multidisciplinary team (formed by specialists in design, art, architecture, creativity, and technology) worked together, focusing on a teaching intervention that leads students to experiment, far from trying to achieve hyperrealism or perfect finishes. As a result, TSC is embodied in practical workshops, in which students experiment with different sketching techniques, thinking and reflecting about the process and not just about the final result. TSC was assessed quantitatively and qualitatively


through its application in a compulsory subject of the Engineering in Industrial Design and Product Development degree, with 84 university students. The results of the experience allow us to confirm the successful integration of the TSC workshops in the subject, and suggest that the same scheme could be applied in other classes. The previous multidisciplinary work, as well as the structure of the subject (moving from convergent to divergent thinking), meant that TSC was integrated naturally. Additionally, the assessment provides evidence of the effectiveness of the workshop with respect to the proposed objectives: the students experimented openly, thought about the process, learned chiaroscuro techniques, and understood that they should not aim for hyperrealistic finishes. Thus, the TSC workshops are a first step that helps future designers unblock themselves and use sketching as a means of communication during the design process. Consequently, this study reflects the potential of fostering sketching skills and the need to develop actions for their continuous training.

References

1. Daly, S.R., Adams, R.S., Bodner, G.M.: What does it mean to design? A qualitative investigation of design professionals' experiences. J. Eng. Educ. 101(2), 187–219 (2012)
2. Tulsi, P., Poonia, M.: Expectations of industry from technical graduates: implications for curriculum and instructional processes. J. Eng. Educ. 28(4), 42–49 (2015)
3. Tuning, P.: Tuning educational structures in Europe. Informe final. Proyecto piloto. Fase 1. Universidad de Deusto, Bilbao (2003)
4. Booth, J.W., Taborda, E.A., Ramani, K., Reid, T.: Interventions for teaching sketching skills and reducing inhibition for novice engineering designers. Des. Stud. 43, 1–23 (2016)
5. Eissen, K., Steur, R.: Sketching: Basics. Bis Publishers, Amsterdam (2012)
6. Hilton, E.C., Paige, M., Williford, B., Li, W., Hammond, T., Linsey, J.: Improving the sketching ability of engineering design students. In: Proceedings of the 21st International Conference on Engineering Design, vol. 9, pp. 217–224 (2017)
7. Brown, T.: Design thinking. Harv. Bus. Rev. 86(6), 84 (2008)
8. Carroll, M.P.: Shoot for the moon! The mentors and the middle schoolers explore the intersection of design thinking and STEM. J. Pre-Coll. Eng. Educ. Res. (J-PEER) 4(1), 14–30 (2014)
9. Dorst, K., Cross, N.: Creativity in the design process: co-evolution of problem–solution. Des. Stud. 22(5), 425–437 (2001)
10. Howard, T.J., Culley, S.J., Dekoninck, E.: Describing the creative design process by the integration of engineering design and cognitive psychology literature. Des. Stud. 29(2), 160–180 (2008)
11. Mosely, G., Wright, N., Wrigley, C.: Facilitating design thinking: a comparison of design expertise. Thinking Skills Creativity 27, 177–189 (2018)
12. Razzouk, R., Shute, V.: What is design thinking and why is it important? Rev. Educ. Res. 82(3), 330–348 (2012)
13. Brown, T., Wyatt, J.: Design thinking for social innovation. Dev. Outreach 12(1), 29–43 (2010)
14. Gryaditskaya, Y., Sypesteyn, M., Hoftijzer, J.W., Pont, S.C., Durand, F., Bousseau, A.: OpenSketch: a richly-annotated dataset of product design sketches. ACM Trans. Graph. 38(6), 232–241 (2019)
15. Van der Lugt, R.: How sketching can affect the idea generation process in design group meetings. Des. Stud. 26(2), 101–122 (2005)


16. Calavia, M.B., Blanco, T., Casas, R.: Fostering creativity as a problem-solving competence through design: think-create-learn, a tool for teachers. Think. Skills Creativity 39, 100761 (2021)
17. Kleiman, P.: Towards transformation: conceptions of creativity in higher education. Innov. Educ. Teach. Int. 45(3), 209–217 (2008)
18. Verstijnen, I.M., van Leeuwen, C., Goldschmidt, G., Hamel, R., Hennessey, J.M.: Sketching and creative discovery. Des. Stud. 19(4), 519–546 (1998)
19. Jenny, P.: Técnicas de dibujo. Gustavo Gili, Barcelona (2013)
20. Norp, B., Van Hoek, E.: Industrial design sketching in practice. In: 21st International Conference on Engineering and Product Design Education (E&PDE), pp. 1–10 (2019)
21. Worinkeng, E., Summers, J.D., Joshi, S.: Can a pre-sketching activity improve idea generation? In: Abramovici, M., Stark, R. (eds.) Smart Product Engineering. Lecture Notes in Production Engineering. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-30817-8_57
22. Edwards, B.: Drawing on the right side of the brain. In: CHI'97 Extended Abstracts on Human Factors in Computing Systems, pp. 188–189 (1997)
23. Blanco, T.: Metodologías de diseño como plataforma para la x-disciplinaridad en proyectos tecnológicos: surfing disciplines. Doctoral dissertation, Universitat Politècnica de València (2016)
24. Lodico, M.G., Spaulding, D.T., Voegtle, K.H.: Methods in Educational Research: From Theory to Practice, vol. 28. Wiley, San Francisco (2010)
25. Blanco, T., Berbegal, A., Blasco, R., Casas, R.: Xassess: crossdisciplinary framework in user-centred design of assistive products. J. Eng. Des. 27(9), 636–664 (2016)
26. Blanco, T., López-Forniés, I., Zarazaga-Soria, F.J.: Deconstructing the Tower of Babel: a design method to improve empathy and teamwork competences of informatics students. Int. J. Technol. Des. Educ. 27(2), 307–328 (2015)
27. Bain, K.: Lo que hacen los mejores profesores universitarios. PUV, Valencia (2005)
28. Sternberg, R.J., Lubart, T.I.: La creatividad en una cultura conformista: un desafío a las masas. Paidós Ibérica, Madrid (1997)
29. Kleon, A.: Aprende a promocionar tu trabajo: 10 recursos para artistas, diseñadores y creativos. Gustavo Gili, Barcelona (2016)

Methodology for Online Assessment of Technical Drawing Skills of Industrial Engineering Students

Diego Paderno(B), Ileana Bodini, Stefano Uberti, Valerio Villa, and Gabriele Baronio

Department of Mechanical and Industrial Engineering, University of Brescia, Brescia, Italy
[email protected]

Abstract. Due to the pandemic, there has been a shift from paper-based in-person exams to online exams. To still be able to evaluate students, a closed-ended test was implemented to verify the skills acquired by the exam candidates. We have developed, on the Moodle platform, an online test called Technical Drawing Test (TDT2), based on graphic questions with closed answers, both single and multiple choice, replacing the open-ended graphic questions of the previous in-person paper exam mode while keeping the same contents. This article aims to show the method by which the types of exercises were chosen according to the skills to be tested, and to present the first results obtained, with the goal of verifying that the new test is equivalent to the old paper-based open-ended test in its ability to assess candidates' skills. The reliability of the TDT2 is quantified through two synthetic statistical indexes calculated by the Moodle platform and by comparing the results of the online TDT2 with the results of the paper-based in-person exams.

Keywords: Technical drawing · Engineering education · Testing methods · Moodle

1 Introduction

The recent pandemic situation related to COVID-19 has resulted in an evolution and integration of the tools used for teaching, both in terms of how content is taught and how students are tested for acquired knowledge, skills, and competences. This work is part of the research related to the objective evaluation of Technical Drawing (TD) skills acquired by students, according to the criteria established in the Technical Drawing Evaluation Grid (TDEG) proposed in [1] in 2011 and based on the European Qualifications Framework (EQF). The definition of the grid has been followed by a coherent evolution of the way the Industrial Technical Drawing examination is conducted at the Faculty of Engineering of our university, up to the formulation of the online skills test presented here, whose validation method we describe [2, 3].


Historically, the TD exam has been composed of two parts: a graphical test, consisting of the representation of a given real part or the extraction of a component from an assembly, and an oral test. The exam was held in this way in our university as well, from 2000 to 2005. An initial change to the assessment methods, prior to the TDEG proposal, was the introduction of a paper test on notions of theory with closed-ended questions to replace the oral test. This Technical Drawing Test (TDT), quick to correct, was taken before the graphical test and, if passed, allowed access to the graphical test. The TDEG was then proposed, defining knowledge, skills and competences as the three areas for the objective evaluation of students; the TD exam was implemented according to these concepts and made up of three tests, all paper-based: i) a first TDT with closed-ended questions aimed at the evaluation of theoretical knowledge; ii) a second test with graphical essay questions (GEQ) aimed at the evaluation of individual skills; iii) a third test consisting of the graphical drawing of a component extracted from an assembly, aimed at the evaluation of competencies. Access to the next test is conditional on passing the previous one.

The natural evolution of the first test (TDT) came in 2014 with its implementation as an online test on the Moodle platform, based on the same database of questions as the paper tests, but with automatic correction. In this way, the full exam takes place in a single day and only those who have passed the first test remain for the second and third tests. The teacher then corrects the third test only for those students who have passed the second one.

During the pandemic in the spring of 2020, to ensure the continuity of the service provided by the university, it was necessary to conduct examinations at a distance. In this context, efforts were made to make the best use of IT tools and various e-learning platforms, especially for courses with a high number of students [4]. The modular structure of the TD exam was well suited to adapt to the new situation [5–11]. The full exam remained composed of three parts, all taken online, divided as follows:

– a first online test to verify theoretical knowledge. We used the TDT with closed-ended questions, on the Moodle platform;
– a second online test to verify skills. We developed on the Moodle platform a second Technical Drawing Test (TDT2) with new graphic questions with closed answers, both single and multiple choice, replacing the open-ended graphic questions of the previous exam mode and keeping the same contents;
– a third online test to verify competencies. We maintained the graphic test with the hand-drawing of a component extracted from an assembly, but the assignment of the theme and the final delivery of the work by the students were done through the e-learning platform.

In this way, we have maintained a modular exam structure based on knowledge, skills, and competencies, and implemented a sustainable exam in a single half-day. The immediate evaluation, displayed at the end of each online test, allows the teacher to immediately filter the students who passed the first two tests (related to knowledge and skills) and identify those who can take the third test (related to competencies). In this way, the number of papers to manage and then correct is limited and the management of the exam seems efficient [5, 6, 10, 11].


The fundamental point of the transition between the traditional and the online exam was therefore the development of the test with closed-ended graphical questions to replace the previous second test with open-ended graphical questions.

2 Technical Drawing Methods to Test Skills

2.1 Graphical Essay Questions (GEQ)

The TDEG allowed for the definition of topics and different levels of learning in Technical Drawing. Starting from 2014, specific graphical questions on each topic have been developed to assess skills acquisition (GEQ). The chosen topics were the following: i) multi-view orthographic projections, ii) cross-sectional views, iii) dimensioning, iv) tolerancing, v) representation of standardized/unified elements, vi) dimensional tolerance applications. Table 1 shows each question and the specific skill assessed through it.

Table 1. Questions of the test and skills to be assessed.

# | Question | Skill to verify
Q1 | Given the axonometry of a piece, draw the view indicated by the arrow | Skill of drawing views using orthographic projections
Q2 | Complete the representation with the cross section indicated in the given view | Skill of drawing cross-sectional views
Q3 | Insert the dimensions of the part in the already drawn views/sections | Skill of applying dimensioning rules and standards
Q4 | Given a block diagram, choose the appropriate tolerances to achieve the required backlash/interference | Skill to budget a given tolerance stack
Q5 | Complete the representation of standardized/unified elements, such as threaded connections, tab connections, dowel connections, etc. | Skill to conventionally represent mechanical parts
Q6 | Given an elementary assembly and the drawing of individual parts, enter only the dimensions and tolerances necessary for proper assembly and operation | Skill to allocate functional dimensional tolerances


The test questions are administered one at a time and the candidate has 15 min to complete each single graphic response. In this way, the test is sequential, and the student is required to meet the timelines set by the teacher. Each answer can be scored from 0 to 5 points, and those who score at least 16 out of 30 points are admitted to the next test. This type of paper-based testing was used until February 2020.

2.2 Second Technical Drawing Test (TDT2)

In the spring of 2020, university exams began to be administered remotely, due to the pandemic situation. This situation encouraged us to maximize the use of the university's Moodle e-learning platform to conduct the exams. A first online test (TDT) with closed-ended questions, to verify theoretical knowledge, had already been online on the Moodle platform since 2014. A second online test (TDT2) was developed on the Moodle platform to verify skills, which are the second step in the learning process according to the TDEG: this type of assessment was previously achieved through a paper-based, open-ended test. The skills to be assessed and the topics covered are the same in the two tests, but since the new test is online, the form of the questions, as well as the possible responses, have been modified. For example, to verify that the dimensioning ability has been reached, in the paper test the candidate was given an assembly, useful to establish the functionality of the considered component, together with its correct representation in views and sections; the candidate had to add the dimensions to the drawing. In the online closed-ended test, this ability is measured by providing the candidate with the same component, correctly represented in views and sections, dimensioned in different ways, some correct and others wrong: the candidate must be able to recognize them.

The TDT2 is composed of ten closed-ended graphic questions to be completed in a maximum time of thirty minutes, at the end of which the student sees the score obtained. The TDT2 question topics, listed in Table 2, replicate and complement those of the paper test, with additional questions on understanding a simple assembly and on the correct indication of roughness in drawings. The TDT2 is a more structured test than the TDT, and its questions combine different scoring schemes: some questions require only one correct answer, while others require two or more correct answers. For each TDT2 question, a correct answer is counted as one point and a completely incorrect answer as −0.25 points, while the score of multiple-choice questions with partially correct answers depends on the combination of correct answers and errors or omissions; in particular, the latter type of question yields a score from 0 to 1, without penalty in case of completely wrong answers. In line with the paper version of the second test, the TDT2 also has a sequential development and does not allow navigation between questions. Lasting thirty minutes in total, on average the student has three minutes to answer each question. Those who score at least 5.33 out of 10 points (corresponding to 16 out of 30) are admitted to the next test.
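To make these scoring rules concrete, the following minimal sketch (our illustration, not the actual Moodle grading code; the partial-credit rule for multi-answer items and the example answer pattern are simplified assumptions) computes a TDT2-style total and applies the 5.33/10 admission threshold:

```python
# Illustrative sketch of the TDT2-style scoring rules described above.
# NOTE: this is not Moodle's grading code; the partial-credit formula for
# multi-answer questions is a simplified assumption.

def score_single_answer(is_correct: bool) -> float:
    """Single-answer question: 1 point if correct, -0.25 if wrong."""
    return 1.0 if is_correct else -0.25

def score_multi_answer(n_correct_selected: int, n_wrong_selected: int,
                       n_correct_total: int) -> float:
    """Multi-answer question: fraction of correct options selected, reduced by
    wrong selections, clamped to [0, 1] (no negative marking)."""
    raw = (n_correct_selected - n_wrong_selected) / n_correct_total
    return max(0.0, min(1.0, raw))

def admitted(total_score: float, threshold: float = 5.33) -> bool:
    """Pass rule: at least 5.33/10 (i.e. 16/30) to access the next test."""
    return total_score >= threshold

if __name__ == "__main__":
    # Hypothetical outcome: 6 single-answer items and 4 multi-answer items.
    single = [True, True, False, True, True, False]
    multi = [(2, 0, 2), (1, 1, 2), (3, 0, 3), (0, 1, 2)]
    total = sum(score_single_answer(ok) for ok in single) + \
            sum(score_multi_answer(*m) for m in multi)
    print(f"total = {total:.2f}/10, admitted = {admitted(total)}")
```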

Table 2. Sample TDT2 questions and categories of skills to be tested.

# | Question | Note
Q1 (Views) | Given the axonometry of a piece, identify the correct orthogonal projection view(s) | One or more correct answers
Q2 (Cross sections) | Given the trace of a cross section on an orthogonal projection view, identify the correct cross section representation | One correct answer
Q3 (Dimensioning) | Choose the correct dimensioning of a component | One or more correct answers
Q4 (Analysis of tolerance stack) | Analysis of a tolerance stack | One correct answer
Q5 (Synthesis of tolerance stack) | Choose the proper tolerances to ensure the required backlash | One correct answer
Q6 (Assembly parts) | Identify the number of components in the following assembly | One correct answer
Q7 (Assembly indications) | Given an assembly, identify representations that contain correct indications | One or more correct answers
Q8 (Standard parts representation) | Choose the correct representation of standardized/unified elements, such as threaded connections, tab connections, dowel connections, etc. | One correct answer
Q9 (Dimensional tolerances) | Correctly specify dimensional tolerances in those dimensions that require them | One correct answer
Q10 (Roughness) | Given an assembly and a dimensioned part drawing, find the correct roughness indication | One correct answer

3 Results and Discussion

3.1 Moodle Test Statistics: TDT2

Based on the examination tests administered as of June 2020, the TDT2 questions were analyzed and validated through statistical analyses [8–11] of their discriminative efficiency, obtained from the Moodle platform. In particular, the answers given in the first exam session by the Industrial Automation class (AUTLT), which was taking the exam for the first time, were analyzed. Two synthetic indexes [12], extracted directly from Moodle, were considered in this study: (i) the Facility Index (F), which is the percentage of students who answered the question correctly; it is defined as the average score on the item, expressed as a percentage [13]. The higher the facility index, the easier the question is (for a certain cohort of students). The F index can be interpreted as in Table 3.


(ii) the Discriminative Efficiency (DE) [13], which is the correlation between the score on a single question and the score obtained in the whole test.
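As an indication of how such indexes can be reproduced outside Moodle, the sketch below computes a facility index and a simple discrimination measure from a students-by-questions score matrix. It is a simplified approximation of our own: Moodle's actual statistics report uses more elaborate formulas (see [13]), so the numbers will not match the platform exactly, and the data here are synthetic.

```python
# Simplified illustration (not Moodle's exact formulas, see [13]) of the two
# indexes used in this section, computed from a students x questions matrix.
import numpy as np

def facility_index(scores: np.ndarray, max_score: float = 1.0) -> np.ndarray:
    """Average score on each item, expressed as a percentage of the maximum."""
    return 100.0 * scores.mean(axis=0) / max_score

def discrimination(scores: np.ndarray) -> np.ndarray:
    """Correlation between each question score and the total score on the rest
    of the test (a rough stand-in for Moodle's discriminative efficiency)."""
    n_questions = scores.shape[1]
    out = np.empty(n_questions)
    for q in range(n_questions):
        rest = scores.sum(axis=1) - scores[:, q]
        out[q] = 100.0 * np.corrcoef(scores[:, q], rest)[0, 1]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic data: 67 students, 10 questions, per-question scores in [0, 1].
    data = rng.random((67, 10)).round(2)
    print("F  =", facility_index(data).round(1))
    print("DE =", discrimination(data).round(1))
```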

Table 3. Interpretation of the facility index [14, 15].

F | Interpretation
5 or less | Extremely difficult or something wrong with the question
6–10 | Very difficult
11–20 | Difficult
21–34 | Moderately difficult
35–65 | About right for the average student
66–80 | Fairly easy
81–89 | Easy
90–94 | Very easy
95–100 | Extremely easy

This statistic attempts to estimate how good the discrimination index is relative to the difficulty of the question. An item which is very easy or very difficult cannot discriminate between students of different ability, because most of them get the same score on that question. Maximum discrimination requires a facility index in the range 30%–70% (although such a value is no guarantee of a high discrimination index). The discrimination efficiency will very rarely approach 100%, but values of more than 50% should be achievable. Lower values indicate that the question is not nearly as effective at discriminating between students of different ability as it might be, and is therefore not a particularly good question [14, 15].

Table 4. Indexes calculated for each category of questions [14, 15].

# | Facility index | Discriminative efficiency
Q1 | 85.07% | 10.75%
Q2 | 42.91% | 29.01%
Q3 | 59.95% | 52.01%
Q4 | 18.28% | 38.72%
Q5 | 58.96% | 49.40%
Q6 | 25.37% | 15.44%
Q7 | 50.25% | 27.01%
Q8 | 68.66% | 42.94%
Q9 | 55.22% | 12.23%
Q10 | 57.84% | 42.79%


Table 4 shows the two indexes calculated for each category of questions. There are ten categories, and each category has an average of five questions. Overall, the values of the indexes shown in Table 4 and plotted in Fig. 1 reveal that the questions proposed in the TDT2 are statistically valid, even though they present some specific critical points. The criticalities are as follows: (i) questions in category Q1 are answered positively by a high number of students (85.07%) and have a low discriminative efficiency (10.75%); (ii) questions Q6 and Q9 have a low discriminative efficiency (15.44% and 12.23%, respectively). These indications can be used to identify, in each category, the questions to be modified because they are not sufficiently discriminating, that is, they do not allow candidates with different skills to be distinguished effectively.

Fig. 1. Comparison between facility index and discriminative efficiency, for each category of questions.
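A quick post-processing check like the following (our illustration, not part of the Moodle report; the 20% cut-off is an arbitrary value chosen only for this example) reproduces the criticalities noted above by flagging the categories whose discriminative efficiency is low:

```python
# Flag TDT2 question categories with low discriminative efficiency,
# using the values reported in Table 4 (cut-off chosen only for illustration).
TABLE_4 = {  # category: (facility index %, discriminative efficiency %)
    "Q1": (85.07, 10.75), "Q2": (42.91, 29.01), "Q3": (59.95, 52.01),
    "Q4": (18.28, 38.72), "Q5": (58.96, 49.40), "Q6": (25.37, 15.44),
    "Q7": (50.25, 27.01), "Q8": (68.66, 42.94), "Q9": (55.22, 12.23),
    "Q10": (57.84, 42.79),
}

DE_CUTOFF = 20.0  # illustrative threshold, well below the ~50% considered achievable

weak = [q for q, (f, de) in TABLE_4.items() if de < DE_CUTOFF]
print("Categories to revise:", weak)  # -> ['Q1', 'Q6', 'Q9']
```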

3.2 Paper and Online Test Results Comparison

To verify the results and adequacy of the TDT2, the performance of students who took the exam for the first time using the online TDT2 as of June 2020 was compared with the performance of students who took the exam for the first time in June 2019 using the previous in-person test based on open-ended graphical questions. We chose to compare two classes from the same undergraduate course, which attended in different years (pre-pandemic and 2020, the first year of the pandemic): these classes had the same lecturer, and the method used to deliver lessons and exercises was the same both in person and online. Lectures and exercises alternated. In person, the teacher carried out the exercises on the blackboard, while at a distance he used a graphics tablet. Both in person and at a distance, the teacher, after explaining the key points of each exercise, left time for the students to solve them independently, answering any questions and then proposing a solution scheme.

Table 5 shows the results of the class of students in the Industrial Automation course (AUTLT), who took the online TDT2 in 2020 on the Moodle platform, and the results of the


corresponding students in the class of 2019, who took the second test in person, in paper form.

Table 5. Comparison between the test based on GEQ and the TDT2.

Cohort | Number of candidates | Mean | Standard deviation | Test passed
AUTLT 2018-19 | 80 | 18.7 | 5.7 | 70%
AUTLT 2019-20 | 67 | 15.7 | 6.7 | 51%

Table 5 shows that the TDT2 is more selective than the GEQ-based paper test. A more accurate analysis of this difference can only be done over time; indeed, it must be considered that GEQ tests had been known to the students since 2014, while, in June 2020, candidates were tested through the TDT2 for the first time ever. In addition, unlike the TDT2, the paper GEQ test assessment does not include negative marks for single wrong answers.

Fig. 2. Comparison between results, in tenths, of the in-person paper GEQ test (in light blue) (a) and results of the online TDT2 test (in orange) (b).


Over time there has been an evolution: 76% of the candidates who took the online TDT2 in the June/July 2021 exams passed it, and the results are closer to those of the paper GEQ test. Figure 2 shows the comparison between the scores of the cohorts in Table 5. The scores in Fig. 2(a) have a well-defined maximum, corresponding to the grade of 7. The scores in Fig. 2(b) are more distributed over the whole scale of values, in agreement with the data shown in Table 5. Certainly, the second test in paper form represents a well-established way to ascertain students' TD skills; on the other hand, the paper test requires a certain amount of time for correction, which does not allow the third test to be held immediately afterwards, thus extending both the correction time and the duration of the entire exam. The online TDT2 has the great advantage of providing an immediate evaluation of the test, allowing the graphical test to follow in rapid succession and the whole examination to be concentrated in only half a day. In addition, the management of the test is much more streamlined (e.g., it is not necessary to prepare photocopies of the themes of the exercises, …). On the other hand, based on the Moodle statistical evaluations, some single questions of the TDT2 will have to be replaced because they are not very representative of the skills achieved by the students.

4 Conclusions and Future Developments

A new graphical online closed-answer test has been developed to assess drawing skills in agreement with the TDEG. The quality of the questions and their suitability for distinguishing between candidates with different abilities were assessed through statistical indexes available on the Moodle platform: the Facility Index and the Discriminative Efficiency. Moreover, the scores obtained by students of different cohorts, who were tested through the in-person paper GEQ test and the online TDT2, respectively, were considered and compared. From the analyses of the indexes and the comparisons between the scores of the different tests, it emerges that the TDT2 is overall a reliable online test that is representative of students' acquired skills, although the analysis highlights some individual questions that need to be improved or replaced because they were found to be too easy or too difficult. In the future, we will analyze and compare the tests question by question, to eliminate or reduce critical aspects. We will also investigate some statistical tails that appeared during the application of the full test, consisting of students who repeatedly pass the PC-based tests but not the graphical test or, vice versa, students who repeatedly fail the PC-based test even though they have a sufficient average level of TD skill.

References

1. Metraglia, R., Baronio, G., Villa, V.: Learning levels in technical drawing education: proposal for an assessment grid based on the European Qualifications Framework (EQF). In: 18th International Conference on Engineering Design - Impacting Society Through Engineering Design, ICED 2011, vol. 8, pp. 161–172 (2011)


2. Baronio, G., Bodini, I., Paderno, D., Uberti, S., Villa, V.: Objective evaluation of geometrical product specification/geometrical dimensioning and tolerancing basic skills. In: Cavas-Martínez, F., Sanz-Adan, F., Morer Camo, P., Lostado Lorza, R., Santamaría Peña, J. (eds.) INGEGRAF 2019. LNME, pp. 305–312. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-41200-5_33
3. Villa, V., Motyl, B., Paderno, D., Baronio, G.: TDEG based framework and tools for innovation in teaching technical drawing: the example of LaMoo project. Comput. Appl. Eng. Educ. 26(5), 1293–1305 (2018)
4. Yeung, K.L., Carpenter, S.K., Corral, D.: A comprehensive review of educational technology on objective learning outcomes in academic contexts. Educ. Psychol. Rev. 33(4), 1583–1630 (2021). https://doi.org/10.1007/s10648-020-09592-4
5. Raffaeli, R., Cicconi, P., Mandorli, F.: A comparative assessment of learning outcomes in online vs traditional teaching of engineering drawing. In: Burgos, D., et al. (eds.) HELMeTO 2019. CCIS, vol. 1091, pp. 149–162. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31284-8_12
6. Violante, M.G., Moos, S., Vezzetti, E.: A methodology for supporting the design of a learning outcomes-based formative assessment: the engineering drawing case study. Eur. J. Eng. Educ. 45(2), 305–327 (2020)
7. Gamage, S.H.P.W., Ayres, J.R., Behrend, M.B.: A systematic review on trends in using Moodle for teaching and learning. Int. J. STEM Educ. 9(1), 1–24 (2022). https://doi.org/10.1186/s40594-021-00323-x
8. Kika, A., Leka, L., Loka, E.: Adaptive learning and testing in Moodle. In: CEUR Workshop Proceedings, vol. 2910, pp. 1–10 (2021)
9. Muller, M.T., Togni, A., Thilgen, C.: Evaluation of the chemistry knowledge of students entering the ETH Zurich with a Moodle quiz. Chimia 75(1–2), 89–97 (2021)
10. Dintu, S., Suletea, A.: On-line evaluation in graphic engineering disciplines. In: Proceedings of the 11th International Conference on Electromechanical and Energy Systems, SIELMEN 2021, pp. 121–124 (2021)
11. Albano, G., Telloni, A.I.: From traditional exams to closed-ended quizzes: an exploration towards an effective assessment in mathematics at university level. J. E-Learn. Knowl. Soc. 17(2), 45–55 (2021)
12. https://docs.moodle.org/311/en/Quiz_statistics_report. Accessed 14 Feb 2022
13. https://docs.moodle.org/dev/Quiz_statistics_calculations. Accessed 14 Feb 2022
14. https://docs.moodle.org/dev/Quiz_report_statistics. Accessed 14 Feb 2022
15. Butcher, P.: Brief guide to the Moodle iCMA Reports. The Open University. https://wiki.umontreal.ca/download/attachments/92832287/Brief_Guide_to_iCMA_reports.pdf?version=1&modificationDate=1399917582000&api=v2. Accessed 14 Feb 2022

Techniques for the Representation of the Application of Historical-Archaeological Evidence Scales in Heritage Assets

Irene Cáceres-Criado1, Diego Francisco García-Molina2, Rafael Enrique Hidalgo-Fernández3, and Paula Triviño-Tarradas3(B)

1 Doctoral Program in Agricultural, Food, Forestry and Rural Development Engineering, University of Córdoba, Córdoba, Spain
[email protected]
2 Department of Graphic Engineering, Design and Engineering Projects of the University of Jaen, Jaen, Spain
[email protected]
3 Department of Graphic and Geomatics Engineering, University of Cordoba, Córdoba, Spain
{ig1hifer,ig2trtap}@uco.es

Abstract. At the beginning of the discipline, archaeology as a scientific subject was understood through archaeological drawing, that is, through the graphic representation of heritage. The use of new technologies in the field of archaeology has brought about a revolution in the representation of heritage assets. It has also led to the creation of multidisciplinary teams who transfer the information of the archaeological record to new representation techniques and who can solve specific problems in the scope of heritage. Virtual archaeology contributes to the motivation and interest in learning about archaeological spaces. In this context, new technologies are a relevant and more attractive tool that eases access to the knowledge of past societies, facilitating the didactic and informational mission and the clarity and comprehension of the contents. This study is focused on the representation of the degree of historical-archaeological evidence of a heritage asset, through the use of the three evidence scale propositions that exist nowadays. The evidence scales will be compared, with the aim of obtaining accurate results of their application and identifying their suitability. Furthermore, compliance with the scientific quality requirements established for this kind of work will be guaranteed. Currently, it is important to disseminate the existing heritage, relying on emerging technologies. In the same way as people's lifestyles change, the ways of acquiring information must also change. This study supports the creation of a unique language for the dissemination of archaeological remains relying on emerging technologies.

Keywords: Historical-archaeological evidence scale · Evidence level · Reconstructive units · Virtual reconstruction



1 Introduction

Representation techniques in archaeology have existed since the beginning of the discipline. They have advanced alongside the growing global interest in cultural needs. To cover what society demands in terms of heritage, there is a fundamental tool: its dissemination [1]. Archaeology does not only consist of recovering the heritage assets of past societies, but also of building a historical discourse based on them. This requires archaeological research, which currently relies on digital technologies for the interpretation of archaeological remains. Moreover, these technologies play a key role in the conservation, preservation and dissemination of cultural assets [2]. The dissemination of heritage and archaeological research can be included in education, which is achieved with the representation of heritage using digital technologies.

This study is focused on the comparison of the three existing propositions of historical-archaeological evidence scales, which show the veracity of virtual reconstructions. A historical-archaeological evidence scale is a colour scale associated with a 3D virtual reconstruction. Its aim is to indicate the levels of evidence that make up such a reconstruction, in order to allow the observer to identify which parts have greater or lesser historical-archaeological evidence. The use of the historical-archaeological evidence scale dates back to around 2014; since then, only six scientific research papers have been published in this field.

2 Objectives

The aim of the present research was to represent the degree of historical-archaeological evidence of a heritage asset through the use of the three existing evidence scale propositions. These scales were compared to obtain accurate results of their application, identifying the suitability of each proposition.

3 Materials and Software

The extraction of data from the existing historical-archaeological scales is fundamental for their comparison. In the literature about this type of graphical representation, it was observed that not all propositions have been used to support virtual reconstructions. Moreover, the two propositions used in scientific research appear in a different number of studies.

Project Byzantium 1200 was developed to recreate the city of Byzantium as it was in the year 1200 [3]. Patrick Clifford, Jan Kostenec and Albrecht Berger, with the use of a colour graduation, aimed to complement and support the virtual reconstructions with the representation of the degree of historical-archaeological evidence. The scale proposed by these authors has ten levels of evidence, each of them associated with one colour, with warmer and colder hues indicating greater and lesser evidence, respectively (Table 1).

Table 1. Historical-archaeological evidence scale of Project Byzantium 1200 [3].

Level of evidence | Definition
1 | Exist in its original form
2 | Partially or with modifications
3 | Photographs or plans available
4 | Archaeological information
5 | Detailed graphical evidence
6 | Simple graphical evidence
7 | Textual and comparative evidence
8 | Textual evidence
9 | Based on similar structures
10 | Imagination

This proposition has been used by other authors. P. Aparicio and C. Figueiredo applied it for the first time to verify its effectiveness [4]. As a result, they established a fixed colour code, which relates the historical-archaeological evidence to a level of veracity. Colours are associated with a number (from 1 to 10), corresponding to increasing levels of evidence from lowest to highest (Table 2).

Table 2. Historical-archaeological evidence scale indicated by P. Aparicio and C. Figueiredo [4].

Level of evidence | Definition
1 | Imagination
2 | Conjecture based on similar structures
3 | Basic textual reference
4 | Descriptive textual reference
5 | Simple graphical reference
6 | Detailed graphical reference
7 | Basic archaeological information or simple base plans
8 | Strong archaeological and documental evidence in photographs and detailed plans
9 | Still existing (or partially existing) with modifications
10 | Still existing in original form

The last proposition, which is more revolutionary, was developed by R. Ortiz, E. León and R. E. Hidalgo [5]. Unlike the previous propositions, it has eight different levels of historical-archaeological evidence and, in addition, it modifies the colour range and gradation of the evidence levels. They obtained a scale with a colour range of dark greens to browns for eight levels of historical-archaeological evidence (Table 3).


Table 3. Historical-archaeological evidence scale indicated by R. Ortiz, E. León and R. E. Hidalgo [5].

Level of evidence | Definition
1 | Still existing in original form
2 | Still existing with modifications
3 | Detailed graphical evidence
4 | Slight graphical evidence
5 | Archaeological hypothesis
6 | Textual evidence
7 | Based on similar structures
8 | Based on historical context, nature and culture

After gathering the information on the existing historical-archaeological evidence scale propositions, we selected the heritage asset on which they would be applied for their comparison. In this case, we selected an area of a heritage asset: the cella penuaria (pantry) and culina (kitchen) of a Roman domus. A virtual reconstruction of these spaces, as well as of the entire building, had been previously performed. The computer programmes 3DReshaper, SketchUp and LumenRT were used to carry out the previous virtual reconstruction. However, to implement the evidence scale, Blender 2.90 was used; therefore, the 3D model had to be imported into this software in order to conduct this study.
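As an indication of how such a colour coding can be applied in Blender, the short script below is a minimal sketch using Blender's Python API (bpy): it creates one flat-coloured material per evidence level and assigns it to the objects of the corresponding reconstructive units. The object names, colour values and level assignments are illustrative assumptions, not the authors' actual pipeline, and the script must be run from inside Blender.

```python
# Minimal Blender (bpy) sketch: assign one material per evidence level to the
# objects of each reconstructive unit. Names, colours and the RU -> level map
# are illustrative assumptions, not the pipeline actually used by the authors.
import bpy

# Example colours (RGBA) for a hypothetical evidence scale.
LEVEL_COLORS = {
    1: (0.10, 0.45, 0.10, 1.0),
    2: (0.35, 0.60, 0.20, 1.0),
    3: (0.80, 0.75, 0.20, 1.0),
    4: (0.80, 0.45, 0.15, 1.0),
    5: (0.60, 0.25, 0.10, 1.0),
}

# Hypothetical mapping: object-name prefix of each RU -> evidence level.
RU_LEVELS = {"RU_1_pavement": 5, "RU_2_wall_base": 1, "RU_3_wall_elevation": 3}

# One material per evidence level.
materials = {}
for level, rgba in LEVEL_COLORS.items():
    mat = bpy.data.materials.new(name=f"Evidence_level_{level}")
    mat.diffuse_color = rgba  # flat viewport colour
    materials[level] = mat

# Assign the evidence-level material to every mesh object of each RU.
for obj in bpy.data.objects:
    if obj.type != 'MESH':
        continue
    for prefix, level in RU_LEVELS.items():
        if obj.name.startswith(prefix):
            obj.data.materials.clear()
            obj.data.materials.append(materials[level])
```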

4 Methodology

The archaeological site of Torreparedones is located in the countryside of Cordoba, on the northern border of the towns of Castro del Río and Baena. The studies carried out in this place indicate that it was inhabited from the 4th millennium BC to the 16th century. In the 2015–2016 excavation campaign, the excavation and documentation of the domus under study took place [6]. The recovered archaeological remains have three construction phases: the late republican Roman phase, the high imperial Roman phase and the medieval-modern phase. The 3D virtual reconstruction carried out in this heritage asset focuses on the second construction phase, with the bread oven in operation [7].

Prior to the implementation of the evidence scales, the reconstructive units of the previous reconstruction were selected, as well as the evidence levels that correspond to each reconstructive unit (Fig. 1). The latter help to record more accurately the historical-archaeological characteristics of the elements that are present in virtual reconstructions. Each reconstructive unit (RU) is associated with a degree of veracity, depending on the level of evidence corresponding to the element identified with the reconstructive unit number.

Stage 1. Selection of the 3D virtual model on which to implement the scale of historical-archaeological evidence: in this case, the 3D virtual reconstruction of the 'Baker's House' in Torreparedones (the kitchen area located in this house).

Stage 2. Selection of the reconstructive units in the 3D virtual reconstruction: identification of the reconstructive units (RU) composing the virtual reconstruction and assignment of an RU number.

Stage 3. Selection of the colours corresponding to the levels of evidence: the level of veracity of each RU is associated with the colour corresponding to its level of evidence.

Stage 4. Identification of the levels of evidence in the virtual model: allocation of the colour of each level of evidence to each RU in the 3D model.

Fig. 1. Implementation phases of the historical-archaeological evidence scale in the kitchen area in the Baker's house in Torreparedones.

The area selected for the implementation of the historical-archaeological evidence scales has seven reconstructive units, corresponding to:

• Pavement
• Base of the walls
• Elevation of the walls
• Bay of access to the culina
• Preserved remains of a masonry structure
• Elevation of the masonry structure
• Roman furniture associated with these spaces


The evidence levels associated with each reconstructive unit were established according to the elements used for their virtual reconstruction. For the pavement, the level of evidence is associated with archaeological information and hypotheses: the pavements of these rooms were not preserved, although it is possible to estimate what they would have been like, based on remains documented in other areas of the building. The walls that make up the spaces have two levels of evidence: one for the foundations and one for the elevation. Regarding the foundations, that is, the base of the walls, since they were preserved, the greatest degree of veracity or evidence is assigned to them. The reconstruction of their elevation is based on the work of Vitruvius, that is, on proportions, since the height is not preserved in its entirety. On its northern side, the space identified as the possible cella penuaria presents an access to the culina, which is 0.97 m wide. Since the bay does not preserve its height, it was assigned a "partially existing" level of evidence. In the culina, the remains of a masonry structure were documented; therefore, the preserved remains are attributed the greatest level of evidence. For the reconstruction of such a structure, we used the textual references of the archaeological excavation report, which corresponded to the evidence level of the structure elevation. Lastly, for a better understanding of the spaces, Roman furniture associated with them was added. Such elements, added simply for pedagogical reasons, have the lowest level of evidence.

Fig. 2. Application of the scale of colours proposed in Project Byzantium 1200.


Fig. 3. Application of the scale of colours proposed by P. Aparicio and C. Figueiredo.

Fig. 4. Application of the scale of colours proposed by R. Ortiz, E. León and R. E. Hidalgo.

5 Results

As previously mentioned, for the implementation of the historical-archaeological evidence scales, a thorough documentation process was conducted to obtain the ideal materials with which to perform the virtual representation. From that search, we obtained results regarding the number of scientific studies found. The proposition of Project Byzantium has not been used in any scientific study. The proposition of P. Aparicio and C. Figueiredo appeared in six studies, four of which were conducted by the mentioned authors to support virtual reconstructions [4, 8, 9, 11], and, in the fifth and sixth studies, different authors


used the proposition to support the representation of the degree of evidence of a heritage asset [10, 12]. The proposition of P. Aparicio and C. Figueiredo was first used in the year 2016 [8], and was last applied in the year 2021 [9]. Lastly, it is worth highlighting that there is only one scientific paper about the most recent evidence scale, which presented the scale to the scientific community [5].

The application of the existing evidence scales in the area of the kitchen of the Roman building led to the creation of a table that shows the reconstructive unit number, the levels of evidence of each proposition, and the name of the reconstructed elements (Table 4).

Table 4. Identification of the reconstructive units and the evidence levels of the three scale propositions.

RU Nº | Evidence level, Project Byzantium proposition | Evidence level, P. Aparicio and C. Figueiredo proposition | Evidence level, R. Ortiz, E. León and R. E. Hidalgo proposition | Name
1 | 4 | 7 | 5 | Pavement
2 | 1 | 10 | 1 | Base of the walls
3 | 3 | 8 | 6 | Elevation of the walls
4 | 2 | 9 | 2 | Access bay
5 | 1 | 10 | 1 | Base of the masonry structure
6 | 8 | 3 | 6 | Elevation of the masonry structure
7 | 10 | 1 | 8 | Roman furniture

Table 4 shows two significant aspects: the colour and number of the evidence levels. Firstly, regarding colour, it is observed that the first and second propositions have quite similar colours and the same colour range, but different hues, whereas the third proposition has a totally different colour range. Secondly, the evidence levels of the first and third propositions are numbered from greater to lesser veracity, with level 1 corresponding to the existing elements, whereas the last level (level 10 for the first proposition and level 8 for the third proposition) refers to elements or structures based on the historical, cultural and natural context. However, in the second proposition, the evidence levels are numbered from lesser to greater veracity, with level 1 corresponding to elements based on the historical, cultural and natural context, whereas level 10 refers to the existing elements. After the application of the evidence scales, considering the obtained infographics (Fig. 2, 3 and 4), it is worth mentioning that there are no differences between the first and second propositions, except for the level identified as elements or structures based on the historical, cultural and natural context. The colours of both propositions are clearly differentiated from each other. In the third proposition, evidence levels 5 and 6 are not differentiated, since they have very similar colours.
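A small check over the Table 4 data makes this opposite numbering explicit: for the two ten-level scales used in this case study, each Aparicio–Figueiredo level is simply the Byzantium 1200 level mirrored (level ↔ 11 − level). The snippet below is our illustration and only encodes the values already reported in Table 4:

```python
# Evidence levels per reconstructive unit, as reported in Table 4:
# (Byzantium 1200, Aparicio & Figueiredo, Ortiz/Leon/Hidalgo)
TABLE_4 = {
    1: (4, 7, 5),    # Pavement
    2: (1, 10, 1),   # Base of the walls
    3: (3, 8, 6),    # Elevation of the walls
    4: (2, 9, 2),    # Access bay
    5: (1, 10, 1),   # Base of the masonry structure
    6: (8, 3, 6),    # Elevation of the masonry structure
    7: (10, 1, 8),   # Roman furniture
}

# The two ten-level scales are numbered in opposite directions:
# for every RU in this case study, the A&F level equals 11 - Byzantium level.
assert all(af == 11 - byz for byz, af, _ in TABLE_4.values())
print("Byzantium and Aparicio-Figueiredo levels mirror each other (11 - level).")
```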


Finally, it is worth mentioning that the existing proposals for scales of historical-archaeological evidence comply with the theoretical framework of the Seville Charter and the principles of interdisciplinarity, purpose, complementarity, authenticity, historical rigor, efficiency, scientific transparency, and training and evaluation [13].

6 Conclusions

The number of elements in the virtual reconstruction is proportional to the number of reconstructive units. The graphic representation of the historical-archaeological evidence levels must differ visually, in order to easily identify and interpret their differences in terms of veracity. Therefore, the present study considers the proposition of P. Aparicio and C. Figueiredo the most suitable, for several reasons. Firstly, this proposition is a more recent and improved version of the scale proposed for Project Byzantium 1200. Likewise, its colours are valid for the understanding and differentiation of the reconstructed elements. Moreover, of all the existing scales, this is the one most frequently used to support virtual reconstructions. In the third proposition, there are two evidence levels that cannot be clearly differentiated. It is believed that this is not valid for this type of representation and that there would be more undifferentiated levels if the evidence scale were applied to the building in its entirety.

An advance in this field, that is, in the graphic representation of the veracity of the degree of historical-archaeological evidence, could be the discrimination by colours that the general user employs in his/her everyday life, interpreting the colours according to an everyday symbology: red, yellow and green are associated with danger, precaution and guarantee, respectively. The establishment of these three main colours would lead to the creation of a unique language that would be easy to understand for the general public. Observers would very easily identify the evidence levels that correspond to all the reconstructive units reflected in the virtual reconstructions.

The use of the historical-archaeological evidence scale is a great opportunity to add value to archaeological research. With it, users can observe the representation of heritage assets in a specific period prior to the current one and identify the degree of veracity of each of the parts of the virtual reconstruction.
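One possible implementation of this idea, sketched below purely as an illustration (the three bands and their thresholds are our assumption, not a scheme proposed in the cited literature), would collapse a ten-level scale such as that of Aparicio and Figueiredo into the three everyday colours:

```python
# Illustrative only: collapse a 10-level evidence scale (1 = imagination,
# 10 = still existing in original form) into a red/yellow/green code.
# The band boundaries are assumptions chosen for this example.

def traffic_light(level: int) -> str:
    """Map an Aparicio-Figueiredo evidence level (1-10) to a colour band."""
    if not 1 <= level <= 10:
        raise ValueError("level must be between 1 and 10")
    if level <= 3:       # imagination, analogy, basic textual reference
        return "red"     # caution: weak evidence
    if level <= 7:       # graphical references, basic archaeological data
        return "yellow"  # intermediate evidence
    return "green"       # strong archaeological evidence or preserved remains

print([traffic_light(lv) for lv in range(1, 11)])
```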

References

1. Andrioti, N., Kanetaki, E., Drinia, H., Kanetaki, Z., Stefanis, A.: Identifying the industrial cultural heritage of Athens, Greece, through digital applications. Heritage 4, 3113–3125 (2021)
2. Staropoli, L., Lanza, A., Avido, D.N., Herrera, V.: La construcción del patrimonio virtual en la investigación, conservación y difusión del pasado. In: XX Congreso Nacional de Arqueología Argentina. IDACOR-CONICET, Universidad Nacional de Córdoba (2019)
3. Byzantium 1200. http://www.byzantium1200.com/. Accessed 11 Jan 2022
4. Aparicio, P., Figueiredo, C.: El Grado De Evidencia Histórico-Arqueológica De Las Reconstrucciones Virtuales: Hacia Una Escala De Representación Gráfica. Revista Otarq: Otras arqueologías, pp. 235–247 (2017)


5. Ortiz, R., León, E., Hidalgo, R.E.: Proposal for the improvement and modification in the scale of evidence for virtual reconstruction of the cultural heritage: a first approach in the mosque-cathedral and the fluvial landscape of Cordoba. J. Cult. Herit. 30, 10–15 (2018)
6. Morena, J.A., Márquez, C., Tristell, F.J., Fernández, L., Moreno, A.: BAENA arqueológica. Boletín informativo del Museo Histórico Municipal de Baena, pp. 2–4 (2016)
7. Cáceres-Criado, I., Triviño-Tarradas, P., Valderrama-Zafra, J.M., García-Molina, D.F.: Digital preservation and virtual 3D reconstruction of "The Baker's house" in the archaeological site of Torreparedones (Baena, Cordoba, Spain). Digit. Appl. Archaeol. Cult. Herit. 24, 1–11 (2022)
8. Aparicio, P.: La Torre Almenara De San García, Algeciras (S. XVII-XVIII). Recuperación Virtual De Una Estructura Militar Gracias A La Tecnología. Revista Otarq: Otras arqueologías, pp. 325–334 (2016)
9. Aparicio, P., García Álvarez, A., Muñiz, I., Fernández, N.: Reconstrucción virtual en 3D del castillo de Gauzón (Castrillón, Principado de Asturias). Virtual Archaeol. Rev. 12(25), 158–176 (2021)
10. García, J., Gallego, D.: La arqueología de órdenes militares en Castilla-La Mancha y la reconstrucción virtual de su patrimonio. Virtual Archaeol. Rev. 9(23), 76–88 (2018)
11. Rodríguez-Hernández, J., Álvarez-Sanchís, J.R., Aparicio-Resco, P., Maté-González, M.Á., Ruiz-Zapatero, G.: Reconstrucción virtual en 3D del "Torreón" del oppidum de Ulaca (Solosancho, Ávila): Mucho más que una imagen. Arqueol. De La Arquit. 18, e123 (2021)
12. Cáceres-Criado, I., García-Molina, D.F., Mesas-Carrascosa, F.J., Triviño-Tarradas, P.: Graphic representation of the degree of historical-archaeological evidence: the 3D reconstruction of the "Baker's House". Herit. Sci. 10(1), 1–14 (2022)
13. López-Menchero Bendicho, V.M., Grande, A.: Hacia una carta internacional de arqueología virtual. El borrador SEAV. Virtual Archaeol. Rev. 2(4), 71–75 (2011)

Author Index

A Aguerri Galindo, Álvaro, 51 Alaimo, Andrea, 946 Altavilla, Stefania, 28, 1471 Angelino, Antimo, 768 Antuono, Giuseppe, 873 Anwer, Nabil, 1005, 1279 Arnau, Rosario, 626 Aruanno, Beatrice, 733, 1352 Asión-Suñer, Laura, 330 Aurensanz Mallada, Rosa, 15 Ausiello, Pietro, 294 B Bacciaglia, Antonio, 815, 1042 Badenes, Cristobal, 626 Baggetta, Mario, 614, 664 Barberi, Emmanuele, 1516 Barone, Sandro, 733, 1352 Baronio, Gabriele, 1598 Bellitti, Paolo, 1247 Beltrán, Inés, 626 Benaben, Audrey, 827, 1103 Berardo, Alice, 276 Beraud, Nicolas, 569, 803, 955, 1149 Bergs, Thomas, 354 Bernardo, Enrico, 711 Berni, Aurora, 1471 Berri, Luigi, 851 Berselli, Giovanni, 3, 192, 380, 614, 664, 698 Bianconi, Francesco, 533, 793 Bici, Michele, 780 Biedermann, Anna, 1576, 1585 Bigliardi, Matteo, 380

Bilancia, Pietro, 380 Bisognin, Giuseppe, 1268 Blampain, Félix, 884 Blanco, Teresa, 1585 Blázquez-Parra, E. Beatriz, 110 Boccaccio, Antonio, 512 Bodini, Ileana, 367, 1598 Bonino, Brigida, 698 Bonomi, Germano, 367 Bordegoni, Monica, 1460 Borgianni, Yuri, 28, 1471 Bove, Davide, 135 Bricogne, Céline, 884 Bricogne, Matthieu, 884 Brilhault, Quentin, 442 Brunzini, Agnese, 229, 1401 Buonocore, Sara, 1376 C Caccavale, Fabrizio, 1172 Cáceres-Criado, Irene, 1610 Calabretta, Michele, 1138 Calavia, M. Belén, 1585 Calzone, Nicolas, 1172 Camba, Jorge D., 677 Campana, Francesca, 780 Cañavate, Francisco J. F., 344 Cantone, Andrea, 367 Caporaso, Teodorico, 286, 1195, 1227, 1239, 1247 Cappetti, Nicola, 204, 895, 1426, 1495 Caputo, Francesco, 503 Caragiuli, Manila, 229 Cardilicchio, Antimo, 1364


1622 Carfagni, Monica, 743 Cariati, Simona, 1413 Carli, Raffaele, 1436 Carniel, Emanuele Luigi, 276 Carollo, Giovanni, 652 Carratalà, Pablo, 626 Carulli, Marina, 1460 Carvalho, Philippe, 1527 Casas, Roberto, 1585 Casciotta, Edoardo, 1389 Castiblanco Jimenez, Ivonne Angelica, 318 Castillo-Rueda, Francisca J., 88 Caterino, Mario, 581 Cattaneo, Andrea, 1183 Ceruti, Alessandro, 815, 1042 Chillemi, Massimiliano, 1516 Chirol, Clément, 421, 827, 1103 Chiva, Sergio, 626 Ciccarelli, Marianna, 1401 Ciccolella, Alessandro, 390 Ciccone, Francesco, 815, 1042 Climent, Javier, 626 Coccorese, Domenico, 1448 Colombo Zefinetti, Filippo, 1079 Colombo, Giorgio, 1055 Comberton, David, 1126 Company, Pedro, 677 Concheri, Gianmaria, 711, 722, 1256, 1268 Conti, Paolo, 533, 793 Cosenza, Chiara, 768 Covarrubias Rodriguez, Mario, 1352 Cristofolini, Ilaria, 925 Cucinotta, Filippo, 1516 D D’Agostino, Pierpaolo, 873 d’Amore, Matteo, 1495 D’Angelo, Raffaele, 286, 1448 D’Introno, Annarita, 1413 Daidié, Alain, 421, 827, 1103 Daille-Lefèvre, Bruno, 75 Dal Fabbro, Pierandrea, 265 Dalpadulo, Enrico, 1067 Dammacco, Lucilla, 1436 De Crescenzio, Francesca, 1505 De Dominici, Gregory, 967 de la Rosa, Sergio, 914 de Los Angeles Guachi-Guachi, Lorena, 780 De Luca, Alessandro, 503 de-Cózar-Macías, Óscar D., 88, 110 Dellarre, Anthony, 803 Di Angelo, Luca, 148, 306 Di Bernardo, Romolo, 122 di Filippo, Andrea, 895

Author Index Di Gironimo, Giuseppe, 286, 466, 1195, 1215, 1376 Di Marino, Castrese, 454 Di Stefano, Paolo, 148, 306 Diano, Maurizio, 1340 Díez-Martínez, Elena, 110 Donnici, Giampiero, 135 Donzella, Antonietta, 367 Dotoli, Mariagrazia, 1436 Douin, Chloé, 1017 Durupt, Alexandre, 1527 E Ejarque-Ortiz, Aroa, 15 Evangelista, Alessandro, 1305, 1316, 1340 Eynard, Benoît, 884 F Favi, Claudio, 980, 993 Federica, Cappelletti, 39, 62 Fera, Marcello, 581 Fernández-Pacheco, Daniel G., 344 Ferreira, Kévin, 1005 Ferretti, Patrich, 135 Filippi, Stefano, 1030, 1293 Fiorentino, Anita, 1495 Fiorentino, Michele, 1305, 1413, 1436 Fiorillo, Iolanda, 1426 Foglia, Massimo, 1305 Foissac, Charly, 421 Fontana, Carlotta, 204, 1426 Fontanelli, Giuseppe Andrea, 1215 Francia, Daniela, 135 Franciosa, Pasquale, 546, 851 Frizziero, Leonardo, 135 Furferi, Rocco, 170, 743 Fusco, Salvatore, 1215 G Gabriel, Bertrand, 967 Galasso, Valerio, 1340 Galati, Nicolò, 1159 Gallo, Nicola, 851 García de Paredes, Ignacio, 253 García, Mairena, 626 García-Molina, Diego Francisco, 1610 Gattino, Cecilia, 664 Gattullo, Michele, 1316, 1327, 1436 Generosi, Andrea, 1460 Gerbino, Salvatore, 546, 851, 1364 Germani, Michele, 993, 1401 Gherardini, Francesco, 780, 1159 Ghidotti, Anna, 242 Giannini, Franca, 698 Gianotti, Alberta, 1541

Author Index Gibaru, Olivier, 410 Giliberti, Claudia, 1340 Giovannoli, Pietro, 1376 Gironimo, Giuseppe Di, 1227 Gloria, Antonio, 122, 294 Godot, Xavier, 75 Gómez, José Antonio, 905 Gomez-Jauregui, Valentin, 686, 862 Gonzalez-Redondo, Miguel, 1115 Governi, Lapo, 148, 743 Gramegna, Nicola, 851 Grandvalet, Yves, 1527 Grandvallet, Christelle, 479 Grasso, Corrado, 1448 Grazioso, Stanislao, 286, 1195, 1215, 1227, 1239, 1247 Greco, Alessandro, 546, 1364 Grigolato, Luca, 265, 711, 722 Gruhier, Elise, 1017, 1552 Grunwald, Tim, 354 Guachi, Robinson, 780 Guaitoli, Caterina, 1159 Guardiani, Emanuele, 306 Gujski, Lucas Matias, 895 H Hartono, Natalia, 581 Heidarpour, Farhad, 390 Hidalgo-Fernández, Rafael Enrique, 1610 Honnorat, Olivier, 520 Huet, Florian, 1564 I Iaquinandi, Michele, 1426 Imbriani, Vito, 466, 1030 Ingrassia, Tommaso, 637, 652 Innocente, Chiara, 159 Iolascon, Giovanni, 1227 K Kamgaing Souop, Landry Arnaud, 1103 Khamaisi, Riccardo Karim, 1389 Kilian, Wolfgang, 1483 Kleiner, Mathias, 1564 Klement, Nathalie, 410 Kofler, Ingrid, 1471 Kopnarski, Lena, 605 Kromer, Robin, 1017, 1126, 1552 Krumm, Dominik, 286, 605 L Lacombe, Alexandra, 1103 Lacoste, Eric, 1126 Ladrón-de-Guevara-López, Isidro, 110 Ladrón-de-Guevara-Muñoz, M. Carmen, 88, 110

1623 Laera, Francesco, 1305 Lafou, Meriem, 410 Lamanna, Giuseppe, 503 Landi, Daniele, 101, 242, 1079 Landon, Yann, 593, 1103 Lanzoni, Daniel, 1183 Lanzotti, Antonio, 286, 294, 677, 768, 1195, 1215, 1227, 1239, 1247, 1448 Lanzotti, Francesca Giovanna, 466 Lasso Perdomo, Cristiam Javier, 967 Laviola, Enricoandrea, 1316, 1327 Lazazzera, Vito, 1436 Le Duigou, Julien, 399 Leali, Francesco, 217, 354, 839, 1067 Limousin, Maxime, 803 Liverani, Alfredo, 135, 1042 Loiseau, Adrien, 593 López-Forniés, Ignacio, 15, 51, 330 López-Salmerón, Emilio, 344 Ludi, Simona, 181 M Mabru, Catherine, 1090 Macías-García, José, 88 Madre Sediles, María A., 15 Malburet, François, 520 Maltauro, Mattia, 1256, 1268 Manchado, Cristina, 686, 862 Mancuso, Antonio, 559 Mandolini, Marco, 229, 980, 993 Manghisi, Vito Modesto, 1305, 1340 Marannano, Giuseppe, 637, 755 Marcolin, Federica, 318 Marconi, Marco, 430, 980 Marianna, Ciccarelli, 39 Mariconte, Raffaele, 1340 Marin, Philippe, 569 Marín-Granados, Manuel D., 88 Marsan, Thibault, 593 Marta, Rossi, 39, 62 Martin, Patrick, 75 Martínez-Chao, Tomás E., 873 Martínez-Cuenca, Raúl, 626 Martorelli, Massimo, 122, 294, 768, 851 Marzola, Antonio, 148 Marzullo, Domenico, 466, 1030 Mayuet, Pedro F., 914 Mazzoli, Alida, 229 Mazzone, Giuseppe, 466, 1030 Medina-Triviño, Pedro, 1115 Mehdi-Souzani, Charyar, 1005 Melgarejo-Torralba, Mar, 344 Menchi, Giacomo, 1401 Meneghello, Roberto, 1256, 1268 Menéndez-Díaz, Agustín, 873

1624 Mengoni, Maura, 1460 Merlo, Fabio, 3 Messina, Angelo Alberto, 1138 Michele, Germani, 39, 62 Mignot-Pasty, Raphaël, 520 Milardi, Demetrio, 1516 Millán-Estepa, Álvaro, 110 Millet, Dominique, 967 Miralbes, Ramon, 905 Montanari, Roberto, 294 Montcel, Thibaut Tezenas Du, 1149 Monteleone, Antonio, 768 Monti, Marina, 698 Moos, Sandro, 159, 318 Motyl, Barbara, 1030 Mozzillo, Rocco, 1172 Muñoz, Natalia, 1576 Museau, Matthieu, 955 N Naddeo, Alessandro, 1426 Navarro, Pablo, 593 Neri, Paolo, 733, 937 Nezzi, Chiara, 28, 1471 Nigrelli, Vincenzo, 637, 652, 755 O Odenwald, Stephan, 286, 605, 1483 Olivieri, Salvatore Massimo, 1138 Orlando, Calogero, 946 Orquera, Myriam, 967 Ortiz, Jesús, 192 Ortiz-Cordero, Rafael, 1115 Orzes, Guido, 28 Ostuni, Benedetta Maria Vita, 1195, 1227, 1239 Otero, César, 686, 862 Ottonello, Elia, 664 P Paderno, Diego, 367, 1598 Pagano, Davide, 367 Pagnoni, Mario, 229 Pailhès, Jérôme, 569 Palomba, Angela, 1227 Pantano, Antonio, 652 Paoli, Alessandro, 733, 937 Papa, Stefano, 122, 768, 1448 Papallo, Ida, 294 Papetti, Alessandra, 1401 Pardo-Vicente, Miguel-Angel, 914 Paredes, Manuel, 1090, 1103 Paroissien, Éric, 827 Parras-Burgos, Dolores, 344 Pascoletti, Giulia, 533, 793 Pasquariello, Agnese, 491

Author Index Patalano, Stanislao, 454, 491, 677 Pédarriosse, Camille, 1564 Pellegrini, Daniele, 1268 Pellegrino, Arcangelo, 1495 Pellicciari, Marcello, 380 Pensa, Claudio, 122 Pérez, Eduardo Manchado, 253 Perfetto, Donato, 503 Perina, Matteo, 925 Perretta, Silvana, 276 Peruzzini, Margherita, 380, 1389 Petronijevic, Jelena, 75 Petruccioli, Andrea, 839 Pettinato, Stefano, 192 Piazzolla, Pietro, 159 Pichon, Guillaume, 827 Pierri, Francesco, 1172 Pierri, Silvia, 204 Pietroni, Giorgia, 430 Pigazzi, Riccardo, 1055 Pini, Fabio, 354, 839, 1067 Pinon, Sébastien, 884 Pirri, Riccardo, 755 Pitzalis, Roberto Francesco, 192 Plaza, Inmaculada, 1576 Polette, Arnaud, 1564 Polichetti, Rosario, 1448 Pourroy, Franck, 479, 569, 955 Pretto, Andrea, 276 Puggelli, Luca, 170, 743 Püschel, Aline, 1483 Q Qie, Yifan, 1279 R Raffaele, Marcello, 1516 Raffaeli, Roberto, 380, 698, 1389 Ramírez, Emilio A., 955 Ranz, David, 905 Razionale, Armando Viviano, 733, 937 Razzoli, Roberto, 614, 664 Rega, Andrea, 454 Regazzoni, Daniele, 181, 242, 1079, 1183 Renno, Fabrizio, 1448 Renzi, Cristina, 217 Ricci, Marina, 1413 Richetta, Maria, 294 Ricotta, Vito, 637, 755 Rinaldi, Marta, 581 Rizzi, Caterina, 181, 242, 1183 Rodríguez-Parada, Lucía, 914 Rossano, Veronica, 1340 Rosso, Stefano, 711, 722 Rossoni, Marco, 1055

Roucoules, Lionel, 442, 520, 1564 Rovida, Edoardo Giovanni Maria, 1541 Rudisch, Julian, 605 Ruiz-Pastor, Laura, 28, 1471 Russo, Davide, 101 Russo, Mario Brandon, 546

S Sabella, Roberto, 1215 Salmeri, Fabio, 1516 Salvati, Luca, 1495 Sanseverino, Giuseppe, 286, 605 Santamaria, Natalia, 905 Santarelli, Chiara, 170 Santhosh, Sandhya, 1505 Santi, Gian Maria, 135 Santolaya, José Luis, 1576 Sanz-Segura, Rosana, 253 Saporito, Antonio, 559 Sardini, Emilio, 1247 Sartini, Mikhailo, 993, 1401 Savio, Gianpaolo, 265, 276, 711, 722 Savoldelli, Anna, 181 Scalzo, Federico, 1030 Scarcelli, Alessandra, 1413 Segonds, Stéphane, 421 Selvaggio, Mario, 1215 Sena, Pasquale, 1495 Sequenzia, Gaetano, 1138 Serio, Paola, 170 Serpelloni, Mauro, 1247 Serrano, Ana, 1576, 1585 Sfravara, Felice, 1516 Siciliano, Bruno, 1215 Sierra-Pérez, Jorge, 51 Sileo, Monica, 1172 Silva, Cristovão, 410 Sitta, Alessandro, 1138 Sortino, Marco, 1030 Speranza, Domenico, 122 Sponchiado, Riccardo, 265 Spreafico, Christian, 101, 1079 Stecki, Jacek, 664 Stella, Ettore, 851 Strippoli, Valeria, 1413

T Tamassia, Eugenio, 354 Tamburrino, Francesco, 937, 1352 Tarallo, Andrea, 768, 1448 Tejero-Olalla, José M., 88 Tezenas Du Montcel, Thibaut, 569 Toniolo, Ilaria, 276 Tornincasa, Stefano, 159, 318 Totis, Giovanni, 1030 Trifi, Delia, 626 Tripoli, Marco Cosimo, 652 Triviño-Tarradas, Paula, 1115, 1610 Tumino, Davide, 559, 946 Turino, Maria Antonietta, 581

U Uberti, Stefano, 1598 Uccheddu, Francesca, 265, 276 Ulrich, Luca, 159, 318 Uva, Antonio Emmanuele, 390, 512, 1327, 1340

V Vaglio, Emanuele, 1030 Vaiani, Lorenzo, 512 Vaïssette, Julien, 1090 Valvano, Stefano, 946 Vanino, Giovanni, 1293 Vazzoler, Greta, 3 Vergnano, Alberto, 1159 Vezzetti, Enrico, 159, 318 Viero, Massimo, 1268 Vigil-Fernández, Carlos, 686 Vignat, Frédéric, 479, 569, 1149 Villa, Valerio, 367, 1598 Villecco, Francesco, 895, 1495 Villeneuve, François, 955 Vindigni, Carmelo Rosario, 946 Vindrola, Pedro G., 873 Vitali, Andrea, 1183 Vitolo, Ferdinando, 454, 491 Voelcker-Rehage, Claudia, 605 Volpe, Yary, 148, 170 W Watier, Bruno, 593 Weisheit, Linda, 1203 Wockenfuß, Wilhelm Richard, 1203 Y Yahia, Esma, 442, 1564 You, Jeong-Ha, 466, 1030 Z Zafferri, Giulio, 1541 Zago, Macro, 925 Zanetti, Elisabetta Maria, 533, 793 Zanini, Alessio, 980 Zapata, Santiago Montoya, 410 Zenoni, Aldo, 367 Zhang, Shengzhe, 399 Zucchinetti, Marta, 614