Trends in Artificial Intelligence and Computer Engineering: Proceedings of ICAETT 2022 3031259416, 9783031259418

This book constitutes the proceedings of the 4th International Conference on Advances in Emerging Trends and Technologies (ICAETT 2022), held in Riobamba, Ecuador, on October 26–28, 2022.


English · 732 [733] pages · 2023


Table of contents :
Preface
Organization
Contents
Artificial Intelligence
Recognition and Classification of Cardiac Arrhythmias Using Discrete Wavelet Transform (DWT) and Machine Learning Techniques
1 Introduction
2 Methodology
2.1 MIT-BIH Arrhythmia Database
2.2 Preprocessing
2.3 ECG Beat Segmentation
2.4 Feature Extraction
2.5 Classification
2.6 Experimental Design
3 Results
3.1 Discussions
4 Conclusions
References
Artificial Firefly Meta-heuristic Used for the Optimization of a Fractional PID on an ARM Platform
1 Introduction
2 Method
2.1 Data Acquisition and System Identification
2.2 Tuning by Using the FA Algorithm
3 Results
3.1 Continuous-Time FOPID Controller Tuned by FA
3.2 Discrete-Time FOPID Controller Tuned by FA
3.3 Statistical Analysis with Performance Indexes (ITAE) Between Fomcon Optimized FOPID Controller and FA Tuned FOPID Using Wilcoxon Test
4 Discussion
5 Conclusions
References
A Convolutional Neural Network-Based Web Prototype to Support COVID-19 Detection Using Chest X-rays
1 Introduction
2 Methodology
3 Materials and Background
4 Data Mining: Experimental Phase and Results
4.1 Business Understanding Phase
4.2 Data Understanding Phase
4.3 Data Preparation Phase
4.4 Modeling Phase
4.5 Test and Evaluation Phase
5 Prototype Development and Results
6 Comparison with Related Works
7 Conclusions
References
Chatbots and Its Impact on the Information Support Service for Students of the Faculty of Computer Science of the Technical University of Manabí
1 Introduction
2 Materials and Methods
2.1 Use of Chatbots in Higher Education Institutions
2.2 Chatbots and Their Importance in University Communication
2.3 Conversational Protocol
2.4 A Session with @MGUTMbot
2.5 Architecture
2.6 Implementation
3 Results
3.1 Ease of Learning
3.2 Memorization Capacity
3.3 Efficiency of Use
3.4 Error Prevention
3.5 Satisfaction
4 Conclusions
References
Neural Networks on Noninvasive Electrocardiographic Imaging Reconstructions: Preliminary Results
1 Introduction
2 Materials and Methods
2.1 Experimental Dataset
2.2 Preprocessing the Data Set
2.3 Inverse Mapping Methods
3 Results
4 Conclusions
References
Socio-spatial Segregation Using Computational Algorithms: Case Study in Ambato, Ecuador
1 Introduction
2 Related Works
3 Materials and Methods
3.1 Materials
3.2 Methods
3.3 Algorithm Development
3.4 Case Study
4 Results
4.1 Living Conditions Index
4.2 Uniformity
4.3 Centralization
5 Discussion and Conclusions
References
Expert System with Facial Recognition Implemented in Human-Machine Conversation Services for the Automation of Multi-platform Remote Processes in the Identification of People Reported Missing
1 Introduction
2 Methods
3 Results
3.1 Accessibility
3.2 Efficiency
3.3 Speed
4 Discussion
5 Conclusions
References
Communications
Radar Probability of Detection in Multipath Environments
1 Introduction
2 Pattern Propagation Factor for More Than One Reflected Ray
2.1 One Way Voltage Multipath Factor Distribution
2.2 Two Way Power Multipath Factor Distribution
3 Signal Plus Noise to Noise Ratio Distribution
4 Probability of Detection
5 Results and Discussions
5.1 Analytical Results
5.2 Probability of Detection Coverage
6 Conclusions
References
Low Detection Symbol Algorithm for MIMO Systems with Big Number of Antennas
1 Introduction
2 System Model
2.1 Maximum Likelihood Detector
3 Proposed Detector
3.1 Symbol Flipping Procedure
3.2 Initial Random Solution
4 Numerical Results
4.1 BER Performance
5 Conclusion
References
Deployable Networks. An Alternative for Communications in Critical Environments
1 Introduction
2 Methodology
2.1 Characteristic
2.2 Network Architecture
3 Functionality Tests and Results
4 Conclusions
References
Application for the Study of Underwater Wireless Sensor Networks: Case Study
1 Introduction
1.1 UWSNs Architectures
1.2 Characteristics and Challenges of a UWSN
1.3 Applications of UWSN
2 Materials and Methods
2.1 Analysis Phase
2.2 Design Phase
2.3 Coding Phase
2.4 Testing Phase
3 Development
3.1 Use Case Diagram
3.2 Interface Coding
4 Discussion
5 Conclusions
References
e-Learning
Learning Performance Indicators a Statistical Analysis on the Subject of Natural Sciences During the COVID-19 Pandemic at the Tulcán District
1 Introduction
1.1 Related Work
2 Materials and Methods
2.1 Evaluation Statistics Techniques
2.2 Evaluation Instrument
2.3 Significative Difference Analysis
3 Results
3.1 Instrument Validation
3.2 Data Gathering Analysis
4 Discussion
5 Conclusions
References
Virtual Physics Learning for Basic Education
1 Introduction
2 State of the Art
3 Methodology
3.1 Methodology for the Software Development
3.2 Methodological Proposal for the Integration of the Virtual Laboratory as a Complementary Activity
4 Preliminary Evaluation
5 Arguments and Results
6 Conclusions
References
Bibliometric Mapping of Scientific Literature Located in Scopus on Teaching Digital Competence in Higher Education
1 Introduction
2 Method
3 Results
3.1 Analysis of the Scientific Productions Extracted According to the Year of Publication
3.2 Analysis of the Scientific Productions Extracted According to Their Origin
3.3 Analysis of the Scientific Productions Extracted According to Their Authors
3.4 Analysis of the Scientific Productions Extracted Based on Their Filiation
3.5 Analysis of the Scientific Productions Extracted According to the Country Where It Originates
3.6 Analysis of the Scientific Productions Extracted According to the Type of Document
3.7 Analysis of the Scientific Productions Extracted According to the Research Area
3.8 Analysis of the Scientific Productions Extracted According to the Language of the Publication
3.9 Analysis of the Scientific Productions Extracted Based on the Keywords of the Publication
3.10 Analysis of the Scientific Productions Extracted Based on the Citations
4 Discussion
5 Conclusions
References
Computational Thinking as Instrument to Evaluate Student Difficulties in Higher Education: Before and During Pandemic Analysis
1 Introduction
2 Methodology
2.1 Participants
2.2 Instruments
3 Results
3.1 CT Applied to Problem Solving
3.2 Intervention Description
3.3 Quantitative Analysis Between Interventions
3.4 Qualitative Analysis for Learning Modality
3.5 Qualitative Analysis Before and After the Pandemic
4 Discussion
4.1 Limitations and Future Work
5 Conclusions
References
Systematization of Playful Teaching Using Games Aimed at Teachers and Students
1 Introduction
2 Materials and Methods
2.1 Participants
3 Results and Discussion
4 Conclusions
References
Design of a Predictive Model to Evaluate Academic Risk Using Data Mining
1 Introduction
2 Materials and Methods
2.1 Database and Resources
2.2 Methodology
3 Results
4 Conclusions
References
Hope Project: Augmented Reality to Teach Dance to Children with ASD
1 Introduction
2 Material and Method
2.1 Population
2.2 Work Plan
2.3 Phases of the Intervention
2.4 Resources Used in the Intervention Plan
2.5 Activities Designed to Reinforce Teaching Learning Processes
3 Results
4 Discussion
5 Conclusions
References
An Approach to Scientific Research for the Continuous Improvement of Scientific Production in Ecuador
1 Introduction
2 Materials and Methods
2.1 Materials
2.2 Methods
3 Results
3.1 Pearson Correlation
3.2 Diagnosis Applying Cronbach's Alpha
3.3 Appropriate Prototype to Improve Scientific Production in a Higher Education Institution (IES)
4 Discussion
5 Future Work and Conclusions
References
Professional Skills for the Administration Career with a Higher Technological Level
1 Introduction
2 Methodology
3 Results and Discussion
3.1 Of the Work with the Teachers of the Institute
4 Conclusions
References
Gamification as a Methodological Strategy and Its Impact on the Academic Performance of University Students
1 Introduction
2 The Importance of Gamification as a Methodological Competence
3 Gamification and University Evaluation
3.1 The Potential of Gamification as an Evaluative Resource
4 Model of Development by Competencies and Gamification
4.1 Specific Skills as a Transversal Axis in Gamified Strategies
5 Materials and Methods
5.1 Methods
5.2 Population and Sample
5.3 Materials
6 Results
6.1 Survey of University Teachers About Their Perception of Gamified Tools as Methodological Strategies in Academic Subjects to Improve the Performance of University Students
6.2 Survey of University Students Regarding the Use of Gamification as a Methodological Strategy by Their Teachers
6.3 Academic Performance of University Students
7 Discussion and Conclusions
References
Impact of Blended-Learning on Higher Education and English Language
1 Introduction
2 Methodology
3 Results
4 Discussion
5 Conclusions
References
Augmented Reality Application with Multimedia Content to Support Primary Education
1 Introduction
2 Methods and Materials
2.1 Hardware
2.2 Software
2.3 Interface Design
3 Experimental Tests and Results
3.1 Experimental Design
3.2 Knowledge Test
4 Discussions and Conclusions
References
Storytelling as a Motivational Resource in the Therapy of Childhood Cancer
1 Introduction
1.1 Use of Graphic Design
1.2 Storytelling
2 Methods and Materials
2.1 First Phase. Empathize
2.2 Second Phase. Define
2.3 Third Phase. Creating
2.4 Fourth Phase. Prototyping
2.5 Fifth Phase. Testing
3 Analysis and Results
4 Discussion and Conclusions
References
Presyllabic Method to Correct Dysorthography in Elementary School Students
1 Introduction
2 Development
2.1 Writing
2.2 Dysorthography
2.3 Causes of Dysorthography
3 Academic Performance
4 Language and Literature
4.1 Academic Achievement in the Area of Language and Literature
5 Methodological Approach
6 Methodology and Research Techniques
7 Results
8 Conclusions
References
State of ICTs as Support for the Educational Process in the Andean Region
1 Introduction
2 Scope
3 Methodology
4 Results
5 Conclusions and Recommendations
References
AT for Engineering Applications
Computerized Planning of Surface Ratios in a Milk Extraction Plant
1 Introduction
1.1 Methodology
1.2 Results of CORELAP Application
1.3 Graphical Solution
2 Discussion
3 Conclusions
References
Methodological Proposal for Micro-enterprises Through a Mathematical - Statistical Model Based on Integral Logistics
1 Introduction
2 Methodology
2.1 Pilot Survey
2.2 Reliability
3 Developing
3.1 Factorial Analysis
3.2 Bartlett's Sphericity Test
3.3 Internal Consistency Coefficient Using Two Halves or Split-Half
4 Discussion
5 Conclusions
References
Material Selection for a Biomass Heat Exchange Multicriteria Decision Methods: Study Case on Ecuador
1 Introduction
2 Method
2.1 Heat Exchanger Conditions
2.2 Material Selection
2.3 Multicriteria Decision Methods
2.4 Analytical Hierarchy Process
2.5 VIKOR Method
2.6 Technique for Order Preference by Similarity to Ideal Solution
2.7 Complex Proportional Assessment
2.8 Spearman's Coefficient
2.9 Validation of Heat Potential
3 Results and Discussion
3.1 AHP Method
3.2 VIKOR Method
3.3 TOPSIS Method
3.4 COPRAS Method
3.5 Spearman Correlation
3.6 Simulation Results
4 Conclusions
References
Design and Simulation of an Aircraft Autopilot Control System: Longitudinal Dynamics
1 Introduction
2 Automatic Landing System
2.1 Instrument Landing System
3 Mathematical Model of Longitudinal Dynamics
3.1 Aircraft Model
3.2 Longitudinal Dynamics Model
4 Control System for Coupling to the Glide Slope
4.1 Vertical Attitude Controller
4.2 Speed Controller
4.3 Vertical Trajectory Controller
5 Integration, Validation and Analysis of Results
6 Conclusions
References
Management Innovation and Competitive Success in Peruvian Companies of the Manufacturing Sector
1 Introduction
2 Materials and Methods
3 Results and Discussion
4 Conclusion
References
Comparative Study of Accounting and Management Perceptions of the Usefulness of Financial Information in Small and Medium-Sized Timber Companies in Colombia
1 Introduction
1.1 Problem Definition
2 Methodology
2.1 Type of Research and Approach
2.2 Collection Instruments
3 Background
4 Theoretical Framework
4.1 Competitiveness vis-à-vis IFRS
4.2 Implementation of IFRS in Latin America
4.3 Causes and Effects of IFRS Implementation in Small and Medium-Sized Timber Companies in Colombia
5 Results
6 Conclusions and Discussion
References
Influence of Aqueous Phase of Hydrothermal Carbonization Feeding on Carbon Fixation by Microalgae
1 Introduction
2 Method
2.1 Materials
2.2 Characterization Analysis
2.3 Time Course Growth Studies
2.4 Productivity Optimization
2.5 Growth Kinetics and CO2 Biofixation
3 Results and Discussions
3.1 Characterization of Hydrothermal Process Water
3.2 Dry Biomass
3.3 Characterization of Algae
4 Conclusions
References
Assessment of the Thermal Behavior in Social Housing in Hot Humid Climate in Ecuador
1 Introduction
2 Methodology
2.1 Case Study
2.2 Climate and Study Period
2.3 Thermal Behavior of the Envelope and Indoor Air
2.4 Thermal Environment Evaluation
3 Results
3.1 Thermal Behavior of the Envelope and Indoor Air
3.2 Thermal Environment Evaluation
4 Conclusions
References
Implications of Spraying Powder Paint
1 Introduction
2 Pollution
2.1 Air Pollution
2.2 Treatment of Particles
3 Methodology
3.1 Dimensional Analysis of the Cyclone
4 Discussion and Results
5 Conclusions
References
Heat Transfer Adhesion Factor on Metal Surfaces
1 Introduction
2 Materials Analysis
2.1 Metals
2.2 Non-metals
2.3 Coating Processes by Heat Transfer
2.4 Curing Ovens
2.5 Conventional Curing Room Systems
3 Methodology
3.1 Materials
3.2 Treatments
3.3 Ovens
3.4 Furnace Design Parameters
3.5 Dimensioning and Geometry of the Structure
4 Discussion and Results
5 Conclusions
References
Virtual Laboratory of Electronic Instrumentation Based on a Programming Proposal Focused on Systems
1 Introduction
2 Methodology
2.1 Materials
2.2 Methods
3 Results
4 Conclusions and Discussion
References
Thermal-Mechanical Properties of Recycled PVC Used in Schrader Valve Caps
1 Introduction
2 Materials and Methods
2.1 Material
3 Methods
3.1 Physical Identification Methods
3.2 Spectroscopic Methods. According to ASTM Standard
3.3 Infrared Spectroscopy (FTIR)
3.4 Glass Transition Temperature (Tg)
3.5 Thermogravimetric Analysis in Accordance with
3.6 Tensile - Deformation Test
4 Analysis and Results
4.1 Infrared Spectroscopy Identification (FTIR) (ASTM142 Standard)
4.2 Thermogravimetric Analysis
4.3 Melting Point by Differential Scanning Calorimetry DSC (ASTM D 3418 Standard)
4.4 Yield Stress by Tensile Test. Tensile-Deformation Mechanical Tests
4.5 Thermal Simulation of the Prototype
4.6 Simulation of Axial Load Inside the Valve Cap Head
4.7 Simulation of the Application of an External Torque to the Prototype
4.8 Pressure Exerted on the Tire Valve Cap
5 Conclusion
References
Consequence of a Geriatric Psychomotricity Program on the Quality of Life of Older Adults
1 Introduction
2 Methodology
3 Results and Discussion
3.1 Proposal Description
4 Conclusions
References
Laboratory-Scale Determination of the Influence of Temperature, Time, and Mordant on the Tensile Strength and Elongation of Abaca Yarn Dyed with Marco Extract (Ambrosia Peruviana) Subjected to Seawater
1 Introduction
1.1 Materials and Methods
2 Results
2.1 Analysis of Variance (ANOVA) for Resistance
2.2 Analysis of Variance (ANOVA) for Elongation
3 Conclusions
References
Cryptocurrencies Towards Financial Innovation in the Microenterprise Sector
1 Introduction
1.1 Financial Transaction
1.2 Cryptocurrencies as a Financial Innovation
2 Methods
2.1 Bibliometric Analysis
3 Results
4 Discussion
5 Conclusions
References
3D Modelling of Freedom Summit for Virtual Environments
1 Introduction
2 Methodology
2.1 Concepts and Characteristics of 3D Modeling
2.2 Development Methodology
3 Results and Discussion
References
Model of Technological Competencies as Determinants of Innovation: A Comparative Intersectoral Study in Ecuador
1 Introduction
2 Theoretical Framework
2.1 Technological Competencies
2.2 The Component Elements of the Technological Competences
2.3 Mastery of Technological Competences
3 Methodology
3.1 Database and Variables
3.2 Statistical Modelling
4 Discussion and Conclusions
References
Micro-enterprise Management Towards Scenario Building for Decision Making
1 Introduction
2 Theoretical Reference
2.1 Microenterprise Management
2.2 Functions of the Microenterprise
2.3 From Empiricism to Strategy in Micro-enterprise
2.4 Scenarios for Micro-enterprise Management
2.5 Decision-Making in Micro-enterprise
3 Method
3.1 Bibliometric Analysis
3.2 Concurrence Analysis
3.3 Co-occurrence Analysis
3.4 Co-occurrence Analysis
4 Results
5 Discussion
6 Conclusions
References
Analysis of Business Efficiency Considering the Influence of the Particular Events on Sales Increase Period 2016–2020
1 Introduction
2 Methodology and Methods
3 Results
4 Discussion
5 Future Work and Conclusions
References
Income from Ordinary Activities and Its Tax Impact on Companies in the Automotive Sector in Ecuador
1 Introduction
2 Methodology
3 Results
3.1 Practical Demonstration of the Hypothesis
4 Conclusions
References
Security
Proof of Concepts of Corda Blockchain Technology Applied on the Supply Chain Area
1 Introduction
2 Corda Key Concepts
2.1 State
2.2 Transaction
2.3 Contract
2.4 Flow
2.5 Notary
3 Supply Chain Development with Corda
3.1 Ledger Layer
3.2 Orchestration Layer
4 Materials and Methodology
4.1 Throughput
4.2 Latency
4.3 CPU Utilization
4.4 Memory Utilization
4.5 Performance Analysis Tool
4.6 Custom Web Server for Corda RPC
5 Simulation Results
6 Conclusion
References
Spread Virus: Usability Evaluation on a Mobile Augmented Reality Videogame
1 Introduction
2 Context
2.1 Augmented Reality
2.2 AR Software Development Kit
2.3 Software Usability
2.4 System Usability Scale
2.5 Tower Defense
3 Main Contribution
4 Related Works
5 Experiments
6 Conclusions and Perspectives
References
Security Mechanisms and Log Correlation Systems
1 Introduction
2 State of the Art
2.1 Cyber Security World Stage
2.2 Centralized Security
2.3 SIEM Systems
3 Methodology
3.1 Preparation
3.2 Planning
3.3 Design
3.4 Implementation Phase
3.5 Operation Phase
3.6 Optimization
4 Results and Discussion
4.1 Results
4.2 Discussion
5 Conclusions
Bibliography
Technology Trends
Identification of Corn Leaves Diseases Images Using MobileNet Architecture in SmartPhones
1 Introduction
2 Convolutional Neural Network Architecture
3 MobileNet Architecture
3.1 The Network Architecture
3.2 Parameters of MobileNet
4 Materials and Methodology
4.1 Dataset
4.2 Pipeline
4.3 Experimental Configurations
4.4 MobileNet Model Deployment in Android Smartphone
5 Numerical Results and Discussions
5.1 Experimental Setup
5.2 SmartPhone Application
6 Conclusion
References
Experimental Assessment of Photovoltaic Systems Using One-Axis Tracking and Positioning Strategies in Equatorial Regions
1 Introduction
2 System Design and Implementation
2.1 Solar Radiation Measurement Station
2.2 Static and One-Axis Tracking Photovoltaic Systems
2.3 Photovoltaic System with Positioning Strategies
3 Results
3.1 Days Considered for Analysis
3.2 Energy Gain Due to One-Axis Tracking
3.3 Energy Gain Due to Positioning Strategies
4 Conclusions
References
Evaluation of the Reliability of a LiDAR Sensor Through a Geometric Model in Applications to Autonomous Driving
1 Introduction
2 Algorithmic Control: Avoid Obstacles and Follow Yellow Line
3 System Structure
4 Simulator
4.1 Vehicle
4.2 Data Processing: Matlab – Excel
5 Simulation Results and Discussion
6 Conclusion
References
Design of Optimal Controllers Applying Reinforcement Learning on an Inverted Pendulum Using Co-simulation NX/Simulink
1 Introduction
2 Methods
2.1 Modeling, Simulation and Co-simulation
2.2 Design of Optimal Controller Using Reinforcement Learning
3 Results and Discussion
3.1 NX Modeling and Co-simulation
3.2 Reinforcement Learning
4 Conclusions
References
Author Index


Lecture Notes in Networks and Systems 619

Miguel Botto-Tobar · Omar S. Gómez · Raul Rosero Miranda · Angela Díaz Cadena · Washington Luna-Encalada   Editors

Trends in Artificial Intelligence and Computer Engineering Proceedings of ICAETT 2022

Lecture Notes in Networks and Systems 619

Series Editor Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland

Advisory Editors Fernando Gomide, Department of Computer Engineering and Automation—DCA, School of Electrical and Computer Engineering—FEEC, University of Campinas—UNICAMP, São Paulo, Brazil Okyay Kaynak, Department of Electrical and Electronic Engineering, Bogazici University, Istanbul, Turkey Derong Liu, Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, USA Institute of Automation, Chinese Academy of Sciences, Beijing, China Witold Pedrycz, Department of Electrical and Computer Engineering, University of Alberta, Alberta, Canada Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland Marios M. Polycarpou, Department of Electrical and Computer Engineering, KIOS Research Center for Intelligent Systems and Networks, University of Cyprus, Nicosia, Cyprus Imre J. Rudas, Óbuda University, Budapest, Hungary Jun Wang, Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong

The series “Lecture Notes in Networks and Systems” publishes the latest developments in Networks and Systems—quickly, informally and with high quality. Original research reported in proceedings and post-proceedings represents the core of LNNS. Volumes published in LNNS embrace all aspects and subfields of, as well as new challenges in, Networks and Systems. The series contains proceedings and edited volumes in systems and networks, spanning the areas of Cyber-Physical Systems, Autonomous Systems, Sensor Networks, Control Systems, Energy Systems, Automotive Systems, Biological Systems, Vehicular Networking and Connected Vehicles, Aerospace Systems, Automation, Manufacturing, Smart Grids, Nonlinear Systems, Power Systems, Robotics, Social Systems, Economic Systems and other. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution and exposure which enable both a wide and rapid dissemination of research output. The series covers the theory, applications, and perspectives on the state of the art and future developments relevant to systems and networks, decision making, control, complex processes and related areas, as embedded in the fields of interdisciplinary and applied sciences, engineering, computer science, physics, economics, social, and life sciences, as well as the paradigms and methodologies behind them. Indexed by SCOPUS, INSPEC, WTI Frankfurt eG, zbMATH, SCImago. All books published in the series are submitted for consideration in Web of Science. For proposals from Asia please contact Aninda Bose ([email protected]).

More information about this series at https://link.springer.com/bookseries/15179

Miguel Botto-Tobar · Omar S. Gómez · Raul Rosero Miranda · Angela Díaz Cadena · Washington Luna-Encalada Editors

Trends in Artificial Intelligence and Computer Engineering Proceedings of ICAETT 2022

Editors Miguel Botto-Tobar Eindhoven University of Technology Eindhoven, The Netherlands

Omar S. Gómez Escuela Superior Politécnica de Chimborazo Riobamba, Ecuador

Raul Rosero Miranda Escuela Superior Politécnica de Chimborazo Riobamba, Ecuador

Angela Díaz Cadena Universitat de Valencia Valencia, Valencia, Spain

Washington Luna-Encalada Escuela Superior Politécnica del Chimborazo Riobamba, Ecuador

ISSN 2367-3370 ISSN 2367-3389 (electronic) Lecture Notes in Networks and Systems ISBN 978-3-031-25941-8 ISBN 978-3-031-25942-5 (eBook) https://doi.org/10.1007/978-3-031-25942-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

The 4th International Conference on Advances in Emerging Trends and Technologies (ICAETT) was held on the main campus of the Escuela Superior Politécnica de Chimborazo, in Riobamba, Ecuador, from October 26 to 28, 2022, and it was proudly organized by the Facultad de Informática y Electrónica (FIE) at Escuela Superior Politécnica de Chimborazo and supported by GDEON. The ICAETT series aims to bring together top researchers and practitioners working in different domains in the field of computer science to exchange their expertise and to discuss the perspectives of development and collaboration [1, 2]. The content of this volume is related to the following subjects:

• Artificial Intelligence
• Communications
• e-Learning
• AT for Engineering Applications
• Security
• Technology Trends

ICAETT 2022 received 234 submissions written in English by 940 authors from 15 different countries. All these papers were peer-reviewed by the ICAETT 2022 Program Committee, consisting of 162 high-quality researchers. To ensure a high-quality and thoughtful review process, we assigned each paper at least three reviewers. Based on the peer reviews, 54 full papers were accepted, resulting in a 23% acceptance rate, which was within our goal of less than 40%. We would like to express our sincere gratitude to the invited speakers for their inspirational talks, to the authors for submitting their work to this conference, and to the reviewers for sharing their experience during the selection process.

October 2022

Miguel Botto-Tobar
Omar S. Gómez
Raúl Rosero Miranda
Angela Díaz Cadena
Washington Luna-Encalada

Organization

General Chairs

Miguel Botto-Tobar, Eindhoven University of Technology, The Netherlands
Omar S. Gómez, Escuela Superior Politécnica de Chimborazo, Ecuador

Organizing Committee

Miguel Botto-Tobar, Eindhoven University of Technology, The Netherlands
Omar S. Gómez, Escuela Superior Politécnica de Chimborazo, Ecuador
Raúl Rosero Miranda, Escuela Superior Politécnica de Chimborazo, Ecuador
Ángela Díaz Cadena, Universitat de Valencia, Spain
Sergio Montes León, Universidad Rey Juan Carlos Madrid, Spain
Washington Luna-Encalada, Escuela Superior Politécnica de Chimborazo, Ecuador

Steering Committee

Miguel Botto-Tobar, Eindhoven University of Technology, The Netherlands
Ángela Díaz Cadena, Universitat de Valencia, Spain

Publication Chair

Miguel Botto-Tobar, Eindhoven University of Technology, The Netherlands


Program Chairs

Technology Trends

Miguel Botto-Tobar, Eindhoven University of Technology, The Netherlands
Sergio Montes León, Universidad Rey Juan Carlos Madrid, Spain
Hernán Montes León, Universidad Rey Juan Carlos, Spain

Electronics

Ana Zambrano Vizuete, Escuela Politécnica Nacional, Ecuador
David Rivas, Universidad de las Fuerzas Armadas (ESPE), Ecuador
Edgar Maya-Olalla, Universidad Técnica del Norte, Ecuador
Hernán Domínguez-Limaico, Universidad Técnica del Norte, Ecuador

Intelligent Systems

Guillermo Pizarro Vásquez, Universidad Politécnica Salesiana, Ecuador
Janeth Chicaiza, Universidad Técnica Particular de Loja, Ecuador
Gustavo Andrade Miranda, Universidad de Guayaquil, Ecuador

Machine Vision

Julian Galindo, LIG-IIHM, France
Erick Cuenca, Université de Montpellier, France
Pablo Torres-Carrión, Universidad Técnica Particular de Loja, Ecuador

Communication

Óscar Zambrano Vizuete, Universidad Técnica del Norte, Ecuador
Pablo Palacios Jativa, Universidad de Chile, Chile

Security

Luis Urquiza-Aguiar, Escuela Politécnica Nacional, Ecuador
Joffre León-Acurio, Universidad Técnica de Babahoyo, Ecuador


e-Learning

Miguel Zúñiga-Prieto, Universidad de Cuenca, Ecuador
Doris Macias, Universitat Politécnica de Valencia, Spain

e-Business

Angela Díaz Cadena, Universitat de Valencia, Spain

e-Government and e-Participation

Alex Santamaría Philco, Universidad Laica Eloy Alfaro de Manabí, Ecuador

Program Committee

Abdón Carrera Rivera Adrián Cevallos Navarrete Alba Morales Tirado Alejandro Ramos Nolazco

University of Melbourne, Australia Griffith University, Australia University of Greenwich, UK Instituto Tecnológico y de Estudios Superiores Monterrey, Mexico Alex Santamaría Philco Universitat Politècnica de València, Spain/Universidad Laica Eloy Alfaro de Manabí, Ecuador Alex Cazañas Gordon The University of Queensland, Australia Alexandra Velasco Arévalo Universität Stuttgart, Germany Alexandra Elizabeth Bermeo Arpi Universidad de Cuenca, Ecuador Alfonso Guijarro Rodríguez Universidad de Guayaquil, Ecuador Alfredo Núñez New York University, USA Allan Avendaño Sudario Università degli Studi di Roma “La Sapienza”, Italy Almílcar Puris Cáceres Universidad Técnica Estatal de Quevedo, Ecuador Ana Guerrero Alemán University of Adelaide, Australia Ana Santos Delgado Universidade Federal de Santa Catarina (UFSC), Brazil Ana Núñez Ávila Universitat Politècnica de València, Spain Andrea Mory Alvarado Universidad Católica de Cuenca, Ecuador Andrés Calle Bustos Universitat Politècnica de València, Spain Andrés Jadan Montero Universidad de Buenos Aires, Argentina Andrés Molina Ortega Universidad de Chile, Chile Andrés Robles Durazno Edinburgh Napier University, UK


Andrés Vargas González Andrés Barnuevo Loaiza Andrés Chango Macas Andrés Cueva Costales Andrés Parra Sánchez Ángel Plaza Vargas Angel Vazquez Pazmiño Ángela Díaz Cadena Angelo Vera Rivera Antonio Villavicencio Garzón Audrey Romero Pelaez Bolívar Chiriboga Ramón Byron Acuna Acurio Carla Melaños Salazar Carlos Barriga Abril Carlos Valarezo Loiza Cesar Mayorga Abril César Ayabaca Sarria Christian Báez Jácome Cintya Aguirre Brito Cristian Montero Mariño Daniel Magües Martínez Daniel Silva Palacios Daniel Armijos Conde Danilo Jaramillo Hurtado David Rivera Espín David Benavides Cuevas Diana Morillo Fueltala Diego Vallejo Huanga Edwin Guamán Quinche Efrén Reinoso Mendoza Eric Moyano Luna Erick Cuenca Pauta Ernesto Serrano Guevara Estefania Yánez Cardoso Esther Parra Mora Fabián Corral Carrera Felipe Ebert Fernando Borja Moretta Franklin Parrales Bravo

Syracuse University, USA Universidad de Santiago de Chile, Chile Universidad Politécnica de Madrid, Spain University of Melbourne, Australia University of Melbourne, Australia Universidad de Guayaquil, Ecuador Université Catholique de Louvain, Belgium Universitat de València, Spain George Mason University, USA Universitat Politècnica de Catalunya, Spain Universidad Politécnica de Madrid, Spain University of Melbourne, Australia Flinders University, Australia Universidad Politécnica de Madrid, Spain University of Nottingham, UK Manchester University, UK Universidad Técnica de Ambato, Ecuador Escuela Politécnica Nacional (EPN), Ecuador Wageningen University & Research, The Netherlands University of Portsmouth, UK University of Melbourne, Australia Universidad Autónoma de Madrid, Spain Universitat Politècnica de València, Spain Queensland University of Technology, Australia Universidad Politécnica de Madrid, Spain University of Melbourne, Australia Universidad de Sevilla, Spain Brunel University London, UK Universitat Politècnica de València, Spain Universidad del País Vasco, Spain Universitat Politècnica de València, Spain University of Southampton, UK Université de Montpellier, France Université de Neuchâtel, Switzerland University of Southampton, UK University of Queensland, Australia Universidad Carlos III de Madrid, Spain Universidade Federal de Pernambuco (UFPE), Brazil University of Edinburgh, UK Universidad Complutense de Madrid, Spain


Gabriel López Fonseca Gema Rodriguez-Perez Georges Flament Jordán Germania Rodríguez Morales Ginger Saltos Bernal Gissela Uribe Nogales Glenda Vera Mora Guilherme Avelino Héctor Dulcey Pérez Henry Morocho Minchala Holger Ortega Martínez Iván Valarezo Lozano Jacqueline Mejia Luna Jaime Jarrin Valencia Janneth Chicaiza Espinosa Jefferson Ribadeneira Ramírez Jeffrey Naranjo Cedeño Jofre León Acurio Jorge Quimí Espinosa Jorge Cárdenas Monar Jorge Illescas Pena Jorge Lascano Jorge Rivadeneira Muñoz Jorge Charco Aguirre José Carrera Villacres José Quevedo Guerrero Josue Flores de Valgas Juan Barros Gavilanes Juan Jiménez Lozano Juan Romero Arguello Juan Zaldumbide Proaño Juan Balarezo Serrano Juan Lasso Encalada Juan Maestre Ávila Juan Miguel Espinoza Soto Juliana Cotto Pulecio Julio Albuja Sánchez Julio Proaño Orellana Julio Balarezo Karla Abad Sacoto

Sheffield Hallam University, UK LibreSoft/Universidad Rey Juan Carlos, Spain University of York, UK Universidad Politécnica de Madrid, Spain University of Portsmouth, UK Australian National University, Australia Universidad Técnica de Babahoyo, Ecuador Universidade Federal do Piauí (UFP), Brazil Swinburne University of Technology, Australia Moscow Automobile And Road Construction State Technical University (Madi), Russia University College London, UK University of Melbourne, Australia Universidad de Granada, Spain Universidad Politécnica de Madrid, Spain Universidad Politécnica de Madrid, Spain Escuela Superior Politécnica de Chimborazo, Ecuador Universidad de Valencia, Spain Universidad Técnica de Babahoyo, Ecuador Universitat Politècnica de Catalunya, Spain Australian National University, Australia Edinburgh Napier University, UK University of Utah, USA University of Southampton, UK Universitat Politècnica de València, Spain Université de Neuchâtel, Switzerland Universidad Politécnica de Madrid, Spain Universitat Politécnica de València, Spain INP Toulouse, France Universidad de Palermo, Argentina University of Manchester, UK University of Melbourne, Australia Monash University, Australia Universitat Politècnica de Catalunya, Spain Iowa State University, USA Universitat de València, Spain Universidad de Palermo, Argentina James Cook University, Australia Universidad de Castilla La Mancha, Spain Universidad Técnica de Ambato, Ecuador Universidad Autónoma de Barcelona, Spain


Leopoldo Pauta Ayabaca Lorena Guachi Guachi Lorenzo Cevallos Torres Lucia Rivadeneira Barreiro Luis Carranco Medina Luis Pérez Iturralde Luis Torres Gallegos Luis Benavides Luis Urquiza Aguiar Manuel Beltrán Prado Manuel Sucunuta España Marcia Bayas Sampedro Marco Falconi Noriega Marco Tello Guerrero Marco Molina Bustamante Marco Santórum Gaibor María Escalante Guevara María Molina Miranda María Montoya Freire María Ormaza Castro María Miranda Garcés Maria Dueñas Romero Mariela Barzallo León Mauricio Verano Merino Maykel Leiva Vázquez Miguel Botto-Tobar Miguel Arcos Argudo Mónica Baquerizo Anastacio Mónica Villavicencio Cabezas Omar S. Gómez Orlando Erazo Moreta Pablo León Paliz Pablo Ordoñez Ordoñez Pablo Palacios Jativa Pablo Saá Portilla Patricia Ludeña González

Universidad Católica de Cuenca, Ecuador Università della Calabria, Italy Universidad de Guayaquil, Ecuador Nanyang Technological University, Singapur Kansas State University, USA Universidad de Sevilla, Spain Universitat Politècnica de València, Spain Universidad de Especialidades Espíritu Santo, Ecuador Universitat Politècnica de Catalunya, Spain University of Queensland, Australia Universidad Politécnica de Madrid, Spain Vinnitsa National University, Ukraine Universidad de Sevilla, Spain Rijksuniversiteit Groningen, The Netherlands Universidad Politécnica de Madrid, Spain Escuela Politécnica Nacional, Ecuador/Université Catholique de Louvain, Belgium University of Michigan, USA Universidad Politécnica de Madrid, Spain Aalto University, Finland University of Southampton, UK University of Leeds, UK RMIT University, Australia University of Edinburgh, UK Eindhoven University of Technology, The Netherlands Universidad de Guayaquil, Ecuador. Eindhoven University of Technology, The Netherlands Universidad Politécnica de Madrid, Spain Universidad Complutense de Madrid, Spain Université du Quebec À Montréal, Canada Escuela Superior Politécnica del Chimborazo (ESPOCH), Ecuador Universidad de Chile, Chile/Universidad Técnica Estatal de Quevedo, Ecuador Université de Neuchâtel, Switzerland Universidad Politécnica de Madrid, Spain Universidad de Chile, Chile University of Melbourne, Australia Politecnico di Milano, Italy


Paulina Morillo Alcívar Rafael Campuzano Ayala Rafael Jiménez Ramiro Santacruz Ochoa Richard Ramírez Anormaliza

Roberto Larrea Luzuriaga Roberto Sánchez Albán Rodrigo Saraguro Bravo Rodrigo Cueva Rueda Rodrigo Tufiño Cárdenas

Samanta Cueva Carrión Sergio Montes León Tania Palacios Crespo Tony Flores Pulgar Vanessa Echeverría Barzola Vanessa Jurado Vite Verónica Yépez Reyes Victor Hugo Rea Sánchez Voltaire Bazurto Blacio Washington Velásquez Vargas Wayner Bustamante Granda Wellington Cabrera Arévalo Xavier Merino Miño Yan Pacheco Mafla Yessenia Cabrera Maldonado Yuliana Jiménez Gaona


Universitat Politècnica de València, Spain Grenoble Institute of Technology, France Escuela Politécnica del Litoral (ESPOL), Ecuador Universidad Nacional de La Plata, Argentina Universidad Estatal de Milagro, Ecuador/Universitat Politècnica de Catalunya, Spain Universitat Politècnica de València, Spain Université de Lausanne, Switzerland Escuela Superior Politécnica del Litoral (ESPOL), Ecuador Universitat Politècnica de Catalunya, Spain Universidad Politécnica Salesiana, Ecuador/Universidad Politécnica de Madrid, Spain Universidad Politécnica de Madrid, Spain Universidad Rey Juan Carlos Madrid, Spain University College London, UK Université de Lyon, France Université Catholique de Louvain, Belgium Universidad Politécnica Salesiana, Ecuador South Danish University, Denmark Universidad Estatal de Milagro, Ecuador University of Victoria, Canada Universidad Politécnica de Madrid, Spain Universidad de Palermo, Argentina University of Houston, USA Instituto Tecnológico y de Estudios Superiores Monterrey, Mexico Royal Institute of Technology, Sweden Pontificia Universidad Católica de Chile, Chile Università di Bologna, Italy

References

1. Botto-Tobar, M., León-Acurio, J., Cadena, A. D., Díaz, P. M.: Preface, vol. 1067 (2020)
2. Botto-Tobar, M., Gómez, O. S., Miranda, R. R., Cadena, Á. D.: Preface, vol. 1302 (2021)


Organizing Institutions

Contents

Artificial Intelligence

Recognition and Classification of Cardiac Arrhythmias Using Discrete Wavelet Transform (DWT) and Machine Learning Techniques . . . 3
Hermes Andrés Ayala-Cucas, Edison Alexander Mora-Piscal, Dagoberto Mayorca-Torres, Alejandro José León-Salas, and Diego Hernán Peluffo-Ordoñez

Artificial Firefly Meta-heuristic Used for the Optimization of a Fractional PID on an ARM Platform . . . 16
William Montalvo, Roger Catota Ñacata, and Sebastián Layedra Guayta

A Convolutional Neural Network-Based Web Prototype to Support COVID-19 Detection Using Chest X-rays . . . 28
Mauro Rosas-Lara, Julio C. Mendoza-Tello, Diana C. López-Olives, and Andrea P. Robles-Loján

Chatbots and Its Impact on the Information Support Service for Students of the Faculty of Computer Science of the Technical University of Manabí . . . 43
Marco Giler, Emilio Cedeño, Walter Zambrano, Michellc Zambrano, and David Zambrano

Neural Networks on Noninvasive Electrocardiographic Imaging Reconstructions: Preliminary Results . . . 55
Dagoberto Mayorca-Torres, Alejandro José León-Salas, and Diego Hernán Peluffo-Ordoñez

Socio-spatial Segregation Using Computational Algorithms: Case Study in Ambato, Ecuador . . . 64
Manuel Ayala-Chauvin, Paola Maigua, Andrea Medina-Enríquez, and Jorge Buele

Expert System with Facial Recognition Implemented in Human-Machine Conversation Services for the Automation of Multi-platform Remote Processes in the Identification of People Reported Missing . . . 76
Jefferson Panchi-Chacón, Cindy Ortiz-Araujo, and Milton Patricio Navas-Moya


Communications

Radar Probability of Detection in Multipath Environments . . . 91
Juan Minango, Andrea Flores, Marcelo Zambrano, Wladimir Paredes Parada, and Cristian Tasiguano

Low Detection Symbol Algorithm for MIMO Systems with Big Number of Antennas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 Juan Minango, Marcelo Zambrano, Wladimir Paredes Parada, and Cristian Tasiguano Deployable Networks. An Alternative for Communications in Critical Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115 Marcelo Zambrano, Ana Zambrano, Juan Minango, and Edgar Maya Application for the Study of Underwater Wireless Sensor Networks: Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Fabián Cuzme-Rodríguez, Angel Velasco-Suárez, Mauricio Domínguez-Limaico, Luis Suárez-Zambrano, Henry Farinango-Endara, and Mario Mediavilla-Valverde e-Learning Learning Performance Indicators a Statistical Analysis on the Subject of Natural Sciences During the COVID-19 Pandemic at the Tulcán District . . . . 139 Marcela Aza-Espinosa, Laura Guerra Torrealba, Erick Herrera-Granda, María Aza-Espinosa, Marco Burbano-Pulles, and Javier Pozo-Burgos Virtual Physics Learning for Basic Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Carmen Cecilia Ausay, Santiago Alejandro Acurio Maldonado, Daniel Marcelo Acurio Maldonado, Pablo Israel Amancha Proaño, and Francisco Javier Echeverría Tamayo Bibliometric Mapping of Scientific Literature Located in Scopus on Teaching Digital Competence in Higher Education . . . . . . . . . . . . . . . . . . . . . . 167 Andrés Cisneros-Barahona, Luis Marqués Molías, Gonzalo Samaniego Erazo, María Isabel Uvidia-Fassler, and Gabriela de la Cruz-Fernández Authentic Evaluation for the Improvement of the Argumentative Written Essay in Virtual University Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181 Soratna Verónica Navas, Eduardo Jesús Garcés, María Cristina Pecho Rivera, Frank Luis Guanipa, and Felix Colina Ysea


Computational Thinking as Instrument to Evaluate Student Difficulties in Higher Education: Before and During Pandemic Analysis . . . . . . . . . . . . . . . . . 193 Ana-Lucía Pérez-Suasnavas, Bayardo Salgado-Proaño, Karina Cela, and Jorge L. Santamaría Systematization of Playful Teaching Using Games Aimed at Teachers and Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208 Albornoz Karina, Jurado Merlis, and Maldonado Michelle Design of a Predictive Model to Evaluate Academic Risk Using Data Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221 Shirley Alarcón-Loza, Diana Calderón-Onofre, Karen Mite-Baidal, and Mishel Macías-Plúas Hope Project: Augmented Reality to Teach Dance to Children with ASD . . . . . . 236 Mónica R. Romero, Estela M. Macas, Nancy Armijos, and Ivana Harari An Approach to Scientific Research for the Continuous Improvement of Scientific Production in Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 Segundo Moisés Toapanta Toapanta, Marcelo Zambrano, Wladimir Paredes Parada, María José Rivera Gutierrez, Luis Enrique Mafla Gallegos, María Mercedes Baño Hifóng, Ma. Roció Maciel Arellano, and José Antonio Orizaga Trejo Professional Skills for the Administration Career with a Higher Technological Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262 Rodríguez Edison, María Del Carmen Estevez, Luz Rodríguez-Cisneros, Galárraga Nuria, and Narvaez Hugo Gamification as a Methodological Strategy and Its Impact on the Academic Performance of University Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272 Renato Coello-Contreras Impact of Blended-Learning on Higher Education and English Language . . . . . . 287 Mishell Angulo-Alvarez, Viviana Nagua-Andrango, Carmen Nato-Sierra, Enrique Rosero-Olalla, and Carlos Ruiz-Guangaje Augmented Reality Application with Multimedia Content to Support Primary Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299 Jorge Buele, John Espinoza, Belén Ruales, Valeria Maricruz Camino-Morejón, and Manuel Ayala-Chauvin Storytelling as a Motivational Resource in the Therapy of Childhood Cancer . . . 311 Mónica Liliana Castro Pacheco, Mateo Calle Loja, and Marco Segarra Chalco


Presyllabic Method to Correct Dysorthography in Elementary School Students . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325 Kate Lizbeth Pazmiño, Editha Jael Guerrero, Franklin Daniel Aguilar E., Paulina Renata Arellano G., and Fernando Garcés Cobos State of ICTs as Support for the Educational Process in the Andean Region . . . . 337 Wladimir Paredes-Parada, Christian Del Pozo, Silvia Elizabeth García González, and Franz Del Pozo AT for Engineering Applications Computerized Planning of Surface Ratios in a Milk Extraction Plant . . . . . . . . . . 349 Alexis Suárez del Villar, Ana Álvarez Sánchez, and Alexander Ricardo Galarza Tipantuña Methodological Proposal for Micro-enterprises Through a Mathematical Statistical Model Based on Integral Logistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361 Marcelo Javier Mancheno-Saá, Jenny Margoth Gamboa-Salinas, and Jacqueline del Pilar Hurtado-Yugcha Material Selection for a Biomass Heat Exchange Multicriteria Decision Methods: Study Case on Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374 Juan Francisco Nicolalde, Javier Martínez-Gómez, Ricardo A. Narvaez C., Daniel Rivadeneira, Boris German, Michelle Romero, Cristhian M. Velalcázar Rhea, P. Cuji, Danny F. Sinche Arias, Carlos A. Méndez Durazno, and E. Catalina Vallejo-Coral Design and Simulation of an Aircraft Autopilot Control System: Longitudinal Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Luis A. Coello, Fausto A. Jácome, Jonathan R. Zurita, Carlos W. Casa, and Jonathan S. Vélez Management Innovation and Competitive Success in Peruvian Companies of the Manufacturing Sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403 Rina A. Valencia-Durand, Aleixandre Brian Duche-Pérez, Cintya Yadira Vera-Revilla, Olger Albino Gutiérrez-Aguilar, Milena Ketty Jaime-Zavala, and Anthony Medina Rivas Plata Comparative Study of Accounting and Management Perceptions of the Usefulness of Financial Information in Small and Medium-Sized Timber Companies in Colombia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 María del Pilar Corredor García, Natalia Murillo Gallego, and Jasleidy Astrid Prada Segura


Influence of Aqueous Phase of Hydrothermal Carbonization Feeding on Carbon Fixation by Microalgae . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429 Mayra S. Andrade Guerrero, Daysi N. Bayas Moposita, Cristhian M. Velalcázar Rhea, P. Cuji, Danny F. Sinche Arias, Carlos A. Méndez Durazno, and Javier Martínez-Gómez Assessment of the Thermal Behavior in Social Housing in Hot Humid Climate in Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442 E. Catalina Vallejo-Coral, Francis Vásquez-Aza, Luis Godoy-Vaca, Marco Orozco Salcedo, and Javier Martínez-Gómez Implications of Spraying Powder Paint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 455 Paúl Caza, Díaz Rodrigo, Víctor López, Cruz Patricio, and Villarreal Pamela Heat Transfer Adhesion Factor on Metal Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . 468 Paúl Caza, Díaz Rodrigo, Víctor López, and Villarreal Pamela Virtual Laboratory of Electronic Instrumentation Based on a Programming Proposal Focused on Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483 Yngrid J. Melo Q., Andrés E. Castillo R., Enrique I. Valencia V., Edgar A. Bravo D., and Wilson G. Simbaña L. Thermal-Mechanical Properties of Recycled PVC Used in Schrader Valve Caps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497 Jose Vicente Manopanta-Aigaje and Diana Peralta-Zurita Consequence of a Geriatric Psychomotricity Program on the Quality of Life of Older Adults . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 510 Veronica Molina, Nuria Galárraga, Gabriela Enríquez, Rocío Duque, and Ismenia Araujo Laboratory-Scale Determination of the Influence of Temperature, Time, and Mordant on the Tensile Strength and Elongation of Abaca Yarn Dyed with Marco Extract (Ambrosia Peruviana) Subjected to Seawater . . . . . . . . . . . . . 524 Elsa Sulay Mora Muñoz, Elvis Ramírez, and Omar Lara Castro Cryptocurrencies Towards Financial Innovation in the Microenterprise Sector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 535 Jessica Quispe, Cesar Segovia, Rubén Jaramillo, and Darwin Arias 3D Modelling of Freedom Summit for Virtual Environments . . . . . . . . . . . . . . . . . 548 Aguas Luis, Suárez Lizbeth, Coral Rosario, and Machay Byron


Model of Technological Competencies as Determinants of Innovation: A Comparative Intersectoral Study in Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 561 Claudio Arcos and Adrian Padilla Micro-enterprise Management Towards Scenario Building for Decision Making . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 575 Paula Flores, Estefani Segura, Rubén Jaramillo, Luis Ulcuango, and Lizbeth Suárez Analysis of Business Efficiency Considering the Influence of the Particular Events on Sales Increase Period 2016–2020 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 585 Ximena Elizabeth Cayambe Badillo, Willman Leonel Bravo Espinoza, Luis Alberto Carrera Toro, and Hilberth Alexis Villalba Bejarano Income from Ordinary Activities and Its Tax Impact on Companies in the Automotive Sector in Ecuador . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600 Aníbal Altamirano Salazar, Carla Valdiviezo Morales, Ramiro Pastás Gutiérrez, and Lenin Altamirano Gallegos Security Proof of Concepts of Corda Blockchain Technology Applied on the Supply Chain Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 619 Juan Minango, Marcelo Zambrano, Wladimir Paredes Parada, Cristian Tasiguano, and Maria Jose Rivera Spread Virus: Usability Evaluation on a Mobile Augmented Reality Videogame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632 Alvaro Poma, Piero Aldaves, and Luis Canaval Security Mechanisms and Log Correlation Systems . . . . . . . . . . . . . . . . . . . . . . . . 644 Andy Mora-Cruzatty, Jose Villacreses-Chancay, Cesar Moreira-Zambrano, Josselyn Pita-Valencia, and Leonardo Chancay-García Technology Trends Identification of Corn Leaves Diseases Images Using MobileNet Architecture in SmartPhones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 661 Juan Minango, Marcelo Zambrano, Wladimir Paredes Parada, Cristian Tasiguano, and Karla Ayala


Experimental Assessment of Photovoltaic Systems Using One-Axis Tracking and Positioning Strategies in Equatorial Regions . . . . . . . . . . . . . . . . . . . 674 Cristian Alarcón, Carlos López, Cristian Tasiguano Pozo, and Freddy Ordóñez Evaluation of the Reliability of a LiDAR Sensor Through a Geometric Model in Applications to Autonomous Driving . . . . . . . . . . . . . . . . . . . . . . . . . . . . 688 Danny J. Zea, Alex P. Toapanta, César A. Minaya, Carlos A. Paspuel, and Irlanda E. Moreno Design of Optimal Controllers Applying Reinforcement Learning on an Inverted Pendulum Using Co-simulation NX/Simulink . . . . . . . . . . . . . . . . 706 Henry Díaz-Iza, Karla Negrete, and Jenyffer Yépez Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 719

Artificial Intelligence

Recognition and Classification of Cardiac Arrhythmias Using Discrete Wavelet Transform (DWT) and Machine Learning Techniques

Hermes Andrés Ayala-Cucas 1, Edison Alexander Mora-Piscal 1, Dagoberto Mayorca-Torres 1,2 (B), Alejandro José León-Salas 2, and Diego Hernán Peluffo-Ordoñez 3

1 Grupo de investigación de Ingeniería Mecatrónica, Universidad Mariana, Pasto, Colombia
{hayala,dmayorca}@umariana.edu.co
2 Departamento de Lenguajes y Sistemas Informáticos, Universidad de Granada, C/Periodista Daniel Saucedo Aranda s/n, 18071 Granada, Spain
3 Modeling, Simulation and Data Analysis (MSDA) Research Program, Mohammed VI Polytechnic University, Ben Guerir, Morocco

Abstract. Cardiac arrhythmias are heart rhythm problems that usually occur when the electrical impulses that coordinate the heartbeat do not work correctly. For this reason, detecting abnormalities in an electrocardiogram (ECG) plays a vital role in patient follow-up. Due to the presence of noise, the irregularity of the heartbeat, and the non-stationary nature of ECG signals, their interpretation can be difficult, requiring advanced computer systems to support the diagnosis of cardiac disorders. The development of assisted ECG analysis systems is therefore a current topic of study, and the main challenge is to achieve accuracy adequate for application in the clinical setting. This article describes a software tool for classifying ECG samples into the main classes of cardiac arrhythmias. Noise is removed from the ECG signal in the preprocessing stage using conventional digital filters. Because the location of the QRS complex is essential for identifying the ECG signal, the position and amplitude of the R peaks are determined in the segmentation stage. The most relevant features of the ECG signal are then selected using the discrete wavelet transform (DWT). The ability of the extracted features to differentiate between the different classes of data is tested using machine learning techniques such as k-Nearest Neighbors, Neural Networks, and Decision Trees with 10-fold cross-validation. These methods are evaluated and tested on the MIT-BIH arrhythmia database, achieving a best accuracy of 98.54% with the k-Nearest Neighbors classifier.

Keywords: Electrocardiogram (ECG) · Cardiac arrhythmia · Feature extraction · Machine learning · Performance measures

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 3–15, 2023. https://doi.org/10.1007/978-3-031-25942-5_1

1 Introduction

Cardiovascular diseases are the leading cause of death in the world; around 17.5 million people die from these heart disorders, which represents 31% of deaths worldwide [2]. In general, the increase in the mortality rate from cardiovascular disease is related to unhealthy lifestyles, such as lack of physical activity, poor diet, and smoking. The main factors in these cardiovascular conditions are cardiac arrhythmias, which are disturbances or disorders that alter the normal functioning of the heart's electrical activity [10]. Electrical activity monitoring is performed through the electrocardiogram (ECG), a non-invasive method in which the propagation of the electromagnetic wave travelling through the heart is recorded by a series of electrodes placed on the body's surface. From these recordings, it is possible to study, analyze, and identify irregular patterns in the ECG signal. Because the recorded data are so extensive, it is necessary to classify or categorize the different types of beats using advanced computer diagnostic tools. Hence, in recent years, the development of computational systems and the adoption of artificial intelligence techniques to support the detection and diagnosis of cardiovascular diseases have increased, especially since cardiologists must at times analyze extensive records from different patients, which can lead to erroneous diagnoses.

Several researchers have proposed studies with efficient methods for classifying cardiac arrhythmias in the scientific literature. Briefly, Malik et al. [12] propose an optimized classification model for automated cardiac arrhythmia recognition. It first uses the discrete wavelet transform (DWT) to extract the most significant features from the ECG signal, and then evaluates the multidomain features using machine learning techniques such as the support vector machine (SVM) and the Grasshopper optimization algorithm to identify five classes of cardiac arrhythmias. Similarly, Madan et al. [11] use a hybrid deep learning approach for cardiovascular disease detection and classification with various configurations: in a first step, 2D scalogram images are used to reduce signal noise and extract features; the second step combines deep learning models, such as convolutional neural networks (CNN) and long short-term memory (LSTM) networks, to identify abnormal heartbeats. On the other hand, statistical analysis techniques and hybrid feature-based techniques are used for feature selection and extraction. These features can be classified using machine learning techniques such as Decision Trees [19], k-Nearest Neighbor (KNN) classifiers [7], Support Vector Machines (SVM) [20], and Neural Networks (NN) [4], among others.

In arrhythmia classification using machine learning approaches, achieving acceptable accuracy with efficient features is desirable. In this regard, this paper presents a novel technique that uses discrete wavelet transform (DWT) based ECG beat features and efficiently classifies five classes of abnormal ECG beats using machine learning techniques such as k-Nearest Neighbors, neural networks, and decision trees. These techniques are fundamental in this field, as they are becoming an essential method for reliable decision-making through the analysis of large data sets and events. Likewise, they are used to study and analyze the cardiac rhythm problems that the heart presents, to identify the type of arrhythmia that a person may manifest, and to support prevention strategies based on different types of ECG signal processing and characterization.

This paper is structured as follows: Sect. 2 specifies the methodological design and the stages of database selection, data processing, segmentation, feature extraction, and classification. Section 3 describes the results and discussion, and Sect. 4 presents the conclusions and future work. One of the significant contributions of this study is the development of a methodology based on conventional methods.

2 Methodology

The methodological scheme of the proposed method for detecting and classifying cardiac arrhythmias is presented in Fig. 1. The following subsections specify the database to be used, the stages of preprocessing, segmentation, feature extraction, and classification.

Fig. 1. Methodological scheme for the detection of cardiac arrhythmias.

2.1 MIT-BIH Arrhythmia Database

The dataset selected for this study corresponds to the MIT-BIH cardiac arrhythmia database [6]. The database contains two-channel ambulatory ECG records from 47 individuals analyzed by the BIH Arrhythmia Laboratory between 1975 and 1979. Forty-eight half-hour recordings were collected using analog Holter equipment with a sampling rate of 360 Hz and 11-bit resolution over a range of 10 mV. The database includes about 110,000 records, with five classes of arrhythmias: Non-ectopic beats (N), Fusion beats (F), Supraventricular ectopic beats (S), Ventricular ectopic beats (V), and Unknown beats (U). Cardiologists performed the classification and annotation of the recordings of each heartbeat. Table 1 shows the specifications of the different beats from the MIT-BIH database and summarizes the five types of ECG beat samples used in this study.

Table 1. MIT-BIH arrhythmia beat classification according to the ANSI/AAMI EC57:1998 standard.

AAMI class | MIT-BIH annotation | Type | No. of beats
Non-ectopic beat (N) | N | Normal beat | 90,604
 | L | Left bundle branch block |
 | R | Right bundle branch block |
 | j | Nodal (junctional) escape |
 | e | Atrial escape beat |
Supra-ventricular ectopic beat (S) | A | Aberrated atrial premature | 2,781
 | a | Atrial premature |
 | S | Supraventricular premature |
 | J | Nodal (junctional) premature |
Ventricular ectopic beat (V) | V | Ventricular escape | 7,235
 | E | Premature ventricular contraction |
Fusion beat (F) | F | Fusion of ventricular and normal | 802
Unknown beat (U) | U | Unclassifiable | 8,041
 | p | Paced |
 | f | Fusion of paced and normal |
Total number of beats | | | 109,463

2.2 Preprocessing

ECG signals obtained from the MIT-BIH arrhythmia database are affected by different types of noise, classified as power-line interference (60 Hz), baseline wander, muscle artifacts, and disturbances generated by various electrical equipment. Eliminating these types of noise is fundamental for the later identification and selection of the most relevant parameters of the signal; if the ECG signal being analyzed contains a high noise level, the identification task becomes even more difficult. Accordingly, the preprocessing stage first removes the baseline noise and then filters the ECG signal. For these purposes, a digital Butterworth filter configured as a third-order high-pass filter with a cutoff frequency of 1 Hz has been implemented to remove the low-frequency noise caused by baseline deviation. In the next phase, the ECG signal is filtered using a Savitzky-Golay (SG) polynomial digital filter with a window length between 5 and 21, according to [9], to reduce the low- and high-frequency interferences present in the whole signal. Figure 2 shows the original ECG signal in the presence of noise and the result of the filtered signal.


Fig. 2. Removal of noise from ECG signal (Non-ectopic beat).
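As a minimal illustration of this preprocessing chain (not the authors' exact implementation), the Python sketch below applies a third-order Butterworth high-pass filter at 1 Hz followed by a Savitzky-Golay smoother to a trace sampled at 360 Hz; the synthetic signal and the window length of 15 are assumptions for demonstration only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

FS = 360  # MIT-BIH sampling rate (Hz)

# Synthetic stand-in for one MIT-BIH channel: a beat-like oscillation plus
# baseline wander (0.3 Hz) and high-frequency noise.
t = np.arange(0, 10, 1 / FS)
ecg = 0.8 * np.sin(2 * np.pi * 1.2 * t) ** 3                       # crude beat-like component
ecg += 0.4 * np.sin(2 * np.pi * 0.3 * t)                           # baseline wander
ecg += 0.05 * np.random.default_rng(0).standard_normal(t.size)     # broadband noise

# 1) Third-order Butterworth high-pass at 1 Hz removes the baseline wander.
b, a = butter(N=3, Wn=1.0, btype="highpass", fs=FS)
ecg_hp = filtfilt(b, a, ecg)

# 2) Savitzky-Golay smoothing; the paper reports window lengths between 5 and 21.
ecg_clean = savgol_filter(ecg_hp, window_length=15, polyorder=3)
```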

2.3 ECG Beat Segmentation

In this stage, the location of the QRS complex and its landmarks is essential to perform beat segmentation of the ECG signal. Specifically, the QRS complex describes the depolarization originating from the contraction of the ventricles [10], and it is a fundamental part of the ECG signal that must be analyzed and evaluated. In this regard, the annotations provided by the MIT-BIH dataset are used for QRS complex localization. The developed algorithm detects the position of each R peak of the ECG signal (the midpoint of the QRS complex), and a fixed-width window of 200 samples is generated around it, with 99 samples on the left side and 100 on the right side. The aim is to divide the ECG signal into beat-like fragments of regular width. The segmentation results are shown in Fig. 3.
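A minimal sketch of this windowing step is shown below; the R-peak indices are assumed to come from the MIT-BIH annotations, and the function simply cuts 200-sample segments around each peak.

```python
import numpy as np

def segment_beats(ecg_clean, r_peaks, left=99, right=100):
    """Cut fixed-width beat segments around each annotated R peak.

    ecg_clean : 1-D filtered ECG signal
    r_peaks   : sample indices of the R peaks (e.g., from the MIT-BIH annotations)
    Returns an array of shape (n_beats, left + 1 + right) = (n_beats, 200).
    """
    beats = []
    for r in r_peaks:
        if r - left >= 0 and r + right + 1 <= len(ecg_clean):
            beats.append(ecg_clean[r - left : r + right + 1])
    return np.asarray(beats)
```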

2.4 Feature Extraction

Fig. 3. 200-sample segment of the ECG signal (Non-ectopic beat).

At this stage, the most crucial step is appropriately selecting the most relevant features of the beat segments created in the previous stage in order to obtain an acceptable classification performance [15, 18]. Based on the above, the discrete wavelet transform (DWT) is used for feature extraction. This technique decomposes a signal using approximate versions of several wavelet families, such as Haar, Daubechies, and Symlet [3]. The choice of wavelet type depends on the kind of signal to be studied and on the information to be obtained from it. For this purpose, signal features are extracted using the Daubechies wavelet of order 6 (db6), because its morphological structure is similar to that of an electrocardiogram signal [17]. The signal decomposition is performed up to eight levels, obtaining a series of coefficients from which the feature space is generated. The decomposed signal is made up of a series of detail coefficients (D1 to D8) and an approximation coefficient (A8). From each detail subband, three statistical characteristics are calculated: maximum, minimum, and variance, collecting a total of 24 features per beat, which are fed into the supervised machine learning classifiers.
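A minimal PyWavelets sketch of this step is given below; it assumes the 200-sample beat segments produced above. Note that pywt will warn that an 8-level db6 decomposition of such a short segment is dominated by boundary effects; the level is kept at 8 only to mirror the description in the paper.

```python
import numpy as np
import pywt

def dwt_features(beat, wavelet="db6", level=8):
    """24 features per beat: max, min and variance of the detail subbands D1..D8."""
    coeffs = pywt.wavedec(beat, wavelet, level=level)   # [A8, D8, D7, ..., D1]
    details = coeffs[1:]                                # keep only the detail subbands
    feats = []
    for d in details:
        feats.extend([np.max(d), np.min(d), np.var(d)])
    return np.asarray(feats)                            # shape (24,)

# Demonstration on a placeholder 200-sample beat segment.
demo_beat = np.random.default_rng(0).standard_normal(200)
print(dwt_features(demo_beat).shape)                    # (24,)
```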

2.5 Classification

The features extracted in the previous step are evaluated by supervised machine learning algorithms to check which classifier is the most suitable for the task at hand. For comparative purposes, three different classification algorithms were tested.

K-Nearest Neighbor Classifier (KNN). It is considered one of the nonparametric, instance-based classification algorithms due to its simplicity and versatility. Classification is performed based on the information provided at training time to determine that an element x belongs to a class C [22]. The algorithm selects the closest data points in the learning dataset, its "nearest neighbors", to perform the classification. The nearest neighbors are those most similar to the new case, and the number of neighbors to be analyzed is specified by k.

Neural Network Classifier (NN). Neural networks are widely used in various applications, such as parameter identification and classification tasks. In the structure of a neural network model, each neuron performs the weighted sum of its inputs and passes the result through a nonlinear activation function [5]. The performance of this type of classifier depends on the number of input neurons, the number of hidden layers, the number of output neurons, and the activation function.

Decision Tree Classifier. Decision trees are among the most widely used methods for classification and regression. In general, they trace all possible paths, taking into account the importance of each attribute, and use recursive partitioning to classify the data [14]. When constructing a decision tree, it is crucial to determine which attribute is the best or most predictive for splitting the data. Decision trees are constructed by dividing the training set into different nodes, where each node contains most of one data category.

In this study, a search for the best parameters for each of the classifiers was performed using hyperparameter optimization. Table 2 shows the parameters that gave the best results for the evaluation metrics used, which are presented later in the experimental design.

Table 2. Parameters used in each classifier.

Classifier | Parameters
K-nearest neighbor | Number of neighbors = 4; Distance metric = Mahalanobis; Distance weight = squared inverse; Standardize data = No
Neural network | Number of fully connected layers = 1; First layer size = 80; Activation = ReLU; Iteration limit = 1000; Regularization strength = 0; Standardize data = Yes
Decision tree | SplitCriterion = 'deviance'; MaxNumSplits = 794; Surrogate = off
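As an illustrative (non-authoritative) counterpart of the KNN configuration in Table 2, the sketch below uses scikit-learn with the Mahalanobis distance, squared-inverse distance weighting, k = 4, and 10-fold cross-validation; the random feature matrix merely stands in for the 24 DWT features per beat and the five AAMI labels.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 24))          # placeholder for the DWT feature matrix
y = rng.integers(0, 5, size=1000)            # placeholder for the 5 AAMI classes

# The Mahalanobis distance needs the inverse covariance of the training features.
VI = np.linalg.inv(np.cov(X, rowvar=False))

knn = KNeighborsClassifier(
    n_neighbors=4,
    metric="mahalanobis",
    metric_params={"VI": VI},
    algorithm="brute",                        # brute-force search supports this metric
    weights=lambda d: 1.0 / (d ** 2 + 1e-12), # 'squared inverse' distance weighting
)

scores = cross_val_score(knn, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```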

2.6 Experimental Design

This section describes the design of experiments and the metrics used to evaluate the performance of the developed algorithms. The MIT-BIH database contains a total of 110,000 records with 5 types of arrhythmias (N, S, V, F, U). A total of 3 experiments, combining a filtering technique (SG), a feature extraction technique (DWT), and three machine learning algorithms (KNN, NN, TREE), were applied for irregular beat recognition. The evaluation of the classification models is performed using 10-fold random cross-validation and performance metrics such as Accuracy, Precision, Sensitivity, and F1-Score, defined from four parameters as shown in Eqs. (1)-(4):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)

Precision = TP / (TP + FP)    (2)

Sensitivity = TP / (TP + FN)    (3)

F1-Score = TP / (TP + 0.5 (FP + FN))    (4)

where TP corresponds to beats correctly assigned to a class, FP to beats incorrectly assigned to that class, TN to beats correctly rejected, and FN to beats of the class that were missed by the classifier.
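For illustration, the following sketch computes Eqs. (1)-(4) for one class treated as positive against all others, given vectors of true and predicted labels; the toy labels are placeholders.

```python
import numpy as np

def per_class_metrics(y_true, y_pred, positive_class):
    """Accuracy, precision, sensitivity and F1 (Eqs. 1-4) for one class
    treated as 'positive' against all the others."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive_class) & (y_true == positive_class))
    fp = np.sum((y_pred == positive_class) & (y_true != positive_class))
    fn = np.sum((y_pred != positive_class) & (y_true == positive_class))
    tn = np.sum((y_pred != positive_class) & (y_true != positive_class))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "f1": tp / (tp + 0.5 * (fp + fn)),
    }

# Example with the five AAMI labels:
print(per_class_metrics(["N", "V", "N", "S"], ["N", "V", "V", "S"], "V"))
```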

3 Results

Table 3 shows the results of the 3 experiments, considering the filtering, feature extraction, and classification stages.

Table 3. Classification results with three different classifiers.

Filtering | Features | Classifier | Accuracy | Precision | Sensitivity | F1-Score
SG | DWT | KNN | 98.54% | 98.64% | 99.59% | 99.12%
SG | DWT | NN | 98.34% | 98.72% | 99.30% | 99.01%
SG | DWT | TREE | 97.09% | 97.52% | 98.96% | 98.23%

To contrast the classification results obtained in Table 3, Fig. 4 shows the box plots of the accuracies of the three classifiers used in this study with 10-fold cross-validation.

Fig. 4. Accuracy box plot for three different classifiers.

As seen in Fig. 4, the KNN classifier is the best performer in terms of accuracy, with an average of 98.54%, a precision of 98.64%, a sensitivity of 99.59%, and an F1 score of 99.12%. The second-best classifier is the NN algorithm, which achieves an average accuracy, precision, sensitivity, and F1 score of 98.34%, 98.72%, 99.30%, and 99.01%, respectively. It should be noted that, unlike other studies, the metrics are calculated considering the five classes of arrhythmias (N, S, V, F, U). Similarly, it is essential to consider the confusion matrix, which describes the performance of the trained model on a previously established test data set whose true values are already known. The following graphs show the analysis of the five classes of arrhythmias, with the predicted class of the ECG beat on the X-axis and the true class on the Y-axis. Figure 5 shows the confusion matrices obtained with KNN and NN.

Fig. 5. Confusion matrix.

3.1 Discussions

To validate the results for the configurations proposed in this study, we identified studies that have applied similar approaches and techniques to the MIT-BIH database. Table 4 lists these studies with their respective feature extraction techniques, classification algorithms, and evaluation metrics. Kamath [8] employs a method that collects time and frequency domain features with Teager energy function approaches, achieving 95.00% classification accuracy in recognizing five types of cardiac arrhythmias with a neural network (NN) classifier. Similarly, Ayar et al. [1] applied a feature extraction approach based on Genetic Algorithms (GA) together with the Random Forest (RF) algorithm of the decision tree family, achieving an accuracy of 97.98%. Ramkumar et al. [16] presented a detection system that combined the discrete wavelet transform (DWT) with independent component analysis (ICA) and a multilayer perceptron neural network classifier (MLP NN) to classify three classes of arrhythmias, obtaining an overall accuracy of 96.14%. Yang et al. [23] evaluate parametric features and visual patterns of the ECG signal morphology to train the KNN algorithm and achieve an overall accuracy of 97.70%. In addition, other approaches, such as deep learning through convolutional neural networks (CNN) combined with the DWT, have been used to detect five classes of cardiac arrhythmias, reaching an average accuracy of 97.20% [21]. Likewise, Mazidi et al. [13] used TQWT-based feature extraction and statistical features to classify the ECG signal into normal and abnormal classes with a KNN classifier, achieving an accuracy of 97.81%.

Considering the above, we can establish that the results obtained in this study are better than those of other studies on the same database; in other words, the performance of cardiac arrhythmia classification methods is improved. In addition, two main approaches were considered in our study. First, all patient records from the MIT-BIH arrhythmia database were adopted in order to obtain high-fidelity cardiac arrhythmia identification for both large and small input ECG samples; second, we aim to provide a reliable medical diagnostic method that detects all classes of cardiac arrhythmias recommended by the ANSI/AAMI EC57:1998/(R) standard. The validation of this work is based on these two criteria, allowing the analysis of a more practical and reliable system for diagnosing an actual event, as patients present different types of morphologies. Furthermore, the comparison results presented in Table 4 allow us to establish that our proposed network configurations can serve as an efficient and high-fidelity tool for application in medical fields to diagnose heart disease.


Table 4. Comparison of the classification performance of the proposed method with other study methods.

Literature | Year | Features | Classifier | Classes | Accuracy
Kamath [8] | 2011 | Teager features | NN | 5 | 95.00%
Ayar et al. [1] | 2018 | GA + DT | TREE | 6 | 97.98%
Ramkumar et al. [16] | 2020 | DWT + ICA | MLP NN | 3 | 96.50%
Yang et al. [23] | 2020 | Parameter + visual pattern of ECG morphology | KNN | 15 | 97.70%
Wu et al. [21] | 2021 | DWT | CNN | 5 | 97.20%
Mazidi et al. [13] | 2022 | TQWT + statistical features | KNN | 2 | 97.81%
Our method | 2022 | DWT | KNN | 5 | 98.54%
Our method | 2022 | DWT | NN | 5 | 98.34%

4 Conclusions

Electrocardiographic recordings of the ECG signal are widely used for diagnosing cardiovascular diseases, as they present the electrical activity of the heart rhythm and provide information about the different conditions of the heart. In the present work, the performance of the classification system is evaluated by analyzing five classes of cardiac arrhythmias using a combination of digital filters, feature extraction methods (DWT), and three different classifiers (KNN, NN, and decision tree). The proposed method provided acceptable and easy-to-interpret accuracy, precision, sensitivity, and F1-score values.

Considering that the performance of the classification and recognition system is highly dependent on feature extraction, the features may vary depending on the application, as they may be time- or frequency-domain features. In addition, techniques for feature dimensionality reduction can be used; however, dimensionality reduction depends on which features are to be removed and which are to be retained to speed up the classification process. Each experiment allows us to improve the methods used and to propose several alternatives at all stages of the modeling process. In future work, other filtering, feature extraction, and classification techniques, such as deep learning based on the decomposition of the signal into ECG spectrum images, could be used to diagnose heart disease and help the medical community diagnose arrhythmias with high reliability and accuracy.

Acknowledgments. The authors would like to acknowledge the valuable support given by the SDAS Research Group (https://sdas-group.com/).


References

1. Ayar, M., Sabamoniri, S.: An ECG-based feature selection and heartbeat classification model using a hybrid heuristic algorithm. Inform. Med. Unlock. 13, 167–175 (2018). https://doi.org/10.1016/j.imu.2018.06.002
2. Benjamin, E.J., Virani, S.S., Callaway, C.W.: Heart disease and stroke statistics - 2018 update: a report from the American Heart Association. Circulation 137 (2018). https://doi.org/10.1161/CIR.0000000000000558
3. Bodile, R.M., Talari, V.K.H.R.: Removal of power-line interference from ECG using decomposition methodologies and Kalman filter framework: a comparative study. Traitement du Signal 38(3), 875–881 (2021). https://doi.org/10.18280/ts.380334
4. Chen, K.C.J., Chien, P.C., Gao, Z.J., Wu, C.H.: A fast ECG diagnosis by using non-uniform spectral analysis and the artificial neural network. Assoc. Comput. Mach. 2(3), 1–21 (2021). https://doi.org/10.1145/3453174
5. Elhaj, F.A., Salim, N., Harris, A.R., Swee, T.T., Ahmed, T.: Arrhythmia recognition and classification using combined linear and nonlinear features of ECG signals. Comput. Methods Programs Biomed. 127, 52–63 (2016). https://doi.org/10.1016/j.cmpb.2015.12.024
6. Goldberger, A., et al.: PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation 101(23), 215–220 (2000). https://www.physionet.org/content/mitdb/1.0.0/
7. Gupta, V., Mittal, M., Mittal, V., Sharma, A.K., Saxena, N.K.: A novel feature extraction-based ECG signal analysis. Neural Comput. Appl. 102(5), 903–913 (2021). https://doi.org/10.1007/s40031-021-00591-9
8. Kamath, C.: ECG beat classification using features extracted from Teager energy functions in time and frequency domains. IET Signal Process. 5, 575–581 (2011). https://doi.org/10.1049/iet-spr.2010.0138
9. Kumar, C., Kolekar, M.H.: Cardiac arrhythmia classification using tunable Q-wavelet transform based features and support vector machine classifier. Biomed. Signal Process. Control 59, 101875 (2020). https://doi.org/10.1016/j.bspc.2020.101875
10. Luis, F., Moncayo, G.: Cardiovascular Health Book of the San Carlos Clinical Hospital and the BBVA Foundation, 1st edn. Madrid, Spain (2009)
11. Madan, P., Singh, V., Singh, D.P., Diwakar, M.: A hybrid deep learning approach for ECG-based arrhythmia classification. Bioengineering 9(4), 67–492 (2018). https://doi.org/10.3390/bioengineering9040152
12. Malik, G.K., Yatindra, K., Manoj, P.K.: Multiclass arrhythmia classification based on support vector machine optimized by grasshopper optimization algorithm. Indian J. Comput. Sci. Eng. 13(2), 525–535 (2022)
13. Mazidi, M.H., Eshghi, M., Raoufy, M.R.: Premature ventricular contraction (PVC) detection system based on tunable Q-factor wavelet transform. J. Biomed. Phys. Eng. 12(1), 61–74 (2022). https://doi.org/10.31661/jbpe.v0i0.1235
14. Müller, A.C., Guido, S.: Introduction to Machine Learning with Python: A Guide for Data Scientists, 1st edn. (2017)
15. Pozo-Ruiz, S., Morocho-Cayamcela, M.E., Mayorca-Torres, D., Peluffo-Ordóñez, D.H.: Parkinson's disease diagnosis through electroencephalographic signal processing and sub-optimal feature extraction. In: Rocha, Á., Ferrás, C., Méndez Porras, A., Jimenez Delgado, E. (eds.) ICITS 2022. LNNS, vol. 414, pp. 118–127. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-96293-7_12
16. Ramkumar, M., Ganesh Babu, C., Vinoth Kumar, K., Hepsiba, D., Manjunathan, A., Sarath Kumar, R.: ECG cardiac arrhythmias classification using DWT, ICA and MLP neural networks. J. Phys. Conf. Ser. 1831(1), 1–13 (2021). https://doi.org/10.1088/1742-6596/1831/1/012015
17. Ranaware, P.N., Deshpande, R.A.: Detection of arrhythmia based on discrete wavelet transform using artificial neural network and support vector machine. In: International Conference on Communication and Signal Processing, pp. 1767–1770 (2016)
18. Rodriguez-Sotelo, J.L., Peluffo-Ordoñez, D., Cuesta-Frau, D., Castellanos-Domínguez, G.: Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering. Comput. Methods Programs Biomed. 108(1), 250–261 (2012). https://doi.org/10.1016/j.cmpb.2012.04.007
19. Sahoo, S., Subudhi, A., Dash, M., Sabut, S.: Automatic classification of cardiac arrhythmias based on hybrid features and decision tree algorithm. Int. J. Autom. Comput. 17(4), 551–561 (2020). https://doi.org/10.1007/s11633-019-1219-2
20. Sharma, P., Dinkar, S.K., Gupta, D.V.: A novel hybrid deep learning method with cuckoo search algorithm for classification of arrhythmia disease using ECG signals. Neural Comput. Appl. 33(19), 13123–13143 (2021). https://doi.org/10.1007/s00521-021-06005-7
21. Wu, M., Lu, Y., Yang, W., Wong, S.Y.: A study on arrhythmia via ECG signal classification using the convolutional neural network. Front. Comput. Neurosci. 14, 1–10 (2021). https://doi.org/10.3389/fncom.2020.564015
22. Xiang, Y., Lin, Z., Meng, J.: Automatic QRS complex detection using two-level convolutional neural network. BioMed. Eng. 17(1), 1767–1770 (2018). https://doi.org/10.1186/s12938-018-0441-4
23. Yang, H., Wei, Z.: Arrhythmia recognition and classification using combined parametric and visual pattern features of ECG morphology. IEEE Access 8, 47103–47117 (2020). https://doi.org/10.1109/ACCESS.2020.2979256

Artificial Firefly Meta-heuristic Used for the Optimization of a Fractional PID on an ARM Platform

William Montalvo(B), Roger Catota Ñacata, and Sebastián Layedra Guayta

Universidad Politécnica Salesiana, UPS, 170146 Quito, Ecuador
[email protected], {rcatotan,slayedrag}@est.ups.edu.ec

Abstract. Nowadays, DC motors are very useful in industry, and their easy control is one of the reasons. Over time, different control techniques have been implemented to improve the performance of these machines. In the present study, a Fractional Order Proportional Integral Derivative (FOPID) controller is developed, which presents better results in the face of disturbances or changes in the set point than a conventional Proportional Integral Derivative (PID) controller. Since it is a more complex controller, one of its problems is the tuning of its parameters. For this purpose, artificial intelligence techniques such as meta-heuristics can be used, among which one that has not been explored in depth is the Artificial Firefly (AF) algorithm, which uses a Cost Function (CF) that adjusts the optimization based on a performance index, such as the Integral Time Absolute Error (ITAE). The hardware used for the control is an STM32F4 embedded board (with Advanced RISC Machine - ARM technology). A Control Plant Trainer (CPT) containing all the instrumentation for the actual tests is used as the plant. The performance of the FOPID-FA is compared with a conventionally tuned FOPID using the Wilcoxon statistical method, yielding interesting results from the control point of view.

Keywords: Firefly Algorithm (FA) · FOPID · ITAE · Wilcoxon

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 16–27, 2023. https://doi.org/10.1007/978-3-031-25942-5_2

1 Introduction

Direct current (DC) motors are used in many industrial processes, as they transform electrical energy into mechanical energy with a high starting torque and small size; their control characteristics are also accurate and simple. Usually, the control of these processes relies on a classic Proportional, Integral and Derivative (PID) controller, as it provides effective control. Over time, however, many real systems have been modeled with fractional differential equations, since they require more robust and precise control, and their optimal operation calls for a fractional-order controller [1, 2]. The use of fractional-order controllers has given better results in terms of accuracy; in addition, they have proven more efficient in robust control, since they provide 5 tuning parameters, which has positioned them above classical controllers. Solving these controllers is a laborious task that employs numerical methods [3]. To avoid these inconveniences, researchers have adapted several meta-heuristic algorithms that facilitate the tuning of the control parameters and result in more efficient control.

Meta-heuristic algorithms have recently shown effectiveness in solving many optimization problems without requiring extensive details of the underlying problem, which gives them many advantages over several conventional techniques [4]. In 2008, Xin-She Yang designed an algorithm based on firefly behavior, and since then it has been modified, hybridized, and applied to different optimization areas [5, 6]. The FA operates according to the position of each firefly, which represents a specific point in the solution space and is evaluated through a cost function (CF); this value can be seen as the brightness emitted by each firefly, so that the others are attracted depending on the brightness of each one, those of lower brightness being attracted by those of higher brightness [7, 8]. Some of the features of this algorithm are as follows:

• Fireflies do not have sex; only the brightness of each one matters for attraction.
• The attractiveness of each firefly depends on the distance to the others; the brightness decreases as the distance increases.

In recent years, multiple investigations have verified the benefits of using different meta-heuristic algorithms, but most of them have been carried out in simulators, which does not show whether the same results are obtained when they are applied to a real plant. For this reason, the implementation of these algorithms in low-cost plants such as the Control Plant Trainer (CPT) of National Instruments is proposed, to obtain data and results closer to reality. The CPT is equipment that includes sensors and actuators of common instrumentation and control systems, such as temperature, position, DC and AC analog signals, digital signals, pulse trains, and speed, which is the variable used in this study. It consists of a direct current motor controlled by a voltage signal between 0 and 5 V; an encoder of 36 pulses per revolution attached to its shaft measures the speed and outputs a pulse signal [9].

2 Method

The implementation methodology for the FOPID controller tuned using the Firefly Algorithm (FA) is described below.

2.1 Data Acquisition and System Identification

The DC motor speed data are obtained by using the Waijung library that allows the STM32F4 Discovery card to communicate with the Matlab software. Initially, a serial connection is made from the encoder port of the motor to the electronic card for receiving and sending the data that allow the actuator to move. The general scheme can be seen in Fig. 1.


Fig. 1. Wiring diagram used for acquisition, communication and control between Matlab/Simulink, STM32F407 and CPT.

1048 samples were taken with a sampling period of 0.01 s; with the help of the Matlab tool "System Identification", the collected data were filtered to obtain a transfer function (TF) with an 89% fit. The continuous-time transfer function is given in Eq. (1):

C(s) = 26 / (s + 26.01)    (1)

For the simulation part, the continuous TF is used; however, the implementation needs a discrete-time TF, so the Matlab command "c2d" is used to make this conversion, obtaining Eq. (2):

C(z) = 0.2289 / (z − 0.771)    (2)
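The discretization step can be reproduced outside Matlab; the sketch below uses the python-control package (an assumption, not the toolchain of the paper) to build the identified model of Eq. (1) and sample it with a zero-order hold at Ts = 0.01 s, which yields the coefficients of Eq. (2).

```python
import control

# Identified DC-motor speed model, Eq. (1): C(s) = 26 / (s + 26.01)
c_s = control.TransferFunction([26], [1, 26.01])

# Zero-order-hold discretization at the 0.01 s sampling period used for the CPT.
c_z = control.sample_system(c_s, 0.01, method="zoh")
print(c_z)   # approximately 0.2289 / (z - 0.771), matching Eq. (2)
```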

The Fomcon library is used to carry out the continuous and discrete control; in addition, the Waijung toolbox is used, which has the necessary blocks for programming and communication among the STM32F407 card, the CPT, and the computer. The two controllers are modeled in Matlab/Simulink.

2.2 Tuning by Using the FA Algorithm

Equation (3) represents the attraction of the glow of a firefly to another firefly located at a distance r:

β = β0 e^(−γ r²)    (3)

where β0 is the degree of attraction when r = 0. The distance r (or r_ab) between two fireflies a and b at positions x_a and x_b, respectively, is given by Eq. (4):

r_ab = ‖x_a − x_b‖    (4)

The trajectory of the motion of a firefly a towards a brighter firefly b is determined by Eq. (5):

x_a^(t+1) = x_a^t + β0 e^(−γ r_ab²) (x_b^t − x_a^t) + α_t ε_a^t    (5)

where:
x_a^t: current position of the firefly;
β0 e^(−γ r_ab²) (x_b^t − x_a^t): firefly attraction;
α_t ε_a^t: introduces randomization in the motion.

The randomization parameter α_t is adjusted depending on the number of iterations and is given by α_t = α0 δ^t, where 0 < δ < 1, and ε_a^t is a vector of random numbers drawn from the uniform distribution at each iteration. However, if β0 = 0 and γ = 0, there is no attraction-based variation and the method reduces to a variant of particle swarm optimization [10]. The integral of time multiplied by the absolute error (ITAE) is used as the performance index and cost function to evaluate the controller and to update the luminescence intensity of each firefly during the iterations of the algorithm. Equation (6) gives the general mathematical form of the ITAE [11]:

ITAE = ∫₀^∞ t |e(t)| dt    (6)
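For illustration only (this is not the authors' Matlab script), the following Python sketch implements the update rule of Eqs. (3)-(5) with an ITAE cost of the form of Eq. (6), computed from a simulated closed-loop step response using the python-control package. An integer-order PID stands in for the FOPID, since a fractional-order term would additionally require an approximation such as an Oustaloup filter; the gain bounds are illustrative, and the population size is reduced with respect to Table 1 to keep the run short.

```python
import numpy as np
import control

PLANT = control.TransferFunction([26], [1, 26.01])       # identified model, Eq. (1)

def itae_cost(gains, t_end=2.0):
    """ITAE (Eq. 6) of the closed-loop unit-step response for gains [Kp, Ki, Kd].
    An integer-order PID stands in for the FOPID (lambda = mu = 1)."""
    kp, ki, kd = gains
    pid = control.TransferFunction([kd, kp, ki], [1, 0])  # Kd*s + Kp + Ki/s
    loop = control.feedback(pid * PLANT, 1)
    t = np.linspace(0.0, t_end, 2000)
    t, y = control.step_response(loop, t)
    return float(np.trapz(t * np.abs(1.0 - y), t))

def firefly_tune(cost, bounds, n=10, iters=30,
                 beta0=1.0, gamma=0.01, alpha=1.0, theta=0.97, seed=0):
    """Firefly Algorithm: Eqs. (3)-(5) with alpha_t = alpha0 * theta^t."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + rng.random((n, lo.size)) * (hi - lo)         # initial firefly positions
    f = np.array([cost(xi) for xi in x])                  # brightness (lower ITAE = brighter)
    for _ in range(iters):
        alpha *= theta                                    # shrink the random step over time
        for a in range(n):
            for b in range(n):
                if f[b] < f[a]:                           # move a towards the brighter b
                    r2 = np.sum((x[a] - x[b]) ** 2)       # squared distance, Eq. (4)
                    beta = beta0 * np.exp(-gamma * r2)    # attractiveness, Eq. (3)
                    x[a] += beta * (x[b] - x[a]) + alpha * (rng.random(lo.size) - 0.5)
                    x[a] = np.clip(x[a], lo, hi)          # keep gains inside the bounds
                    f[a] = cost(x[a])                     # re-evaluate after the move, Eq. (5)
    best = int(np.argmin(f))
    return x[best], f[best]

gains, best_itae = firefly_tune(itae_cost, bounds=[(0, 10), (0, 200), (0, 0.5)])
print("Kp, Ki, Kd =", gains, "ITAE =", best_itae)
```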

The FA meta-heuristic algorithm is performed in a Matlab script, with which the values corresponding to the gains Kp, Ki, and Kd for the FOPID controller are determined, to be subsequently linked with the transfer function. Based on [12], the optimal parameters μ and λ of the fractional controller are stipulated by using tables, and their values are 0.5 and 0.5, respectively. The FA is tuned with several parameters; Table 1 details the most significant ones for the development of the algorithm.

Table 1. Values used in the FA parameters.

Parameter | Value
Number of iterations | 30
Firefly population | 100
Alpha | 1.0
Beta | 1.0
Gamma | 0.01
Theta | 0.97

3 Results

This section presents the continuous and discrete time control schemes for the presentation of simulated and real results, respectively.

Fig. 2. Closed-loop continuous FOPID control scheme tuned by FA

3.1 Continuous-Time FOPID Controller Tuned by FA

Figure 2 shows the blocks used from the Fomcon and Simulink libraries for the modeling of the continuous control loop of the system. Figure 3 shows the ITAE vs. the number of iterations for the determination of the parameters Kp, Ki and Kd, which were obtained by tuning with the FA and are used by the FOPID controller for the simulation of the system response.

Fig. 3. ITAE vs the number of iterations.


In Fig. 3, it can be seen that as the iterations increase, the evaluation of the cost function converges to its optimum. A maximum of 30 iterations is established because, experimentally, the best parameters for the controller gains are found within that range. With a larger number of iterations, the algorithm tends to slow down and begins to return repeated values and, in some cases, values that lack coherence.

Fig. 4. Continuous time system response.

It can be seen in Fig. 4 that the system response shows some overshoot; however, since this is a speed control, a fast settling time in the face of set-point variations is prioritized. At second 25, a disturbance is introduced to determine whether the controller behaves efficiently, resulting in an efficient response to this type of change.

3.2 Discrete-Time FOPID Controller Tuned by FA

For the discrete-time modeling, the Fomcon library is used, in addition to the Waijung toolbox that allows interaction with the Matlab/Simulink software. For its correct performance, communication and data conversion blocks were used, in addition to gains that allow the scaling of the input and output signals of the proposed controller. The resulting scheme is shown in Fig. 5.

As in the continuous-time case, the controller parameters Kp, Ki, and Kd in discrete time are tuned by the FA, which depends on the assigned set point or on the disturbances that occur. In these cases, the controller shows a satisfactory response, as shown in Figs. 6, 7 and 8 for the different cases presented. In all the cases shown, it can be observed that the FOPID controller optimized by FA on a DC motor, both in simulation and in the implementation, responds and adapts quickly to the different changes that may occur.


Fig. 5. Discrete-time schematic of FA-tuned FOPID controller

Fig. 6. Discrete system response to disturbances.


Fig. 7. Response of the discrete system to an established set point.

Fig. 8. Response of the discrete system to set point variations.


3.3 Statistical Analysis with Performance Indexes (ITAE) Between the Fomcon-Optimized FOPID Controller and the FA-Tuned FOPID Using the Wilcoxon Test

For the validation of the FOPID controller, a statistical analysis is carried out using the ITAE and IAE performance indexes through the Wilcoxon test, which compares the medians of the error when the two samples do not have a normal distribution and indicates which tuning provides better results for the proposed controller. Thirty experimental samples, with a population of 1000 fireflies, were collected for the described controllers in order to compare them and verify which tuning presents better results using the Wilcoxon test in the IBM SPSS software. It is worth mentioning that the tuning with the Fomcon library uses the Grünwald-Letnikov method and the Trust-Region-Reflective algorithm.

Null hypothesis (H_o): "The ITAE of the FA-optimized fractional PID controller is approximately equal to the ITAE of the Fomcon-tuned fractional PID controller."

Alternative hypothesis (H_a): "The ITAE of the FA-optimized fractional PID controller is lower than the ITAE of the Fomcon-tuned fractional PID controller."

Fig. 9. Normality analysis for the 30 comparative samples.

The data shown in Fig. 9 and Table 2 indicate that the samples do not present a normal distribution, since their p-values are less than 0.05; this allows the Wilcoxon test to be performed, which leads to discarding the null hypothesis and accepting the alternative hypothesis.

Table 2. Comparison of error between the FOPID FA and FOPID Fomcon controller performance indices.

 | N | Average range
Negative ranks | 743 | 595.1
Positive ranks | 257 | 227.02
Ties | 0 |
Total | 1000 |
Za | 1.9 |
Z | −21.007 |
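The same kind of comparison can be reproduced with open-source tools; the sketch below (an illustration, not the SPSS workflow of the paper) checks normality with the Shapiro-Wilk test and then applies the one-sided Wilcoxon signed-rank test to two paired samples of ITAE values, which here are synthetic placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder paired ITAE samples (30 runs each); in the paper these come from
# the FA-tuned FOPID and the Fomcon-tuned FOPID experiments.
itae_fa = rng.lognormal(mean=-1.0, sigma=0.3, size=30)
itae_fomcon = itae_fa + rng.lognormal(mean=-1.5, sigma=0.4, size=30)

# Normality check (p < 0.05 suggests a non-normal distribution, so Wilcoxon applies).
print("Shapiro FA:     p =", stats.shapiro(itae_fa).pvalue)
print("Shapiro Fomcon: p =", stats.shapiro(itae_fomcon).pvalue)

# One-sided Wilcoxon signed-rank test: H_a is that the FA ITAE is lower.
stat, p = stats.wilcoxon(itae_fa, itae_fomcon, alternative="less")
print(f"Wilcoxon statistic = {stat:.1f}, p-value = {p:.4f}")
```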

4 Discussion

In the research of Ñaupari and Beltrán [13], a PID controller for an inverted pendulum is tuned with two meta-heuristic algorithms, FA and simulated annealing; the parameters optimized by both methods do not show noticeable variations and a fast convergence speed is obtained, so both methods are presented as a tuning option for this type of controller. The present study gives a different approach to the use of the FA by applying it to a speed control with a FOPID, whose two additional parameters already provide better results than a classical PID controller, and by seeking better performance with this tuning.

In the work of B. Hekimoğlu [14], a FOPID controller for the speed control of a DC motor is developed using Chaotic Atom Search Optimization (ChASO), an algorithm based on chaotic logistic-map sequences, and its results are compared with controllers such as GWO-FOPID, GWO-PID, IWO-PID and SFS-PID using the ITAE objective function. Although the results are interesting, the experimental data used are simulated and the conclusions lack solid statistical support, which is instead contributed by the present research, which uses data closer to reality and state-of-the-art ARM technology.

Yeroglu and Tan propose techniques for tuning FOPID controllers. First, they use Ziegler-Nichols and the Åström-Hägglund method, deriving and solving two nonlinear equations to obtain the parameters, which results in a better step response. They also use Bode envelopes, in which case five equations are derived in the same way to obtain the necessary coefficients; both techniques show simulated results. The present work details the optimization of this type of controller with a meta-heuristic algorithm, taking advantage of software resources while implementing the controllers on an ARM card with excellent characteristics and low cost, thus obtaining a faster tuning and showing real results [15].

Khubalkar, Junghare, Aware and Das propose a FOPID controller for the speed control of a permanent-magnet DC motor tuned by dynamic particle swarm optimization and compare it with a conventional PID controller; in their case the FOPID controller reduces the overshoot and requires a smaller current to drive the motor, providing the energy efficiency characteristic of FOPID controllers, again with simulated results. Like that work, this study also presents statistical analyses of the comparison of the two controllers, always obtaining the best results with the FOPID controllers in both studies; in addition, in this study the statistical analysis was carried out with the results of the implementation of the algorithm [16].

5 Conclusions

It is verified that the use of meta-heuristic algorithms such as the Artificial Firefly has great applicability in the tuning of complex controllers, with greater efficiency in the face of set-point variations or disturbances, achieving a very small stabilization time, as shown in Figs. 6, 7 and 8.

Since the STM32F407 is a low-cost, high-performance card, it makes possible, together with the CPT, the implementation of several meta-heuristic algorithms for various controls such as speed, temperature or position, and thus allows presenting physical results that support the simulations, since much previous research only reaches the simulation stage.

The simulation and implementation results each carry an inherent error: the simulation depends on the state and characteristics of the computer's processor (Figs. 3 and 4 varied depending on how long the computer had been in use, and the best results were obtained in the first simulation), while the implementation depends on the physical state of the plant on which the test is performed.

In real control systems, it is not possible to perform an infinite number of tests, since this leads to wear of the actuators and would reduce their lifetime. For this reason, the Wilcoxon test is used, in this case with 30 tests, to verify with 95% reliability the advantage of the FOPID FA controller over the FOPID Fomcon.

There are different cost functions for these algorithms, most of them based on integral errors of the system response, but for the development of the FA it is recommended to use an ITAE function.

References

1. Sahputro, S.D., Fadilah, F., Wicaksono, N.A., Yusivar, F.: Design and implementation of adaptive PID controller for speed control of DC motor. In: QiR 2017 - 15th International Conference on Quality in Research: International Symposium on Electrical and Computer Engineering, pp. 179–183 (2017). https://doi.org/10.1109/QIR.2017.8168478
2. Achanta, R.K., Pamula, V.K.: DC motor speed control using PID controller tuned by jaya optimization algorithm. In: IEEE International Conference on Power, Control, Signals and Instrumentation Engineering, ICPCSI 2017, pp. 983–987 (2018). https://doi.org/10.1109/ICPCSI.2017.8391856
3. Singh, R., Kumar, A., Sharma, R.: Fractional order PID control using ant colony optimization. In: 1st IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems, ICPEICES 2016 (2017). https://doi.org/10.1109/ICPEICES.2016.7853387
4. Oladipo, S., Sun, Y., Wang, Z.: Optimization of PID and FOPID controllers with new generation metaheuristic algorithms for controlling AVR system: concise survey. In: Proceedings - 2020 12th International Conference on Computational Intelligence and Communication Networks, CICN 2020, pp. 280–286 (2020). https://doi.org/10.1109/CICN49253.2020.9242585
5. Špoljarić, T., et al.: Optimization of PID controller in AVR system by using ant lion optimizer algorithm, pp. 1522–1526 (2018)
6. Yang, X.-S.: Nature-Inspired Metaheuristic Algorithms. Luniver Press (2010)
7. Márquez-Vera, M.A., Barrera-Lujan, S.C., Paredes-Huerta, G.A.: Aplicaciones del Algoritmo de Luciérnagas y su comparación con la Colonia Artificial de Abejas. Universo de la Tecnológica 8(24), 6–9
8. Yang, X.S.: Firefly algorithm, Lévy flights and global optimization. In: Research and Development in Intelligent Systems XXVI: Incorporating Applications and Innovations in Intelligent Systems XVII, pp. 209–218 (2010). https://doi.org/10.1007/978-1-84882-983-1_15
9. Datalights: Entrenador de Planta de Control 'EPC'. www.datalights.com.ec
10. Pal, P., Dey, R.: Optimal PID controller design for speed control of a separately excited DC motor: a firefly based optimization approach. Int. J. Soft Comput. Math. Control 4(4), 39–48 (2015). https://doi.org/10.14810/ijscmc.2015.4404
11. Arrieta, O., Víctor, O., Ruiz, M.A.: Sintonización de controladores PI y PID utilizando los criterios integrales IAE e ITAE
12. David, I., Jiménez, B.: Escuela Politécnica Nacional, Facultad de Ingeniería Eléctrica y Electrónica
13. Beltrán, L.G., Ñaupari Huátuco, Z.: Sintonía de Controlador PID para un Péndulo Invertido Mediante Algoritmos Meta-Heurísticos: Luciérnaga y Recocido Simulado. TECNIA 30(2), 82–91 (2020). https://doi.org/10.21754/tecnia.v30i2.623
14. Hekimoglu, B.: Optimal tuning of fractional order PID controller for DC motor speed control via chaotic atom search optimization algorithm. IEEE Access 7, 38100–38114 (2019). https://doi.org/10.1109/ACCESS.2019.2905961
15. Yeroglu, C., Tan, N.: Note on fractional-order proportional-integral-differential controller design. IET Control Theory Appl. 5(17), 1978–1989 (2011). https://doi.org/10.1049/iet-cta.2010.0746
16. Khubalkar, S., Junghare, A., Aware, M., Das, S.: Modeling and control of a permanent-magnet brushless DC motor drive using a fractional order proportional-integral-derivative controller. Turk. J. Electr. Eng. Comput. Sci. 25(5), 4223–4241 (2017). https://doi.org/10.3906/elk-1612-277

A Convolutional Neural Network-Based Web Prototype to Support COVID-19 Detection Using Chest X-rays

Mauro Rosas-Lara, Julio C. Mendoza-Tello(B), Diana C. López-Olives, and Andrea P. Robles-Loján

Faculty of Engineering and Applied Sciences, Central University of Ecuador, Quito, Ecuador
{mrosas,jcmendoza,dclopezo,aprobles}@uce.edu.ec

Abstract. COVID-19 continues to cause health problems for humanity. Some fields of science have conducted research to mitigate and reduce the harmful effects of this virus. In the healthcare field, radiographs are very important because they provide data that allow detection and assessment of pathologies in a reliable way. In this context, machine learning and data mining provide the mechanisms and algorithms that can support health care activities. Machine learning allows a neural network to learn, identify, and interpret the results of a set of radiographs. With these considerations, this research develops a web prototype based on convolutional neural networks to support the detection of COVID-19 using chest X-rays. For this, two sequential phases were defined, namely: data mining and software development. In this context, the Cross Industry Standard Process for Data Mining (CRISP-DM) was used to select the deep convolutional neural network that best fits our case study. With this previous analysis, a web prototype was developed using two frameworks: Flask (for the backend) and Angular (for the frontend). Conclusions and future work are described at the end of the document.

Keywords: Machine learning · Neural network · COVID-19 · Data mining

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 28–42, 2023. https://doi.org/10.1007/978-3-031-25942-5_3

1 Introduction

COVID-19 is a disease that is causing a worldwide pandemic with very serious consequences for human health. This disease is caused by the SARS-CoV-2 virus, whose usual symptoms are fever, dyspnea, dry cough, myalgia, and fatigue. In severe cases it can cause pneumonia, septic shock, and difficulty breathing, which puts the patient's life at risk [1]. Currently, there is no definitive cure for this disease. To maintain the patient's vital functions, the scientific community has made collaborative efforts to define treatments that alleviate the ailments and symptoms of the disease. In the healthcare field, artificial intelligence and data mining techniques have been playing a very important role in disease prediction and prevention. In this context, this research develops a web application based on convolutional neural networks to support COVID-19 detection using chest X-rays. In this way, this web prototype aims to provide accurate, safe, and real-time predictions that contribute to an agile identification of the disease.

With these considerations, the paper structure is as follows. Section 2 explains the methodology through an overview of the research phases. Section 3 describes the techniques, theoretical concepts, and materials used in the research. In Sect. 4, the data mining tasks are applied to our case study. Section 5 explains the web prototype development process. In Sect. 6, a comparison with related works is presented. In Sect. 7, future work and conclusions are described.

2 Methodology

This research is addressed through two consecutive phases, namely: data mining and software development. According to the Cross Industry Standard Process for Data Mining (CRISP-DM), five sequential phases were carried out, namely: business understanding, data understanding, data preparation, modeling, and evaluation and deployment [2]. This process is carried out to determine which convolutional network model best fits our case study. With this consideration, a web application is developed using a prototype-based model. Figure 1 describes the research phases overview.

Fig. 1. Research phases overview: business understanding (project objectives and requirements from the business perspective), data understanding (data collection, description, and exploration), data preparation (cleaning, data construction, formatting, and integration), modeling (selection of the model that best suits the requirements), evaluation and deployment (metrics based on the confusion matrix), and software development (requirements analysis, development, verification, and maintenance).

3 Materials and Background

In this section, the theoretical concepts and materials used in the research are briefly described, as follows:


– Format extraction technique. Initially, DICOM is used as an image format and network protocol for information exchange between medical information systems [3]. The DICOM format consists of a header and an image set. The header contains the exchange syntax, data comprehension, and patient information data. However, this research only extracts the image (JPG format) for training and testing the proposed model. In this way, the anonymity of patient information is guaranteed.
– Convolutional Neural Network (CNN). It is a type of artificial neural network, a variation of the multi-layer perceptron applied to two-dimensional arrays. Two convolutional neural networks (algorithms) are used, namely: DenseNet201 [4] and ResNet152V2 [5]. These algorithms have demonstrated effectiveness in the segmentation and classification of images. In addition, bidirectional long short-term memory (Bi-LSTM) was used to process the data both forward and backward. Three general layers make up this kind of algorithm, namely: (i) convolution, (ii) reduction, and (iii) classifier. (i) The convolution layer performs product and sum operations on the image feature map; for this, filters are used to extract the same feature from the image via an activation function and a learning rate. (ii) The reduction layer decreases the number of parameters and retains only the most common features. (iii) The classification layer obtains a probability (between 0 and 1) for each of the classification labels that the model attempts to predict [6].
– Model evaluation metrics based on the confusion matrix. The estimates obtained from a confusion matrix are based on four values, namely: TP (true positive), FP (false positive), TN (true negative), and FN (false negative). With these considerations, three metrics are used, namely: accuracy (percentage of correct predictions), sensitivity (percentage of TP that were correctly identified by the algorithm), and specificity (percentage of TN that were correctly identified by the algorithm) [7].
– Hardware. Two servers were used. (a) A PACS (Picture Archiving and Communication System) server responsible for the transmission and storage of radiographs; this research uses the open-source DCM4CHEE server [8], which fully implements the DICOM protocol, and its main features are 24 GB RAM, 11 TB HDD, and Gigabit Ethernet. (b) A server for software development with 32 GB RAM and 32 TB HDD.
– Software. The Python Flask framework to encode business rules, Google Colab for running code, and the Angular framework to develop the client application.
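A minimal sketch of the JPG-extraction step described above is shown below; pydicom and Pillow are assumptions (the paper mentions only PACS viewing software), and the file names are placeholders.

```python
import numpy as np
import pydicom
from PIL import Image

def dicom_to_jpg(dicom_path, jpg_path):
    """Extract only the pixel data from a DICOM file and save it as JPG,
    leaving the patient information in the header untouched."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)

    # Scale the raw pixel range to 0-255 for an 8-bit grayscale JPG.
    pixels -= pixels.min()
    if pixels.max() > 0:
        pixels = pixels / pixels.max() * 255.0
    Image.fromarray(pixels.astype(np.uint8)).convert("L").save(jpg_path)

# dicom_to_jpg("chest_xray.dcm", "chest_xray.jpg")   # placeholder file names
```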

4 Data Mining: Experimental Phase and Results

In this section, the CRISP-DM phases are applied to our study.

4.1 Business Understanding Phase

This phase highlights the importance of aligning the data mining approach with the business objectives. In this line, the business objective is to support the detection of COVID-19 through chest X-rays. In this way, it is possible to identify whether a patient is infected with this pathology. To guarantee the success of this project, it is necessary that health personnel can easily identify the pathology using the information provided by the application. In this way, doctors will be able to quickly detect COVID-19, monitor their patients, and save many human lives. Regarding data mining, the focus is to identify the COVID-19 pathology using digital image processing and deep learning algorithms.

4.2 Data Understanding Phase

Preliminarily, 2500 DICOM images corresponding to chest radiographs were obtained from various sources, namely: Medical Imaging Databank of the Valencia Region [9], COVID-19 Image Data Collection [10], and Stanford ML Group [11]. These images were classified into two pathology datasets: normal and COVID-19. A normal pathology is considered for patients with influenza, pneumonia, asthma, or other diseases that can be confused with COVID-19; smokers are also included in this dataset. Each dataset contains the same number of images, that is, 1250. This balancing of the images improves the performance of the algorithms used.

4.3 Data Preparation Phase

In this phase, both the quantity and the quality of the images are necessary to obtain a good deep learning model. For this, it is necessary to select images without noise and to increase the number of images in a suitable format. In this context, four activities are carried out, as follows. First, data cleansing: it was verified that the quality of the images is optimal, without noise or biases that could confuse the patterns of the COVID-19 pathology with any other pathology. In addition, in this phase the images (JPG format) were extracted from the DICOM files using PACS viewing software. Second, data construction: in each dataset, 80% of the images (i.e., 1,000) formed the initial sample for training tasks, and 20% (i.e., 250) were used for testing tasks. Next, a data augmentation technique was used to increase the initial training sample (1,000 images) five times, that is, to a total of 5,000 images for each dataset. Figure 2 shows the augmentation technique. With these considerations, the total number of images for training is 10,000. Table 1 shows the samples for each dataset.

Fig. 2. Data augmentation technique

Third, data normalization and format: all images were normalized and converted from DICOM to JPG format. Through normalization, the [0-255] pixel range is converted to the [0-1] range.


Table 1. Samples by each dataset

Sample             Normal dataset   COVID-19 dataset   Source
Initial            1250             1250               [1–4]
Initial training   1000             1000               80% of initial sample
Test               250              250                20% of initial sample
Final training     5000             5000               Augmentation

In our research, the value 0 was assigned to the normal dataset, and the value 1 was assigned to the COVID-19 dataset. Figure 3 describes the data normalization process. Fourth, data integration: an image repository based on Google Drive was used. In this context, three folders were created, namely (i) training_data, which contains the medical images for training, (ii) test_data, which contains the medical images for testing, and (iii) output, which contains the deep learning model to be built.
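A minimal sketch of the normalization and labelling convention just described is shown below; NumPy is an assumption and the arrays are placeholders rather than the study's data.

import numpy as np

# Placeholder arrays standing in for the decoded JPG images (uint8, 224 x 224 x 3)
normal_imgs = np.zeros((4, 224, 224, 3), dtype=np.uint8)
covid_imgs = np.zeros((4, 224, 224, 3), dtype=np.uint8)

# Normalization: map the [0, 255] pixel range to the [0, 1] range
x = np.concatenate([normal_imgs, covid_imgs]).astype("float32") / 255.0

# Label convention used in the study: 0 = normal, 1 = COVID-19
y = np.concatenate([np.zeros(len(normal_imgs)), np.ones(len(covid_imgs))])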

Fig. 3. Data normalization process

4.4 Modeling Phase

Two activities are essential in this phase: image preprocessing and selection of the modeling technique.

Image Preprocessing. Preprocessing was performed for both the training dataset and the test dataset. For the training dataset, two hyperparameters stand out: epoch and batch size. An epoch is one complete pass of each image of the training dataset through the network, while the batch size is the set of images that the computer processes in each step or iteration. An iteration updates the weights of the model during training; therefore it is not enough to pass the data set through the neural network only once, but several times, since the goal of training a neural network is to adapt the weights and thresholds of


the neurons to minimize the error. For this, an epoch is defined, which represents one complete pass of the training data set through the neural network to update the weights. The diversity of the data can affect the appropriate number of epochs; however, both accuracy and loss converge when the epoch value is between 10 and 20, and beyond that they do not undergo any further variation. Due to the use of a GPU, the range of values for the batch size is between 16 and 128. In our research, the following parameterizations were performed: all images have the same dimensions (224 × 224); 10 epochs were used, with a batch size equal to 40; the images were rotated between 0° and 15° clockwise, both for the vertical and the horizontal axis; in addition, a 15% zoom was applied to each generated image. Finally, the same process was performed for the test dataset, except for the augmentation technique.

Selection of the Modeling Technique. A neural network is made up of the network architecture, the activation function, and the training algorithm. In addition, these networks have several layers, such as input, convolution, and activation layers. Models based on neural networks achieve high performance in data processing. The algorithms were implemented using Python libraries such as Keras and TensorFlow. In this context, four robust architectures were used, and the following training times were obtained (a training sketch is given after this list):

– DenseNet201: 18 min and 6 s.
– DenseNet201 + Bi-LSTM: 7 h, 35 min, and 53 s.
– ResNet-152V2: 17 min and 31 s.
– ResNet-152V2 + Bi-LSTM: 4 h, 56 min, and 25 s.
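The sketch below shows how such a training run can be set up with Keras/TensorFlow. Only the image size, rotation, zoom, batch size, and number of epochs are taken from the text above; the directory names, the classification head, the optimizer, and the reading of the axis statement as horizontal/vertical flips are assumptions.

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Training-set augmentation: [0, 1] rescaling, 0-15 degree rotations, 15% zoom,
# and flips on both axes (the flips are an interpretation of the text)
train_gen = ImageDataGenerator(rescale=1.0 / 255, rotation_range=15, zoom_range=0.15,
                               horizontal_flip=True, vertical_flip=True)
test_gen = ImageDataGenerator(rescale=1.0 / 255)  # the test set is not augmented

train_flow = train_gen.flow_from_directory("training_data", target_size=(224, 224),
                                           batch_size=40, class_mode="binary")
test_flow = test_gen.flow_from_directory("test_data", target_size=(224, 224),
                                         batch_size=40, class_mode="binary")

# DenseNet201 backbone with a simple binary classification head
base = tf.keras.applications.DenseNet201(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
model = tf.keras.Sequential([base,
                             tf.keras.layers.GlobalAveragePooling2D(),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_flow, validation_data=test_flow, epochs=10)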

4.5 Test and Evaluation Phase

Learning techniques were used to implement the DenseNet201 and ResNet152V2 models; then a Bi-LSTM memory was added to each model. A total of 10 epochs were executed and the predictions were obtained with the following values: 1 for patients with the COVID-19 pathology, and 0 for patients with a normal pattern. For the testing phase, predictions were obtained with images from the test dataset. According to the confusion matrix, the following interpretations apply:
– TP (true positive): a person with COVID-19 whom the model classified with the COVID-19 pathology.
– TN (true negative): a person who does not have COVID-19 and whom the model classified with the normal pathology.
– FP (false positive, type I error): a person who does not have COVID-19 but whom the model classified with the COVID-19 pathology.
– FN (false negative, type II error): a person with COVID-19 whom the model classified with the normal pathology.
Regarding the DenseNet201 model (Table 2), the following results were obtained:


– 94.80% accuracy for forecasts.
– Approximately 95% sensitivity for the COVID-19 pathology; that is, 95 of 100 forecasts for the COVID-19 pathology will be correct and 5 will be wrong.
– 96% specificity for the normal pathology; that is, 96 of 100 forecasts for the normal pathology will be correct, and 4 will be wrong.
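For reference, the minimal sketch below shows how these three metrics follow from the confusion-matrix counts; it uses only the standard definitions already introduced above and is not code taken from the paper.

def confusion_matrix_metrics(tp, tn, fp, fn):
    # Standard definitions: accuracy, sensitivity (recall of the positive class),
    # and specificity (recall of the negative class)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity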

Table 2. Confusion matrix and metrics for DenseNet201 model

Confusion matrix: TP = 240, TN = 234, FP = 16, FN = 10
Metrics: Accuracy = 94.80%, Loss = 0.4617, Sensitivity = 93.60%, Specificity = 96%

Regarding the ResNet152V2 model (Table 3), the following results were obtained:
– 92% accuracy for forecasts.
– Approximately 94% sensitivity for the COVID-19 pathology; that is, 94 of 100 forecasts for the COVID-19 pathology will be correct, and 6 will be wrong.
– 90.4% specificity for the normal pathology; that is, 90 of 100 forecasts for the normal pathology will be correct, and approximately 10 will be wrong.
Regarding the DenseNet201 + BILSTM model (Table 4), the following results were obtained:
– 86.60% accuracy for forecasts.
– Approximately 100% sensitivity for the COVID-19 pathology; that is, all the forecasts of this pathology will be correct.
– 73.2% specificity for the normal pathology; that is, 73 of 100 forecasts for the normal pathology will be correct, and approximately 27 will be wrong.
Regarding the ResNet152V2 + BILSTM model (Table 5), the following results were obtained:
– 91.20% accuracy for forecasts.


Table 3. Confusion matrix and metrics for ResNet152V2 model

Confusion matrix: TP = 226, TN = 234, FP = 16, FN = 24
Metrics: Accuracy = 92%, Loss = 0.1563, Sensitivity = 93.60%, Specificity = 90.40%

Table 4. Confusion matrix and metrics for DenseNet201 + BILSTM model

Confusion matrix: TP = 183, TN = 250, FP = 0, FN = 67
Metrics: Accuracy = 86.60%, Loss = 0.02836, Sensitivity = 100%, Specificity = 73.20%

– Approximately 97% sensitivity for the COVID-19 pathology; that is, 97 of 100 forecasts for the COVID-19 pathology will be correct, and 3 will be wrong.
– Approximately 86% specificity for the normal pathology; that is, 86 of 100 forecasts for the normal pathology will be correct, and 14 will be wrong.
Table 6 compares the evaluation of the proposed models. With these considerations, DenseNet201 is the neural network model that best fits our case study.

5 Prototype Development and Results The software development is based on the prototyping model using the DenseNet201 model. In this line, four phases were executed, as follows:


Table 5. Confusion matrix and metrics for ResNet152V2 + BILSTM model

Confusion matrix: TP = 214, TN = 242, FP = 8, FN = 36
Metrics: Accuracy = 91.20%, Loss = 0.004296, Sensitivity = 96.80%, Specificity = 85.60%

Table 6. Evaluation for the proposed models.

Model                    Accuracy (%)   Sensitivity (%)   Specificity (%)   Runtime
DenseNet 201             94.80          93.60             96                18 min 6 s
ResNet 152V2             92.00          93.60             90.4              17 min 31 s
DenseNet201 + BILSTM     86.60          100               73.20             7 h 35 min 53 s
ResNet152V2 + BILSTM     91.20          96.80             85.60             4 h 56 min 25 s

– Requirements. Two types of requirements were defined: functional and non-functional. Three functionalities were defined, namely (i) loading of chest X-rays in JPG format, (ii) viewing of X-rays of patients with normal pathology and with COVID-19, and (iii) detection of the COVID-19 pathology from the images. In addition, non-functional requirements were identified, such as ease of use, agility in forecast response time, and prototype development using a DenseNet201-based deep neural network. Because the users of this application are Ecuadorians, the language used on the website is Spanish.
– Design and Construction. A Google Cloud-based web service was built using the Flask Python framework. The client application was built using the Angular framework, and the service is accessed through this client application. The exchange format used is JSON (JavaScript Object Notation). A minimal sketch of such a prediction endpoint is shown after this list.
– Evaluation. Tests were conducted to verify the requirements, and good results were obtained in terms of accuracy and response agility. Figures 4, 5, and 6 show representative functionality of the prototype.
– Maintenance. We hope that new requirements arise so that they can be added to the project. In this way, medical personnel will have a versatile application that supports the detection of COVID-19.
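The following is a minimal sketch of the kind of Flask prediction service referred to in the Design and Construction item; the endpoint name, model path, and response fields are illustrative assumptions, not the prototype's actual code.

import io
import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("output/densenet201_covid.h5")  # hypothetical path

@app.route("/predict", methods=["POST"])
def predict():
    # Receive a chest X-ray in JPG format, preprocess it, and return the forecast as JSON
    img = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    x = np.asarray(img.resize((224, 224)), dtype="float32") / 255.0
    prob = float(model.predict(x[np.newaxis, ...])[0][0])
    label = "COVID-19" if prob >= 0.5 else "normal"
    return jsonify({"label": label, "probability": prob})

if __name__ == "__main__":
    app.run()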


Fig. 4. Main web page

Fig. 5. Diagnosis of a patient with COVID-19 pathology


Fig. 6. Diagnosis of a patient without COVID-19 pathology

6 Comparison with Related Works

Previous research has also developed CNN-based models to detect COVID-19, normal, and pneumonia cases from chest X-ray images. Masud [12] proposed a lightweight CNN model and obtained 92.7% accuracy with 2823 COVID-19 and non-COVID-19 images. Das et al. [13] combined three models (DenseNet201, ResNet50V2, and InceptionV3) using a weighted-average ensemble technique; 1006 images were assigned to training, test, and validation sets, and the approach gave 96.2% accuracy. Mukherjee et al. [14] designed a lightweight, shallow CNN-tailored architecture to detect COVID-19 with no false negatives; with 6000 images, the model achieved 99.69% sensitivity and 99.99% AUC. In addition, Kim et al. [15] also demonstrated that shallow networks obtain better accuracy with a lower error rate. Abbas et al. [16] used the DeTraC (Decompose, Transfer, and Compose) deep convolutional neural network; 196 images from normal and severe respiratory syndrome cases were used to obtain 93.1% accuracy and 100% sensitivity. Goel et al. [17] proposed an optimized convolutional neural network based on the Grey Wolf Optimizer algorithm; 1890 images of pneumonia, COVID-19, and normal pathologies were used, and the approach provided 97.78% accuracy, 97.75% sensitivity, and 96.25% specificity. In addition, Rajasenbagam et al. [18] proposed a deep CNN to detect pneumonia infection; for this, a VGG19 network and a Deep Convolutional Generative Adversarial Network (DCGAN) were used to obtain 99.34% accuracy.


Chakraborty et al. [x] designed a lightweight deep CNN using a minority-class oversampling algorithm to solve the unbalanced-dataset problem. The model achieved 95% accuracy and 94% precision; additionally, the findings showed that the VGG19 model gave 93% accuracy and 93% precision for COVID-19 detection. AbouEl-Magd et al. [19] presented a pre-trained CNN based on VGG16 and CapsNet (Capsule Neural Networks) to avoid changes in the data distribution. For this, SMOTE (Synthetic Minority Over-sampling Technique) and a Gaussian optimization method were used to optimize parameters such as the capsule dimensionality and the routing number; with 2905 sample images, the model gave 96.58% accuracy and a 97.08% F1 score. Chowdhury et al. [20] proposed a parallel-dilated convolutional neural network architecture for detecting COVID-19. Through visualization methods, gradients are computed for an image category related to the characteristics of the last convolutional layer; in this way, a class-discriminative region is created, providing the model with an accuracy of 96.58%. Kumar et al. [21] proposed a model that combines a convolutional neural network and a graph convolutional network, with 97.60% accuracy and 92.90% sensitivity. Sun et al. [22] combined three convolutional neural networks (LeNet-5, VGG-16, and ResNet-18) to detect COVID-19 and pneumonia from chest X-ray images; the findings show that the biogeography-based method improves the accuracy by 1.56% and optimizes the hyperparameters of the model. In addition to the investigations mentioned above, other image segmentation methods also play an important role in machine learning and artificial intelligence research. Currently, seven methods stand out in the field of medical image segmentation [23]. (1) Region growing: it extracts an image region based on the intensity and edges in the image; it requires manual interaction to select a seed point [24], which makes region growing sensitive to noise [23]. (2) Thresholding: it segments scalar images through binary partitioning of the image, without considering the spatial characteristics of the image [25]; this causes sensitivity to noise and to intensity inhomogeneity [23]. (3) Classifiers: they require that the structures to be segmented have quantifiable characteristics [26]. Classifier methods do not incorporate spatial modeling, and manual interaction is required to obtain training data; this produces biased results that do not consider anatomical and physiological differences between humans. (4) Artificial neural networks: they are made up of processing nodes or elements that simulate human learning, based on the adaptation of the weights assigned to the interconnections between nodes; the most common use is the classification of training data, whose spatial information can be easily integrated. (5) Clustering: it is an unsupervised method because it does not use training data. In this line, it segments the image and characterizes the properties of each class; it does not use spatial modeling directly, which leads to sensitivity to noise and intensity inhomogeneity, although this deficiency makes the computation fast. (6) Atlas-guided approaches: they collect information about the anatomy, which serves as a reference to segment other images; this classifier-like approach is implemented in the spatial domain of the image [27]. (7) Deformable methods: techniques for delineating the boundaries of an image by closed parametric curves or surfaces that deform.
It may exhibit poor convergence for concave boundaries [23]. Unlike previous studies, our research evaluated four models (DenseNet 201, ResNet 152V2, DenseNet201 + BILSTM, ResNet152V2 + BILSTM). After evaluation,


DenseNet 201 was chosen as the best model for web prototype development. Consequently, our model gave the following results: (a) accuracy of 94.80%, (b) sensitivity of 93.6%, and (c) specificity of 96%. This difference is due to the size and number of images used for the training and evaluation stage.

7 Conclusions

The research case study is the detection of COVID-19 using chest X-rays. This document highlights the usefulness of neural networks and data mining to support the detection of this pathology. The mining process executed five consecutive phases to select the deep learning algorithm that best fits our study. In addition, a format extraction technique was applied to the DICOM images; that is, only the JPG image was extracted and the patient's personal data were omitted. In this way, this research guaranteed the anonymity and privacy of the patients. Then, the models were evaluated, and it was concluded that DenseNet201 (with 94.80% accuracy, 93.60% sensitivity, 96% specificity, and 18 min and 6 s of runtime) is the best option for our case study. With this selection, a DenseNet201-based web application was developed. The main contribution of this research is to provide a tool to support the detection of COVID-19; in this way, the medical staff will have accurate forecasts that contribute to decision-making and the quick detection of the disease. As future work, this research can be extended to detect other pathologies. For this, it is necessary that health institutions provide access to more image datasets. In this way, collaborative efforts between research groups and medical organizations can increase common benefits for human well-being and health.

References
1. Li, M., Yuan, F.: Historical redlining and resident exposure to COVID-19: a study of New York city. Race Soc. Probl. 14(2), 85–100 (2021). https://doi.org/10.1007/s12552-021-09338-z
2. Schröer, C., Kruse, F., Marx, J.: A systematic literature review on applying CRISP-DM process model. Procedia Comput. Sci. 181, 526–534 (2021). https://doi.org/10.1016/j.procs.2021.01.199
3. Trivedi, D.N., Shah, N.D., Kothari, A.M., Thanki, R.: DICOM® medical image standard. In: Trivedi, D.N., Shah, N.D., Kothari, A.M., Thanki, R. (eds.) Dental Image Processing for Human Identification, pp. 41–49. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-99471-0_4
4. Tensorflow: Models & datasets. https://www.tensorflow.org/api_docs/python/tf/keras/applications/densenet/DenseNet201
5. Tensorflow: Models & datasets. https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet_v2/ResNet152V2
6. Erickson, B.J.: Deep learning and machine learning in imaging: basic principles. In: Ranschaert, E.R., Morozov, S., Algra, P.R. (eds.) Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks, pp. 39–46. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-94878-2_4
7. Caelen, O.: A Bayesian interpretation of the confusion matrix. Ann. Math. Artif. Intell. 81(3–4), 429–450 (2017). https://doi.org/10.1007/s10472-017-9564-8


8. Dcm4che.org: Open Source Clinical Image and Object Management. http://www.dcm4che.org/
9. The Medical Image Bank of the Valencian Community: New BIMCV-COVID-19 1st + 2nd iteration. https://github.com/BIMCV-CSUSP/BIMCV-COVID-19
10. Cohen, J.P.: GitHub - ieee8023/covid-chestxray-dataset. https://github.com/ieee8023/covid-chestxray-dataset
11. Irvin, J., et al.: CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison. https://stanfordmlgroup.github.io/competitions/chexpert/
12. Masud, M.: A hierarchical convolutional neural network architecture. Multimed. Syst. 28, 1165–1174 (2022). https://doi.org/10.1007/s00530-021-00857-8
13. Das, A.K., Ghosh, S., Thunder, S., Dutta, R., Agarwal, S., Chakrabarti, A.: Automatic COVID-19 detection from X-ray images using ensemble learning with convolutional neural network (2021)
14. Mukherjee, H., Ghosh, S., Dhar, A., Obaidullah, S.M., Santosh, K.C., Roy, K.: Shallow convolutional neural network for COVID-19 outbreak screening using chest X-rays. Cognit. Comput. (2021). https://doi.org/10.1007/s12559-020-09775-9
15. Kim, D.E., Gofman, M.: Comparison of shallow and deep neural networks for network intrusion detection. In: 2018 IEEE 8th Annual Computing and Communication Workshop and Conference, CCWC 2018, pp. 204–208. IEEE (2018)
16. Abbas, A., Abdelsamea, M.M., Gaber, M.M.: Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network. Appl. Intell. 51(2), 854–864 (2021). https://doi.org/10.1007/s10489-020-01829-7
17. Goel, T., Murugan, R., Mirjalili, S., Chakrabartty, D.K.: OptCoNet: an optimized convolutional neural network for an automatic diagnosis of COVID-19. Appl. Intell. 51(3), 1351–1366 (2020). https://doi.org/10.1007/s10489-020-01904-z
18. Rajasenbagam, T., Jeyanthi, S., Pandian, J.A.: Detection of pneumonia infection in lungs from chest X-ray images using deep convolutional neural network and content-based image retrieval techniques. J. Ambient Intell. Humaniz. Comput. (2021). https://doi.org/10.1007/s12652-021-03075-2
19. AbouEl-Magd, L.M., Darwish, A., Snasel, V., Hassanien, A.E.: A pre-trained convolutional neural network with optimized capsule networks for chest X-rays COVID-19 diagnosis. Cluster Comput. 2 (2022). https://doi.org/10.1007/s10586-022-03703-2
20. Chowdhury, N.K., Rahman, M.M., Kabir, M.A.: PDCOVIDNet: a parallel-dilated convolutional neural network architecture for detecting COVID-19 from chest X-ray images. Heal. Inf. Sci. Syst. 8 (2020). https://doi.org/10.1007/s13755-020-00119-3
21. Kumar, A., Tripathi, A.R., Satapathy, S.C., Zhang, Y.D.: SARS-Net: COVID-19 detection from chest X-rays by combining graph convolutional network and convolutional neural network. Pattern Recognit. 122, 108255 (2022). https://doi.org/10.1016/j.patcog.2021.108255
22. Sun, J., Li, X., Tang, C., Wang, S.H., Zhang, Y.D.: MFBCNNC: Momentum factor biogeography convolutional neural network for COVID-19 detection via chest X-ray images. Knowl.-Based Syst. 232, 107494 (2021). https://doi.org/10.1016/j.knosys.2021.107494
23. Pham, D.L., Xu, C., Prince, J.L.: Current methods in medical image segmentation. Annu. Rev. Biomed. Eng. 2, 315–337 (2000). https://doi.org/10.1146/annurev.bioeng.2.1.315
24. Jiang, Y., Qian, J., Lu, S., Tao, Y., Lin, J., Lin, H.: LRVRG: a local region-based variational region growing algorithm for fast mandible segmentation from CBCT images. Oral Radiol. 37(4), 631–640 (2021). https://doi.org/10.1007/s11282-020-00503-5
25. Khairuzzaman, A.K.M., Chaudhury, S.: Masi entropy based multilevel thresholding for image segmentation. Multimedia Tools Appl. 78(23), 33573–33591 (2019). https://doi.org/10.1007/s11042-019-08117-8


26. Frigau, L., Conversano, C., Mola, F.: Consistent validation of gray-level thresholding image segmentation algorithms based on machine learning classifiers. Stat. Pap. 62(3), 1363–1386 (2019). https://doi.org/10.1007/s00362-019-01138-3 27. Gordon, S., Kodner, B., Goldfryd, T., Sidorov, M., Goldberger, J., Raviv, T.R.: An atlas of classifiers—a machine learning paradigm for brain MRI segmentation. Med. Biol. Eng. Comput. 59(9), 1833–1849 (2021). https://doi.org/10.1007/s11517-021-02414-x

Chatbots and Its Impact on the Information Support Service for Students of the Faculty of Computer Science of the Technical University of Manabí Marco Giler(B)

, Emilio Cedeño , Walter Zambrano , Michellc Zambrano , and David Zambrano

Facultad de Ciencias Informáticas, Universidad Técnica de Manabí, Portoviejo, Manabí, Ecuador {mgiler2846,emilio.cedeno,walter.zambrano,michellc.zambrano, david.zambrano}@utm.edu.ec

Abstract. Currently, the implementation of chatbots or virtual assistants brings great benefits to the companies or institutions that use them, since they allow the user to establish communication through a program integrated into a given messaging system. Virtual assistants are programmed to communicate with the client and solve their doubts without a person attending them, and their main advantage is that they are always available to interact with users at any time. This research aims to evaluate how the use of chatbots affects the process of counseling students of the Technical University of Manabí. With the technological advances of recent years, the implementation of chatbots in education has increased, and the application of these virtual assistant systems in schools and universities has grown in popularity, particularly now that virtual education is widely used as a result of the COVID-19 pandemic, a situation faced by the vast majority of countries worldwide. The methodology used for this research is quantitative, bibliographic, and experimental: bibliographic because research is carried out on works related to the proposed topic, and experimental because it is based on the development of a chatbot in Telegram, which is executed and subsequently evaluated through a Likert-scale survey that aims to determine the users' perception of the QoS of the chatbot. The results obtained from the student surveys show a high level of acceptability and satisfaction with the chatbot service; therefore, it is recommended that this service be incorporated at the institutional level, in order to provide information support to users without time constraints. Keywords: Chatbot · Likert scale · Support · Education · Implementation

1 Introduction

This research refers to the topic of chatbots and their impact on the information support service to students of the Faculty of Computer Science of the Technical University of


Manabí. When we talk about chatbots, we refer to software systems based on artificial intelligence and natural language processing that are able to perceive their environment, process the information they perceive, and give a rational answer, generating a coherent conversation. Today, users expect the information they want to be instantly accessible. Since their creation, chatbots have played the role of information and/or service providers, especially in the world of e-commerce [1]. Nowadays, anyone can create their own chatbot on various social networking platforms such as Facebook Messenger, Twitter, and WeChat without advanced programming skills or other sophisticated technical skills. Similarly, there are other chatbot development platforms efficient enough to be implemented in most companies, as is the case of IBM Watson Assistant. This research was conducted out of interest in knowing the impact of the use of chatbots in the information support service to students of the Technical University of Manabí, in order to conclude whether it is a tool that would bring many benefits to students and whether its further implementation is convenient, considering that many educational institutions in Ecuador already make use of these tools. This research has the objective of investigating the importance and characteristics of the chatbot in the student-authority communication process. It should be noted that the chatbot only provides information about the academic processes of the Faculty of Computer Science (hereinafter FCI). The importance of this work lies in the research on these rapid-response systems, which help greatly in modern education. Based on the information presented above, chatbots and their impact on the information support service to students of the FCI of the Technical University of Manabí will be analyzed; it is known with certainty that not all universities in Manabí implement these systems. The present study is carried out through experimental, quantitative, and bibliographic research.

2 Materials and Methods

The methodology of this research is quantitative, bibliographic, and experimental. The bibliographic method was applied by searching for information in journals, institutional repositories, and web pages whose content was related to chatbots and their characteristics, in order to draw conclusions on their importance within higher education institutions and their main characteristics. The experimental method was applied through the development of a chatbot in Telegram, which was shared with the students of the Technical University of Manabí, who, once they had used it, gave their opinions regarding the quality of service (hereinafter QoS). The instrument used to collect these opinions was a virtual questionnaire with a Likert-scale structure; in turn, the information obtained allowed the construction of the conclusions of this work and, consequently, their discussion in comparison with other similar works.

2.1 Use of Chatbots in Higher Education Institutions

Today, online services in higher education institutions are an essential part of the interaction with users for promotion, academic services, and/or information related to the


institution's activities. Applications based on chat services are ideal for providing a variety of responses to user requests or questions. Research on the use of chatbots shows a propensity towards their use, especially to contribute to the teaching and learning process. For example, in [2] the authors present a chatbot that supports students in English language study, and the results show that most of the elementary functions of the software are used by students. Furthermore, in [3] pedagogical agents and chatbots are combined to provide students with access to personalized feedback to guide them through the virtual environment. The advantages offered by chatbots in higher education institutions are several; in [4] the authors explain the development of a chatbot for the guidance of third-level students.

2.2 Chatbots and Their Importance in University Communication

Students in higher education regularly face many difficulties that affect their motivation, participation, and academic success. Disconnection from the academic environment can become a reason for dropout. Therefore, student engagement should be encouraged through online tools, in this case chatbots. In 2022, the University of Leeds (UK) began using Differ, a social platform that includes a chatbot called "Bo", which aims to promote and engage students in an interactive way. For the evaluation of the chatbot's performance, a survey was applied to students; subsequently, for data collection, a lie meter, three focus groups, and a semi-structured interview were used. This study resulted in a high evaluation of the quality of the chat service, and aspects such as design and response time were also considered. The students stated that the chat is very interactive and elegant and that it is fast and accurate in its responses [5]. The chatbot has become an important tool for institutional information services, because it simulates a conversation through a textual, auditory, or graphical user interface. For universities, it is an additional worker, capable of responding to academic and non-academic queries from any user. The development of a chatbot requires knowledge of natural language processing, and one of the languages dedicated to those functionalities is Python. Some social networks have bet on this technology to offer automatic chat services [6].

2.3 Conversational Protocol

This protocol serves to define a friendly treatment of the user, since it contains the rules to follow when establishing communication and provides a guide to define the conversational flow of the chatbot [7]. The protocol is composed of three points that allow offering personalized attention, so that the user obtains a clear and concise answer on a specific topic. First, the chatbot starts with a greeting; this is essential because it explains the situation or context of the chatbot's function, so the user understands how the conversation will develop. Next, the chatbot analyzes the user's request in order to process it and provide a response. If the situation requires it, the chatbot will continue to respond as long as the user makes further queries. Finally, when the chatbot has solved the user's doubts and the user ends the subscription to the chatbot, the conversation is terminated.


if necessary. Finally, if the chatbot has solved the user’s doubts and the user ends the subscription to the chatbot, the conversation is terminated. 2.4 A Session with @MGUTMbot In this research, a bot has been developed on the Telegram messaging platform that does not require the user to identify themselves with the student’s username and/or password. The form of identification in Telegram is automatic, by means of the telephone number from which they interrelate. On the other hand, since the “Chatbot FCI-UTM” uses the same interface as all the other Telegram chats, it was not necessary to explain to the students how it worked, since for the users it was already a very well known and friendly interface; the same one that they often use for different purposes (leisure, entertainment, reading news, communication…). When using the “FCI-UTM” chatbot for the first time, the student will start by pressing the “start” button to start using the chatbot (pressing the button will send the /start command automatically). The bot will then show the student the list of available options (this list can always be returned by pressing the “Go Back” button). For each option, the name of the option is detailed (always related to information related to any topic that is of interest to a student of the Faculty of Computer Science of the Technical University of Manabi) (see Fig. 1).

Fig. 1. User interface of @MGUTMbot and several examples in the usage session.

The student can select any of the options through the different main commands, such as /academic_calendar, which returns the academic calendar for the academic period May 2022 to September 2022 in JPG format. Since this option has no sub-menus, the main menu options remain the same.


2.5 Architecture

Users can interact with bots via messages, commands, and inline requests. These are handled using an HTTPS callback to the Bot API provided by Telegram; this is what is known as a WebHook, i.e., an HTTPS callback triggered by the user's activity [8]. Additionally, it is essential for the Telegram servers to know the address of the external server where the bot is implemented and its token (the unique identifier or code of each bot implemented in Telegram); it is therefore necessary to register a WebHook with the Bot API, sending the URL of the external server along with the bot's identifier. Messages, commands, and requests submitted by users are processed in the software running on Telegram servers. Data exchange between the Telegram servers and the external server or local Bot API server where the bot is hosted is done in JSON (JavaScript Object Notation) format, a lightweight text format for data exchange that describes the data with a dedicated syntax used to identify and manage it [8]. This is best exemplified in Fig. 2.

Fig. 2. Information flow in which the information is produced.
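As a concrete illustration of the WebHook registration step described above, the short sketch below calls the Telegram Bot API method setWebhook; the token and the external URL are placeholders, not the values used by the actual bot.

import requests

TOKEN = "123456:ABC-DEF"                          # placeholder for the bot's unique token
EXTERNAL_URL = "https://example.org/fci-utm-bot"  # placeholder for the bot server's HTTPS address

# setWebhook tells Telegram where to deliver user updates for this bot
resp = requests.post(f"https://api.telegram.org/bot{TOKEN}/setWebhook",
                     data={"url": EXTERNAL_URL})
print(resp.json())  # Telegram replies with a JSON object describing the result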

2.6 Implementation

To test our project we developed a prototype called "FCI-UTM". As the platform for our project we chose Telegram because, unlike WhatsApp, Telegram offers an API for the creation of bots. The "FCI-UTM" chatbot architecture makes use of a local Bot API server, as presented in Fig. 2. The bot communicates with Telegram using the Bot API of this platform (https://github.com/tdlib/telegram-bot-api) [7]. To keep the files hosted persistently and reliably, they are automatically stored on Telegram servers; the advantage is that the data handled by the bot therefore resides on Telegram's infrastructure. Figure 3 shows the set of commands available in the developed bot.
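The following is a hedged sketch of command handlers in the style of the FCI-UTM bot, written with the python-telegram-bot library; the library choice, the file name, and the use of polling instead of the WebHook described above are simplifying assumptions made only for illustration.

from telegram import Update
from telegram.ext import ApplicationBuilder, CommandHandler, ContextTypes

async def start(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # /start: greet the student and point to the available options
    await update.message.reply_text(
        "Welcome to the FCI-UTM bot. Try /academic_calendar to get the academic calendar.")

async def academic_calendar(update: Update, context: ContextTypes.DEFAULT_TYPE):
    # /academic_calendar: send the May-September 2022 calendar as a JPG image
    with open("academic_calendar_2022.jpg", "rb") as f:  # hypothetical file name
        await update.message.reply_photo(photo=f)

app = ApplicationBuilder().token("BOT_TOKEN").build()
app.add_handler(CommandHandler("start", start))
app.add_handler(CommandHandler("academic_calendar", academic_calendar))
app.run_polling()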


Fig. 3. Bot commands.

3 Results

Once the chatbot has been implemented, it is essential to analyze the results obtained from the surveys and thus evaluate the degree of satisfaction of the students who have used it. In addition, in this section a comparison of the results with other related work is made. In order to frame the research and the results, it is essential to establish the place where the project was developed. The work was carried out in Ecuador at the Technical University of Manabí. It is important to mention that no system at the University has an integrated chatbot; therefore, the research and results are confined to the Faculty of Computer Science at UTM. The main problem faced by the UTM systems is that, in enrollment periods, the systems collapse due to the large number of students who must access them, affecting even students who only want to consult information. The methodology used to evaluate the results was the Likert scale, because this method asks respondents to map their personal opinions on a topic onto a discrete scale. This method has several positive qualities for survey designers and analysts [9]. The main advantage of this methodology is that questions posed with Likert-scale responses are simple to design and understand. Likewise, the results of the questions are quite simple to analyze statistically, since with this methodology the qualitative opinion


of the user is translated into a quantitative measure. With the Likert scale, respondents are asked to answer a questionnaire that requires them to indicate their level of agreement or disagreement with a series of statements (Fig. 4).
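As an illustration of this qualitative-to-quantitative mapping, the sketch below tallies the answers to one statement into the percentage per agreement level; the use of pandas and the sample answers are assumptions, and the actual percentages are those reported in the subsections that follow.

import pandas as pd

scale = ["Strongly disagree", "Disagree", "Neither agree nor disagree",
         "Partially agree", "Strongly agree"]

# Hypothetical answers of a few respondents to a single Likert statement
answers = pd.Series(["Strongly agree", "Partially agree", "Strongly agree",
                     "Neither agree nor disagree", "Partially agree"])

# Percentage of respondents per level of agreement
percentages = answers.value_counts(normalize=True).reindex(scale, fill_value=0) * 100
print(percentages.round(1))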

Fig. 4. Statements of the questionnaire applied to the research participants.

According to [10] and [11], there are five characteristics that must be considered for an analysis of this type:
1. Ease of learning: the system must be easy to learn so that the user can quickly start interacting with it.
2. Memorability: the way the system is used needs to be easy to remember, so that when users return after a while they know how to use it without having to relearn it.
3. Efficiency of use: the system needs to be efficient in use, so that once it is learned, the user reaches a high level of productivity.
4. Error prevention: a system focused on usability should have a small error rate; in addition, if users make errors, they must be able to return to an error-free state without losing their work.
5. Satisfaction: the set of emotions and feelings related to the use of a system.
Before analyzing the points related to the use of the chatbot, surveys were applied to the users who made use of the bot, seeking to capture their opinions about the chatbot


"FCI-UTM"; in total, 134 respondents contributed to this research. The user experience dimensions mentioned above are analyzed below.

3.1 Ease of Learning

Ease of learning was analyzed based on questions that sought to identify whether users claimed to have been successful in obtaining information support through the chatbot (Q.1.1). Considering that the user understood how to use the "FCI-UTM" chatbot, students were also asked about the chatbot interface (Q.1.2) and whether the information obtained was useful (Q.1.3). In (Q.1.1), the survey results show that 40.3% of the respondents totally agree that the chatbot adequately met their needs for information on procedures at the FCI, and 38.8% partially agreed with the statement. This is an optimistic figure considering that the chatbot is only a prototype that can be developed much further. In addition, 14.2% neither agreed nor disagreed, 2.2% disagreed, and 4.5% strongly disagreed. In (Q.1.2), about the chatbot interface, 41.8% of the respondents strongly agreed that the interface was visually pleasing and 40.3% partially agreed with this statement; with 82.1% approval, this indicates that the Telegram platform has a low rate of rejection of the bot interface. In addition, 11.2% neither agreed nor disagreed, 1.5% disagreed, and 5.2% strongly disagreed. In (Q.1.3), on the usefulness of the information provided by the chatbot, 44% of the respondents strongly agreed that the information provided is useful and 38.8% partially agreed with this statement. In addition, 12.7% neither agreed nor disagreed, 2.2% disagreed, and 2.2% totally disagreed.

3.2 Memorization Capacity

To obtain a simple-to-use solution, several design explorations are needed, in addition to discussions about the priority of functions: the most fundamental ones and those that can be eliminated or ignored [12]. In other words, to facilitate the user's memorization capacity, the system should be developed to be as intuitive as possible. In this context, respondents were asked how intuitive the process of obtaining information through the chatbot was (Q.2.1). The rest of the questions are detailed below. In (Q.2.1), the survey results show that 52.2% of the respondents strongly agree that the chatbot is quite intuitive and simple to use, and 37.3% partially agreed with the statement. In other words, 89.5% affirmed the simplicity of use, which is quite positive for the success of this tool, since it is essential that the user can master the system. In addition, 6.7% neither agreed nor disagreed, no one disagreed, and 3.7% strongly disagreed. In (Q.2.2), users were asked if they would use the FCI-UTM chatbot the next time they need to search for information related to the FCI: 32.1% of the respondents totally agree that they would use the chatbot again and 41.8% partially agree with this statement. With only 73.9% approval, this would indicate that


the chatbot would need to improve its service, because these data are close to the minimum acceptable. Based on comments and suggestions from respondents, this is because users want more information options, which can be addressed considering that it is only a prototype. In addition, 20.1% neither agreed nor disagreed, 2.2% disagreed, and 3.7% totally disagreed. In (Q.2.3), in relation to the feeling of conversing with a bot, 19.4% of the respondents did not have this feeling and 32.8% partially agreed that they did not have the feeling of conversing with a bot. This gives a result that 47.8% of the respondents did not feel immersed in the dialogue with the bot; it can be said that users perceived the interaction as mechanical and automated, which could be improved to enhance the user experience. In addition, 26.1% neither agreed nor disagreed, 14.2% disagreed, and 7.5% strongly disagreed.

3.3 Efficiency of Use

According to [13], software is considered efficient when, once learned, it provides the user with a high level of productivity. In this project, efficiency was evaluated by asking users about their perception of the chatbot's response time (Q.3.1), about the feeling of executing unnecessary commands (Q.3.2), and, finally, whether they consider it more interesting to consult information through the chatbot rather than through conventional information systems (Q.3.3). It is worth mentioning that, in the case of chatbots, whose differential is their ability to provide real-time responses, immediate responses are essential [14]. In (Q.3.1), the survey results show that 12.7% of the respondents strongly agree that the chatbot response time was too long, and 20.9% partially agreed with the statement that the chatbot response time was slow. On the other hand, 20.1% neither agreed nor disagreed. Given these responses, it is feasible to assume that the chatbot response time is adequate, although it could be improved; the perceived speed could also be influenced by external factors such as the speed of the users' internet connection. In addition, 18.7% disagreed and 27.6% strongly disagreed. In (Q.3.2), on the feeling of executing unneeded commands, only 11.2% of the respondents had this feeling and 29.1% partially agreed with the statement. It can be concluded that, for most of the users, the commands presented were mostly or entirely the necessary ones. In addition, 26.9% neither agreed nor disagreed, 17.9% disagreed, and 14.9% strongly disagreed. In (Q.3.3), the results indicated that 35.1% of the users totally agree that it is more interesting to consult information through the FCI-UTM chatbot than through the conventional information systems, and 36.6% partially agree with this statement; on the other hand, 24.6% neither agree nor disagree. This means that 71.7% liked the idea of being able to consult information about the FCI through a chatbot, which is positive considering that the bot could still be much better. Additionally, only 0.7% disagreed and 3% strongly disagreed.


3.4 Error Prevention

We can consider an error to be any action performed by the user that does not lead to an expected result [15]. It can then be said that an error can result from a wrong design (one that prompts the user to make wrong decisions) or from programming flaws that lead the user to a dead end [7]. Given this, participants were asked whether the chatbot presented failures in its use (Q.4.1). Error messages in Telegram are usually clear and simple, so users were also asked whether error messages arose during use (Q.4.2) and whether these messages were clear and simple (Q.4.3). In (Q.4.1), the survey results show that 11.2% of the respondents totally agree that the chatbot presented failures during its use, 12.7% partially agree with this statement, and 19.4% neither agree nor disagree. As can be seen, the respondents who consider that they encountered errors are in the minority, which could also be due to the simplicity of the system. In addition, 25.4% disagreed and 31.3% totally disagreed. In (Q.4.2), on whether error messages arose during the use of the chatbot, 11.9% of the respondents strongly agreed that the chatbot did present error messages and 12.7% partially agreed with this statement; with 56.7% disapproving, this indicates that error messages were generally not presented during the use of the bot. In addition, 18.7% neither agreed nor disagreed, 22.4% disagreed, and 34.3% strongly disagreed. In (Q.4.3), on whether the error messages presented were simple, 16.7% of respondents strongly agreed that the error messages were clear and 36.5% partially agreed with this statement; 33.3% neither agreed nor disagreed and only 13.5% had an unfavorable opinion. It should be clarified that this question was not mandatory, as it was intended only for users who encountered errors.

3.5 Satisfaction

According to [16], satisfaction represents a set of emotions and feelings related to the use of a system; that is, it is a user-related evaluation directly linked to the pleasure of use and the willingness to recommend the system. In (Q.5.1), the survey results show that 35.1% of the respondents strongly agree that they felt comfortable chatting with the bot, and 42.5% partially agreed with the statement that they felt comfortable with the experience of chatting with a bot to search for information related to the FCI. With a 77.6% approval rating, this is a positive statistic, because most users were satisfied with using the chatbot. In addition, 17.9% neither agreed nor disagreed, 1.5% disagreed, and 3% strongly disagreed. In (Q.5.2), on whether they would recommend the FCI-UTM chatbot to their peers, the results show that 40.3% of the respondents strongly agree that they would recommend the use of the chatbot to the rest of their peers and 39.6% partially agree with this statement; only 14.9% neither agree nor disagree, and finally only 5.2% do not agree with recommending the use of this tool. In (Q.5.3), on whether they will positively remember the experience of using the UTM chatbot, 38.8% of the respondents strongly agreed that they will positively remember the experience and 44% partially agreed with this statement,


13.4% neither agreed nor disagreed and only 3.7% had an unfavorable opinion about their experience using the chatbot. Students are satisfied with the service offered by the chatbot, taking into consideration the quality of information, security, and availability. According to the research in [17], a chatbot should be an informative tool that meets three criteria: availability, security, and quality of information. The FCI-UTM chatbot therefore meets these criteria, although it will obviously be improved in the future.

4 Conclusions

Universities can leverage the resources offered by messaging platforms such as Telegram or other existing platforms and adapt them to function as a chatbot. In addition, bots could provide both professors and students with a continuous flow of university-related information (academic calendars, class schedules, curriculum reports, student IDs, academic history, syllabi, grades, professors' names and emails, among others). This information would help students to have an agile and reliable means of consultation. In addition to providing students with the consulted information, the bots can send notifications alerting students subscribed to the chatbot about any event or activity. With proper use of the bot, students would not be forced to consult other systems for most university-related information. In fact, they would not even have to learn how to use chatbots, as students are already very accustomed to messaging applications: since a chatbot actually simulates a conversation in a messaging channel, students know very well how to interact in this type of dialogue without the need for previous training. In this work we have addressed the implementation and use of chatbots to provide information support to students of the Faculty of Computer Science. Despite being only a prototype, the vast majority of students consider that a chatbot can adequately meet the need of obtaining information related to the Faculty of Computer Science; by improving this system, general information related to the university could also be provided. All these features could serve as an incentive to motivate students to make use of this tool. Currently there are few studies that address the user experience in the use of chatbots in university centers, so in this research we set out to analyze the case of the "FCI-UTM" chatbot. From a survey with statements built on the categories constructed by [10] and [11], it was identified that in most cases the implementation of a well-developed chatbot is able to provide a pleasant experience for the user, as long as user feedback is taken into account. In the case of the "FCI-UTM" chatbot, respondents were asked whether they had any suggestions, and a large portion of them provided ideas to improve satisfaction with the chatbot; if these comments and suggestions are taken into account, the chatbot will be much more versatile in meeting the users' informational needs.


References
1. Katayama, S., Mathur, A., Van den Broeck, M., Okoshi, T., Nakazawa, J., Kawsar, F.: Situation-aware emotion regulation of conversational agents with kinetic earables. In: 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII), pp. 725–731 (2019)
2. Pham, X., Pham, T., Nguyen, Q., Nguyen, T., Cao, T.: Chatbot as an intelligent personal assistant for mobile language learning. In: ICEEL 2018: Proceedings of the 2018 2nd International Conference on Education and E-Learning, pp. 16–21 (2018)
3. Ahn, J., et al.: Wizard's apprentice: cognitive suggestion support for wizard-of-Oz question answering. In: International Conference on Artificial Intelligence in Education, pp. 630–635 (2017)
4. Chun Ho, C., Lee, H., Lo, W., Lui, K.: Developing a chatbot for college student programme advisement. In: 2018 International Symposium on Educational Technology (ISET), pp. 52–56 (2018). https://doi.org/10.1109/ISET.2018.00021
5. Abbas, N., Whitfield, J., Atwell, E., Bowman, H., Pickard, T., Walker, A.: Online chat and chatbots to enhance mature student engagement in higher education. Int. J. Lifelong Educ. 41(3), 308–326 (2022). https://doi.org/10.1080/02601370.2022.2066213
6. Budiharto, W., Andreas, V., Agung, A.: AVA: knowledge-based chatbot as virtual assistant in university, vol. 13, no. 4, pp. 308–326 (2022). https://doi.org/10.24507/icicelb.13.04.437
7. Cordero, J., Toledo, A., Guamán, F., Barba-Guamán, L.: Use of chatbots for user service in higher education institutions. In: 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–6 (2020). https://doi.org/10.23919/CISTI49556.2020.9141108
8. Valera, J.: Telegram bots con python (March 2020). GitHub. https://github.com/jmv74211/Telegram_bots
9. Kuiper, P., Hood, K.: Examining sentiment analysis when evaluating survey responses. In: 2019 IEEE 13th International Conference on Semantic Computing (ICSC), pp. 412–415 (2019). https://doi.org/10.1109/ICOSC.2019.8665581
10. Nielsen, J.: Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco (1994)
11. Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction, 5th edn. Pearson (2009)
12. Teixeira, F.: Introdução e Boas Práticas em Ux Design. Casa do Código, 1ª edição (2014)
13. Lowdermilk, T.: Design Centrado no Usuário: Um guia para o desenvolvimento de aplicativos amigáveis. Novatec Editora, 1ª edição (2019)
14. Galert, A.: Chatbot Report 2018: Global Trends and Analysis. Chatbots Magazine (2018). https://chatbotsmagazine.com/chatbot-report-2018-global-trends-and-analysis-4d8bbe4d924b
15. Ferreira, Á.: O Estudo da Experiência do Utilizador e da Usabilidade em Contexto Móvel: Desenvolvimento de uma Aplicação Móvel Intitulada Think an App. Faculdade de Belas Artes da Universidade do Porto, Mestrado em Design Gráfico e Projetos Editoriais (2015). https://repositorio-aberto.up.pt/bitstream/10216/82396/2/37998.pdf
16. Montero, Y.: Experiencia de Usuario: Principios y Métodos (Spanish Edition). Independently published (2017)
17. Alzaabi, N., Almsbehi, M., Almansoori, H., Alhosani, M., Ababneh, N.: ADPOLY student information chatbot. In: 2022 The 5th International Conference on Data Storage and Data Engineering (2022). https://dl.acm.org/doi/10.1145/3528114.3528132

Neural Networks on Noninvasive Electrocardiographic Imaging Reconstructions: Preliminary Results
Dagoberto Mayorca-Torres1,2(B), Alejandro José León-Salas2, and Diego Hernán Peluffo-Ordoñez3
1 Universidad Mariana, Grupo de Investigación de Ingeniería Mecatrónica, Pasto, Colombia [email protected]
2 Departamento de Lenguajes y Sistemas Informáticos, Universidad de Granada, C/Periodista Daniel Saucedo Aranda s/n, 18071 Granada, Spain
3 Modeling, Simulation and Data Analysis (MSDA) Research Program, Mohammed VI Polytechnic University, Ben Guerir, Morocco

Abstract. In the inverse electrocardiography (ECG) problem, the objective is to reconstruct the heart's electrical activity from a set of body surface potentials, using the forward model and the geometry of the torso. Over the years, researchers have used various approaches to solve this problem, from direct, iterative, and probabilistic methods to those based on deep learning. The interest in the latter, among the wide range of techniques, is that the complexity of the problem can be significantly reduced while increasing the precision of the estimation. In this article, we evaluate the performance of a deep learning-based neural network compared to the Tikhonov method of zero order (ZOT), first order (FOT), and second order (SOT). Preliminary results show an improvement in performance over real data when Pearson's correlation coefficient (CC) and the root mean square error (RMSE) are calculated. The CC's mean value and standard deviation for the proposed method were 0.960 (0.065), well above ZOT, which was 0.864 (0.047).

Keywords: Computational geometry · Graph theory · Hamilton cycles

1 Introduction

The ECG is the tool most commonly used for diagnosing various cardiac diseases, compared to conventional electrophysiology mapping recordings. However, due to the low resolution and some limitations of these techniques, electrocardiographic image reconstruction (ECGI), a method that allows obtaining high-resolution maps of cardiac electrical activity, has been chosen [6,9]. The multi-lead body surface potentials (BSMP) and the digitization of the torso-heart geometry, usually based on computed tomography (CT), allow obtaining ECGI [12]. ECGI is the basis


for some promising clinical applications, such as diagnosing and detecting the origins of focal atrial and ventricular tachycardia and ectopic beats, and optimizing cardiac resynchronization and ablation [2]. Despite ECGI efforts and advances, the noninvasive reconstruction of cardiac activity remains challenging for clinical, mathematical, and engineering research [10]. The accuracy of an ECGI reconstruction depends on the reliability of the direct and inverse model estimates. The solution to the direct problem of electrocardiography describes electromagnetic wave propagation from the surface of the heart to the potential measurements on the surface of the torso; the objective is to find a matrix that relates the potentials of both surfaces, taking into account the geometry and conductivities of the wave propagation medium. The anisotropy and discontinuities of the medium (lungs, fatty tissue, bones, and muscles) determine how much the model can be simplified [8]. On the other hand, the solution to the inverse problem of electrocardiography seeks to reverse the natural process of electromagnetic wave propagation, where the objective is to find the heart potentials as a function of the torso potentials [6]. The inverse problem is inherently ill-posed, which means that any error in the measured signals or in the model can influence the solution: there is no unique solution, and small perturbations in the measured data can lead to large and unstable solutions in the reconstruction. Some studies have focused on regularization methods incorporating mathematical, physical, and electrophysiological knowledge and constraints; the purpose is to find a balance between the unconstrained solution and overly restrictive solutions. One of the most widely used methods, described extensively in the literature, is Tikhonov regularization, which makes it possible to obtain a stable solution by limiting the amplitude of the reconstructed cardiac potentials [7,13]. In recent years, the modeling of complex biological systems through machine learning [11] or deep learning techniques, specifically neural networks, has been explored. However, the effect of these deep learning-based methods has not yet been reported in comparison with regularization methods such as Tikhonov.
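For reference, the Tikhonov formulation referred to above can be written in its standard form as follows, where A is the forward (transfer) matrix, y the vector of torso potentials, x the vector of epicardial potentials, and \lambda the regularization parameter:

\hat{x}_{\lambda} = \arg\min_{x} \left\{ \| A x - y \|_2^2 + \lambda^2 \| L x \|_2^2 \right\},
\qquad
\hat{x}_{\lambda} = \left( A^{\top} A + \lambda^2 L^{\top} L \right)^{-1} A^{\top} y,

where L is the identity matrix for zero-order Tikhonov (ZOT), a first-derivative (gradient) operator for first order (FOT), and a Laplacian operator for second order (SOT).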

2 Materials and Methods

2.1 Experimental Dataset

This study’s experimental data are from the CRVTI-SCI Institute (Salt Lake City, Utah) torso tank experimental setup, available at http://edgar.sci.utah. edu/. The CVRTI-SCI database prepared by Dr. Robert S. MacLeod and collaborators at the University of Utah uses the isolated heart from a Langendorff perfused canine where a second live canine provides circulatory support. The experimental process conformed to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health [1,3]. The heart was suspended in a human torso-shaped tank with 192 electrodes placed on the torso with a spacing of 40.2 ± 16.8 mm and 599 electrodes placed on the epicardium (inter-electrode spacing of 6.5 ± 1.3 mm). The epicardial and torso tank electrodes were referenced to a central Wilson terminal and sampled at a frequency of 1000 Hz.

2.2 Preprocessing the Data Set

During the preprocessing stage, each input signal is filtered to prevent difficulties in the numerical reconstruction. First, a moving average filter with a time window of 20 ms is applied. Subsequently, a baseline-shift filter with a window length of 3000 ms is applied.
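To make this preprocessing step concrete, here is a minimal sketch (not the authors' implementation) in Python/NumPy, assuming the 1000 Hz sampling rate reported in Sect. 2.1. The exact baseline-shift filter is not specified in the text, so a sliding-median estimate over the 3000 ms window is used purely as an assumption.

```python
import numpy as np

FS = 1000  # sampling rate in Hz, as reported for the torso-tank recordings

def moving_average(signal, window_ms=20, fs=FS):
    """Smooth one channel with the 20 ms moving-average filter described above."""
    n = max(1, int(window_ms * fs / 1000))
    kernel = np.ones(n) / n
    return np.convolve(signal, kernel, mode="same")

def remove_baseline(signal, window_ms=3000, fs=FS):
    """Estimate slow baseline drift with a long sliding median (an assumption) and subtract it."""
    n = max(1, int(window_ms * fs / 1000))
    half = n // 2
    padded = np.pad(signal, half, mode="edge")
    baseline = np.array([np.median(padded[i:i + n]) for i in range(signal.size)])
    return signal - baseline

# Example: preprocess a (channels x samples) matrix of body-surface potentials.
bspm = np.random.randn(192, 5000)          # placeholder data: 192 torso leads
clean = np.array([remove_baseline(moving_average(ch)) for ch in bspm])
```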

2.3 Inverse Mapping Methods

The ECGI approach used in this study reflects the most common pipeline, comprising two stages: the forward problem solution and the inverse problem solution.

Forward Problem Solution. In general, the direct problem is solved through an integral over the solution domain that solves the weak form for simple geometries such as concentric or even eccentric spheres. The resulting partial differential equations (PDEs) can be solved by applying analytical expansions. However, for complex geometries such as accurate torso models, numerical methods must be applied; two, in particular, are mentioned in the literature: the finite element method (FEM) and the boundary element method (BEM). The forward matrix was provided by the CVRTI-SCI database and calculated through the boundary element method (BEM). The geometries employ homogeneous conductivity values, with mesh resolutions of 7.4 ± 2.0 mm for the epicardial mesh and 24.2 ± 5.1 mm for the torso-tank mesh. Subsequently, the potential measurements of the torso and epicardial electrodes are integrated onto the refined mesh nodes.

Inverse Problem Solution. The inverse problem of electrocardiography has challenged many scientists; with more than three decades of study [4], its solution has been addressed from various approaches. Given the instability of the solutions to the inverse problem, so-called regularization methods, which incorporate constraints into the solutions, are required to produce more realistic results [5]. The most widely used regularization method is Tikhonov, a quasi-static method capable of reconstructing, with relatively good accuracy, the most important events in cardiac excitation. The methods used to regularize the outputs were as follows:

– Tikhonov regularization: zero order (ZOT), first order (FOT), and second order (SOT).
– Deep learning technique (neural networks). The architecture of the layers is shown in Fig. 1.

Evaluation Methods. The following features of the recorded and ECGI-reconstructed electrograms were quantified and compared:

– Electrogram Amplitude: Electrogram amplitude was measured as the mean of the peak-to-peak amplitudes from each lead. Comparisons of ECGI to


Fig. 1. Representative recorded and reconstructed electrograms of output channel number 1 using the proposed method

recorded electrogram amplitudes were made using the root mean square error (RMSE):

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^{2}} \qquad (1)$$

– Electrogram Morphology: The morphology of the ECGI-reconstructed electrograms was compared to that of the recorded electrograms over the QRS complex using Pearson's correlation:

$$r = \frac{n\sum_{i=1}^{n} y_i\,\hat{y}_i - \left(\sum_{i=1}^{n} y_i\right)\left(\sum_{i=1}^{n} \hat{y}_i\right)}{\sqrt{\left(n\sum_{i=1}^{n} y_i^{2} - \left(\sum_{i=1}^{n} y_i\right)^{2}\right)\left(n\sum_{i=1}^{n} \hat{y}_i^{2} - \left(\sum_{i=1}^{n} \hat{y}_i\right)^{2}\right)}} \qquad (2)$$
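For reference, a compact sketch (not the authors' code) of the zero-order Tikhonov baseline and the two metrics of Eqs. (1) and (2). Here A stands for the BEM forward matrix and y for the torso potentials; the regularization weight and the random toy data are illustrative assumptions only (in practice lambda would be chosen by a criterion such as the L-curve).

```python
import numpy as np

def tikhonov_zero_order(A, y, lam):
    """Zero-order Tikhonov: x = argmin ||A x - y||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + (lam ** 2) * np.eye(n), A.T @ y)

def rmse(y, y_hat):
    """Root mean square error, Eq. (1)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y - y_hat) ** 2))

def pearson_cc(y, y_hat):
    """Pearson correlation coefficient, Eq. (2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    n = y.size
    num = n * np.sum(y * y_hat) - np.sum(y) * np.sum(y_hat)
    den = np.sqrt((n * np.sum(y ** 2) - np.sum(y) ** 2) *
                  (n * np.sum(y_hat ** 2) - np.sum(y_hat) ** 2))
    return num / den

# Toy example with a random "forward matrix" (192 torso leads x 599 epicardial nodes).
rng = np.random.default_rng(0)
A = rng.standard_normal((192, 599))
x_true = rng.standard_normal(599)
y = A @ x_true + 0.01 * rng.standard_normal(192)
x_est = tikhonov_zero_order(A, y, lam=0.1)
print(rmse(x_true, x_est), pearson_cc(x_true, x_est))
```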

3 Results

Performance evaluations of the proposed methods comprise quantitative metrics and qualitative comparisons. The correlation coefficient (CC) and the RMSE are used for the quantitative evaluation of the results, comparing the estimated and actual epicardial potential vectors. The map3d visualization software was used for the qualitative assessments, to display isopotential maps of the actual and estimated epicardial potentials. In Figs. 2 and 3, we present the box plots of the CC and RMSE values for all instants of the QRS complex. These figures represent the CC and RMSE values for both training-set scenarios, for the Tikhonov regularization methods and for the proposed deep learning-based model.


Fig. 2. Box plots for the CC values

Fig. 3. Box plots for the RMSE values

Evaluation Methods. Below, the results of the Pearson correlation are presented with the labels ZOT, FOT, SOT, and RNN; note that the Tikhonov regularization methods do not require a training scenario. The results presented in Fig. 2 show that the methods based on Tikhonov regularization have the lowest CC values; the average values (standard deviations) obtained for ZOT, FOT, SOT, and RNN are 0.864 (0.047), 0.507 (0.103), 0.065 (0.032), and 0.960 (0.065), respectively. Figure 3 shows that the average RMSE values obtained for ZOT, FOT, SOT, and RNN are 0.714 (0.556), 2.090 (1.602), 18.619 (16.398), and 0.087 (0.100), respectively.

Evaluation of Electrograms. In Figs. 4 and 5, we present the graphs of the electrograms reconstructed by the proposed method and the real epicardial


Fig. 4. Representative recorded and reconstructed electrograms of output in (t = 20 ms) using the proposed method

Fig. 5. Representative recorded and reconstructed electrograms of output in (t = 100 ms) using the proposed method

potential of channels 0, 15, 30, 45, 60, 75, 90, 105, 120, 135, 150 and 165 at two different times. The times are selected so that the first is closest to the stimulation time in the heart (t = 20 ms) and the second, when the wave has propagated along the


epicardial surface (t = 100 ms). In this paper, we present a deep learning-based approach for reconstructing potential distributions on the epicardial surface of the heart. The purpose was to develop a low-cost adaptive method that significantly reduces the dimension of the problem while increasing the estimation precision. Unlike the classical methods, this one does not require a direct model solution or a regularization parameter.

Evaluation of Potential Maps. In Figs. 6 and 7, we present the maps reconstructed by the proposed method and the real epicardial potential at time t = 198 ms.

Evaluation of Metrics. Instead, the method builds a first approximation with a dense single-layer model, which provides an approximate solution. The method uses the Utah database, and the quantitative and qualitative evaluation provides results from various perspectives. Despite obtaining a CC value above Tikhonov, it is necessary to compare with at least one other dataset to reduce bias and overfitting. Therefore, in the future, we intend to evaluate the methods with similar data sets available in the EDGAR database [1]. The following parameter values were used for the RNN algorithm to solve the inverse ECG problem: the number of iterations was 10, with a loss of 0.05 and an average of 900 µs per iteration. This study starts from the assumption that the dataset is a good training set, representative of the electrophysiological properties of the available test data. For this reason, the results focus on increasing the precision of the possible inverse solutions. Tikhonov regularization is not the

Fig. 6. Reconstructed maps of output in (t = 198 ms) using the proposed method


Fig. 7. Real maps of output in (t = 198 ms)

only inverse solution method, so the results should be thoroughly evaluated with other ECGI methods for broader validity and greater clinical applicability.
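The dense single-layer mapping described above can be sketched as follows. This is an illustrative Keras example under assumptions: the layer sizes are taken from the electrode counts of Sect. 2.1 (192 torso leads in, 599 epicardial nodes out), and the training data and settings are placeholders rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

N_TORSO, N_EPI = 192, 599   # electrode counts from Sect. 2.1 (assumed layer sizes)

# Single dense layer mapping torso potentials to epicardial potentials.
inputs = tf.keras.Input(shape=(N_TORSO,))
outputs = tf.keras.layers.Dense(N_EPI)(inputs)   # linear activation
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")

# Placeholder training data: each time instant is one (torso, epicardial) sample pair.
X = np.random.randn(2000, N_TORSO).astype("float32")
Y = np.random.randn(2000, N_EPI).astype("float32")
model.fit(X, Y, epochs=10, batch_size=32, verbose=0)   # 10 iterations, as in the text

epi_estimate = model.predict(X[:1])   # reconstructed epicardial potentials for one instant
```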

4 Conclusions

The use of methods based on deep learning to solve the inverse problem of electrocardiography is still exploratory, because the models are poorly interpretable and their effectiveness across many databases has not yet been demonstrated. However, the preliminary results show that they can achieve results comparable to deterministic methods such as Tikhonov, which supports their potential use for reconstructing potentials in ECGI.

Acknowledgments. The authors would like to acknowledge the valuable support given by the SDAS Research Group (https://sdas-group.com/).

References

1. Aras, K., et al.: Experimental data and geometric analysis repository - EDGAR. J. Electrocardiol. 48(6), 975–981 (2015). https://doi.org/10.1016/j.jelectrocard.2015.08.008
2. Bear, L.R., et al.: How accurate is inverse electrocardiographic mapping? Circ. Arrhythm. Electrophysiol. 11(5), 6108 (2018). https://doi.org/10.1161/CIRCEP.117.006108. https://www.ahajournals.org/doi/10.1161/CIRCEP.117.006108


3. Cluitmans, M., et al.: Validation and opportunities of electrocardiographic imaging: from technical achievements to clinical applications. Front. Physiol. 9, 1305 (2018). https://doi.org/10.3389/fphys.2018.01305
4. Cluitmans, M.J.M., Clerx, M., Vandersickel, N., Peeters, R.L.M., Volders, P.G.A., Westra, R.L.: Physiology-based regularization of the electrocardiographic inverse problem. Med. Biol. Eng. Comput. 55(8), 1353–1365 (2017). https://doi.org/10.1007/s11517-016-1595-5
5. Cluitmans, M., et al.: In vivo validation of electrocardiographic imaging. JACC Clin. Electrophysiol. 3(3), 232–242 (2017). https://doi.org/10.1016/j.jacep.2016.11.012
6. Figuera, C., et al.: Regularization techniques for ECG imaging during atrial fibrillation: a computational study. Front. Physiol. 7, 466 (2016). https://doi.org/10.3389/fphys.2016.00466
7. Milanič, M., Jazbinšek, V., MacLeod, R.S., Brooks, D.H., Hren, R.: Assessment of regularization techniques for electrocardiographic imaging. J. Electrocardiol. 47(1), 20–28 (2014). https://linkinghub.elsevier.com/retrieve/pii/S0022073613005566
8. Potyagaylo, D., Dössel, O., van Dam, P.: Influence of modeling errors on the initial estimate for nonlinear myocardial activation times imaging calculated with fastest route algorithm. IEEE Trans. Biomed. Eng. 63(12), 2576–2584 (2016). https://doi.org/10.1109/TBME.2016.2561973
9. Rajagopal, A., Radzicki, V., Lee, H., Chandrasekaran, S.: Nonlinear electrocardiographic imaging using polynomial approximation networks. APL Bioeng. 2(4), 046101 (2018). https://doi.org/10.1063/1.5038046
10. Rodríguez-Sotelo, J.L., Peluffo-Ordóñez, D., Cuesta-Frau, D., Castellanos-Domínguez, G.: Unsupervised feature relevance analysis applied to improve ECG heartbeat clustering. Comput. Methods Programs Biomed. 108(1), 250–261 (2012). https://doi.org/10.1016/J.CMPB.2012.04.007
11. Sanchez-Pozo, N.N., Mejia-Ordonez, J.S., Chamorro, D.C., Mayorca-Torres, D., Peluffo-Ordonez, D.H.: Predicting high school students' academic performance: a comparative study of supervised machine learning techniques. In: Future of Educational Innovation Workshop Series - Machine Learning-Driven Digital Technologies for Educational Innovation Workshop 2021 (2021). https://doi.org/10.1109/IEEECONF53024.2021.9733756
12. Wang, L., Gharbia, O.A., Horáček, B.M., Sapp, J.L.: Noninvasive epicardial and endocardial electrocardiographic imaging of scar-related ventricular tachycardia. J. Electrocardiol. 49(6), 887–893 (2016). https://doi.org/10.1016/j.jelectrocard.2016.07.026
13. Zhou, Z., Han, C., Yang, T., He, B.: Noninvasive imaging of 3-dimensional myocardial infarction from the inverse solution of equivalent current density in pathological hearts. IEEE Trans. Biomed. Eng. 62(2), 468–476 (2015). https://doi.org/10.1109/TBME.2014.2358618

Socio-spatial Segregation Using Computational Algorithms: Case Study in Ambato, Ecuador

Manuel Ayala-Chauvin1, Paola Maigua2, Andrea Medina-Enríquez2, and Jorge Buele3,4(B)

1 Centro de Investigaciones de Ciencias Humanas y de la Educación CICHE, Universidad Indoamérica, Ambato 180103, Ecuador
[email protected]
2 Carrera de Arquitectura, Facultad de Arquitectura y Construcción, Universidad Indoamérica, Ambato 180103, Ecuador
{pmaigua,amedina}@indoamerica.edu.ec
3 Carrera de Ingeniería Industrial, Facultad de Ingeniería, Industria y Producción, Universidad Indoamérica, Ambato 180103, Ecuador
[email protected]
4 Department of Electronic Engineering and Communications, University of Zaragoza, 44003 Teruel, Spain

Abstract. Access to basic services, housing, and social security influences people's quality of life. Within cities, it is common to find specific sectors where groups with abundant resources predominate; the same occurs with the opposite group, driven by various social and economic conditions. For this reason, this study considers the population of Ambato, Ecuador, to evaluate the existence of socio-spatial segregation. The data are obtained from the latest census (2010), made publicly accessible by the Institute of Statistics and Census. The socioeconomic characterization of the population consists of the calculation of the living conditions index. A programming algorithm developed in the statistical software RStudio was used to process the information. With the calculated values, we generated geographic maps showing the location of the different social groups. The results show that values between 0.77 and 0.90 predominate in the west of the city. We also identify that only the fourth quartile achieves well-being and an abundance of resources, while households in the first quartile are well below the average. The information describes a very low and positive spatial autocorrelation, where most of the population is concentrated in the southwest of the city. Thus, this proposal, which combines computational algorithms for exposing the social and spatial characteristics of a specific population, is validated.

Keywords: Spatial segregation · Social interaction · Segregation indicators · RStudio

1 Introduction

Socio-spatial segregation shows the social differences between groups and their geographic distribution within a specific territory. The study of this phenomenon is not


recent; various concepts and approaches have evolved throughout the 20th century. Thus, in the late 1950s, the Economic Commission for Latin America and the Caribbean (ECLAC) and the Center for Economic and Social Development of Latin America (DESAL) promoted the analysis of poverty peripheries [1]. During the 1970s, researchers such as Falk and Cohen [2] opened the way for developing new methods and theories on the subject. Consequently, different variables were theorized to evaluate segregation, as in the case of Massey and Denton, who proposed five dimensions: uniformity, exposure, concentration, centralization, and conglomeration [3]. Other authors, on the other hand, proposed only four measurement parameters, since they understood centrality and conglomeration as a single set [4]. Several theories, concepts, and methods have been used globally in the study of spatial segregation; in the context of Latin America, research is quite conclusive. In Mexico, for example, the researcher Schteingart categorizes urban segregation studies into five themes: 1) urban expansion and population growth; 2) urban services and roads; 3) historical structures; 4) different sectors within the city; and 5) gated communities [5]. Likewise, one of the contemporary approaches to spatial segregation, which has not been as widespread, focuses on analyzing the "luxuries" and needs of social groups in terms of different lifestyles [6].

Throughout the sixties and seventies, poverty was studied through the concept of marginality, and the previous paradigms on social structures and vulnerable neighborhoods were transformed [5]. In the case of Santiago de Chile, ECLAC and DESAL decided to investigate segregation from the perspective of marginal neighborhoods, and studies on poverty were carried out from the historical and cultural dimensions [1]. During the 1990s, the concept of exclusion was on the rise, replacing the marginality approach; the results of these new studies showed more clearly the link between social groups and structural organization [1]. Saraví [7] refers to two specific concepts of exclusion: the biographies of exclusion and the accumulation of disadvantages. In this sense, exclusions occur in various dimensions, i.e., as an overlapping and accumulation of disadvantages for isolated groups. Exclusion as a concept not only addresses the socioeconomic aspect but also opens up to cultural factors, such as the identity of a social group, and highlights the issue of migration [5].

Urban segregation can generally be understood as the separation of two or more residential areas located in different parts of the city. The distance or proximity depends on factors such as race, culture, and migratory origin, among others; however, in Latin America, the main focus is socioeconomic [8]. According to Schteingart [5], the concept of segregation is evident in the most vulnerable groups and is also present among those with greater purchasing power. For the first group, segregation goes beyond marginality: it implies neighborhoods without essential services, lack of public transportation, gender isolation, and, generally, scarcity of opportunities. For Préteceille [9], spatial segregation stems from macro-structural factors and individual causes. For example, the labor and land markets impact the value of housing, dividing social groups according to their purchasing power. These structural factors, which force less privileged social groups to isolate themselves, could be the leading cause of "active segregation", understood by several authors as the result of a choice stemming from the exclusion exercised by the dominant groups [10]. Regarding personal causes,


segregation is generated due to the tastes or preferences of family groups that can choose freely within the residential offer [9].

2 Related Works

This issue has been explored in various parts of the world, depending on the approaches considered. The reasons for Tehran's great geographical and social diversity have been studied using spatial analysis and inferential statistics, producing distributions that visualize how the most vulnerable groups are located in specific neighborhoods. Similarly, in [11], the socio-spatial patterns of the population of Tehran are studied, since the city has a very prominent cultural and religious diversity. Even the construction of buildings or railroad stations can generate changes in the geographic distribution, as shown by Forouhar et al. [12]. In [13], the authors study the interaction of the population of Karanrang Island, Indonesia; according to this research, several factors influence social participation in this place. In the Latin American context, research developed in intermediate Mexican cities analyzes urban segregation through the evolution of urban morphology and transformations in the population's lifestyle [14]. Another Mexican case proposes a methodology developed from the concept of spatial justice, incorporating other spatial aspects linked to relative living conditions (RLC) [15]. Another study on spatial segregation in the region is that of Mayorga Henao and Ortiz Veliz [16], which establishes the relationship between socio-spatial patterns and inequalities in access to "city rights" and, at the same time, their influence on quality of life; the methodology used was based on three processes [16]. In the local context, there are similar investigations, such as that of Orellana and Osorio [8], developed in Cuenca, Ecuador, where the existence of socio-spatial segregation is determined and its level is analyzed through spatial analysis and statistical techniques framed within quantitative geography.

The study of the phenomenon of socio-spatial segregation in the territories is not recent; however, the current evidence is not conclusive. As the literature presented shows, each locality has its own characteristics, and different analyses can be made even in the same place. This has motivated the development of the present study, which proposes to analyze whether there are social differences in the study population. For this purpose, we suggest using computational algorithms that convert census data into easily interpretable graphs. This paper consists of five sections, including this introduction in Sect. 1. The related work is described in Sect. 2, the materials and methods in Sect. 3, the results in Sect. 4, and the discussion and conclusions in Sect. 5.

3 Materials and Methods

3.1 Materials

Data from the 2010 national census of the Ecuadorian Institute of Statistics and Census (INEC) were used to acquire the information. For this study, a differentiation must be made between the diversity and the segregation that could be occurring. As previously mentioned, segregation in this continent is usually based on economic factors.


For this evaluation, both the urban and rural areas of Ambato, Ecuador, were considered. The living conditions index (LCI) was used as a reference to demonstrate social and spatial stratification on a scale between 0 and 1. The following software was used to perform the computational calculations:

• RStudio is free software that allows statistical analysis and graphing and installs packages and libraries according to the requirements. It was used for data processing, reassignment of variables, and calculating the living conditions index.
• ArcGIS is a software package that provides mapping and spatial reasoning options in Geographic Information Systems (GIS). It allows the extraction of spatial weights and shows the close relationship between spatial units. It also allows evaluating the LCI values, the obtained quartiles, and the spatial autocorrelation and, on this basis, generating the respective maps with different shades.

3.2 Methods

The approach chosen for this research is epistemological, since it shows how the population is distributed based on social and economic characteristics. Spatial segregation was studied using quantitative data from variables and calculations. At the same time, the topic is examined using a deductive approach, given the exploration that had to be carried out. The starting point is the pertinent bibliographic review that allows in-depth knowledge of the parameters that must be analyzed in this application. The quantitative approach establishes the sociodemographic behavior of the population studied. It should be mentioned that these variables have a descriptive scope, since they show social information, stratification, economy, health, and spatial distribution. They are related and allow the establishment of the LCI, a reference measure that shows the level of abundance or need of the people living in a dwelling. The initial step is to characterize and distribute people at a specific level of spatial disaggregation to identify the most notable differences based on the abovementioned characteristics. Using the LCI, levels or ranges can be established to facilitate the interpretation of results. The technical basis for obtaining the index has been used previously in other studies with similar approaches for different samples [8]. The variables available in the INEC census database that could provide relevant information have been reviewed; they have been grouped into two levels, as shown in Table 1.

3.3 Algorithm Development

The script, adapted from Orellana and Osorio, allows calculating the living conditions index (LCI, or ICV for its Spanish acronym) in the R language from Population and Housing Census data [8]; an illustrative sketch of the aggregation is given after Table 1. The code is developed in the RStudio editor and starts by loading the specific libraries necessary for the operations. "readxl" is used to import files from Microsoft Excel, and "lubridate" is used to process time and date data. The "dplyr" package speeds up the handling of data files, especially strings, and makes it easier to convert data between wide and long formats. "stringr" was used to detect patterns in the text, and "plyr" was used to manipulate the data through various tools. Subsequently, the data from the INEC database corresponding to the province of Tungurahua (the region where the

Table 1. Variables and weights used to calculate the living conditions index (LCI).

Level 1 (W) | Level 2 (W) | Variable (W)
Social security indicator (health) (1/4) | – | Differentiation of affiliates (IESS, ISSFA, ISSPOL) (1/3)
 | – | Social security for household members (includes spouse and children and private insurance) (1/3)
 | – | Social security for housing (1/3)
Education indicator (1/4) | – | Years of education by age (1/3)
 | – | Education of each person in the household¹ (1/3)
 | – | Education per household (1/3)
Housing indicator (1/4) | Average overcrowding per household | Overcrowding: bedrooms (1/3)
 | | Single-person households (1/2)
 | | Number of persons per bedroom¹,² (1/2)
 | | Total overcrowding (1/4)
 | Integrated housing quality indicator | Existence of an exclusive kitchen for the household (1/4)
 | | Existence of an exclusive bathroom (1/4)
 | | Existence of an extra room per household (1/4)
 | | Housing quality³ (1/4)
 | | Roof quality³ (1/4)
 | | Wall quality³ (1/4)
 | | Quality of floors³ (1/4)
Basic services indicator (1/4) | Integrated water and sanitation indicator (1/3) | Water availability (1/4)
 | | Water quality (1/4)
 | | Drainage quality (1/4)
 | | Waste disposal (1/4)
 | Energy efficiency of housing (1/3) | Weighted sum of fuel and electricity (1/2)
 | | Electricity availability (1/2)
 | Integrated communication indicator (1/3) | Landline telephone availability (1/4)
 | | Cell phone availability (1/4)
 | | Internet availability (1/4)
 | | Availability of cable television (1/4)

Note: W = weight; 1 = recoded variable; 2 = capped at a maximum of two persons per room; 3 = rescaled to a range of 0 to 1.
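Before the description of the script resumes, a small illustrative sketch of how the Level-1 weights in Table 1 combine into a household-level index and how the quartiles are then derived. This is written in Python/pandas rather than the authors' R script, and the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical household-level frame: each column is a Level-1 indicator already
# recoded/rescaled to the 0-1 range, as described in Sect. 3.3.
households = pd.DataFrame({
    "social_security": [0.3, 0.8, 1.0, 0.5],
    "education":       [0.6, 0.9, 0.7, 0.4],
    "housing":         [0.5, 0.9, 0.8, 0.3],
    "basic_services":  [0.7, 1.0, 0.9, 0.2],
})

# Level-1 aggregation: the four indicators enter the LCI with weight 1/4 each (Table 1).
weights = {"social_security": 0.25, "education": 0.25,
           "housing": 0.25, "basic_services": 0.25}
households["LCI"] = sum(w * households[col] for col, w in weights.items())

# Segment into quartiles (Q1..Q4) for the uniformity analysis of Sect. 4.2.
households["quartile"] = pd.qcut(households["LCI"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
print(households)
```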

city studied is located) is imported, and only the information corresponding to Ambato is selected. We recoded those variables that required prior treatment, while the others were only assigned to a local variable. Most variables are categorical, so a transformation to metric scales was performed [17]. Once the responses have been ordered (lowest to highest), they are assigned a cardinal value. According to the conditions in Table 1, some conditioning and range changes were made to the variables. Subsequently, in an organized and hierarchical manner, the calculations are carried out to obtain the values of the four main indicators. The variables corresponding to the number of rooms in the home and the years of education have values greater than '2' and, therefore, must be rescaled. We then obtain the average of the social security, education, housing, and basic services indicators. Once the value of the LCI is available, the data is segmented into quartiles, allowing further analysis. A range of '0' to '2' is used, where '1' is considered an optimal state and lower values indicate household deprivation; those with a value higher than '1' evidence an abundance of resources. Finally, the information is checked, and a new Microsoft Excel document is generated, where the data is summarized.

3.4 Case Study

This research is carried out in the city of Ambato, located in the central sector of the Andean region of Ecuador. It is the capital of the province of Tungurahua and one of the most populated cities in the country, with a population of 395,893 people. The city was founded in 1534, and in July 2005 its historic center was declared part of the cultural heritage of Ecuador. An earthquake was recorded in August 1949 on the fault near Pisayambo, with a magnitude of 6.8 on the Richter scale and a depth of less than 15 km. This seismic event had a high level of destruction that affected 75%


of the buildings. It also affected other nearby cantons such as Píllaro (90%), located 23 km away, Pelileo (100%), 22 km away; and Guano (80%), 53 km away. This was one of the natural phenomena that most affected Ecuador in the 20th century, generating incalculable losses and, thus, socioeconomic changes that lasted for decades.

4 Results

4.1 Living Conditions Index

A formal way to present the spatial distribution of a group of people is through their residential location. As a result, an Excel document has been generated with data from almost 45 thousand households, where the average LCI value has been defined for each census sector. In addition, the housing code, the sector code, and the quartiles are generated based on the number of dwellings. With this information, a geographic distribution map is generated, as shown in Fig. 1, where values between 0.77 and 0.90 predominate in the west of the city.

Fig. 1. Geographical representation of average LCI values in Ambato.

Those sectors with lower-than-expected levels are located in the city’s southern, northern, and northeastern sectors. Sectors with lower LCI values can also be seen in the


city’s center; the rest are distributed without an evident concentration. In contrast, those households that reach the average value or higher are located in the Ficoa, Miraflores, and España neighborhoods, in the northwest of the city. The dark colors represent abundance, while the light colors represent scarcity of resources. This confirms the theory that people with poorer living conditions cluster in specific city sectors, forming segregation groups in the peripheries. The same occurs with those who have abundant resources; this generates high segregation and reduced interaction among the population, considering their place of residence. 4.2 Uniformity Another important aspect is determining the values’ uniformity for the different population groups. As can be seen in Table 2, there is no uniform distribution since most households (Q1, Q2, and Q3) have limited access to resources. These values could be

Fig. 2. Representation of spatial segregation according to each quartile: (a) Q1; (b) Q2; (c) Q3; (d) Q4.


affected by the differences in the sizes of the population groups. When analyzing the range of the LCI by quartiles, it can be seen that both Q4 and especially Q1 concentrate most of the population, while the Q2 and Q3 quartiles show smaller differences. This indicates that two segregation processes are taking place at the same time. On the one hand, Q1 and Q4 show considerable active segregation in the lower- and higher-conditioned population, respectively. On the other hand, this component is weaker in the intermediate-level groups (Q2 and Q3), as presented in Fig. 2.

Table 2. LCI distribution in quartiles.

Quartile | Average LCI | Minimum and maximum | Population in need
Q1 | 0.565 | 0.085 to 0.659 | Yes
Q2 | 0.716 | 0.659 to 0.773 | Yes
Q3 | 0.853 | 0.773 to 0.928 | Yes
Q4 | 0.993 | 0.929 to 1.110 | No

4.3 Centralization

Ambato's population tends to be concentrated in the southwest of the city; however, the LCI there corresponds to 0.81–0.90. This can be better appreciated in Fig. 3.

Fig. 3. Concentration levels of population: (a) map of Ambato; (b) population concentration.

The correlation between the LCI and the population of the different sectors of the city (spatial correlation) is low and positive, as shown in Fig. 4. In contrast, the Moran index is equal to 0.868, which reveals a solid spatial component in the LCI, as shown in Table 3. This confirms a very well-defined spatial structure with respect to the different social groups.


Table 3. Moran index calculation summary.

Indicator | Value
Moran's index | 0.868243
Expected index | −0.000019
Variance | 0.000012
z-score | 251.948071
p-value | 0.000000

Fig. 4. Geographical representation of the average values of the LCI in Ambato. [Scatter plot: Population vs. ICV; fitted trend line y = 39.851x + 60.032, R² = 0.001.]
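To illustrate the global Moran index reported in Table 3, a minimal Python sketch follows. The spatial weights matrix here is a hypothetical row-standardized contiguity matrix, not the one extracted with ArcGIS, and the sector values are toy data.

```python
import numpy as np

def morans_i(values, W):
    """Global Moran's I for a variable (e.g., sector-level LCI) and a
    row-standardized spatial weights matrix W (hypothetical here)."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    n = x.size
    num = n * np.sum(W * np.outer(z, z))   # n * sum_ij w_ij * z_i * z_j
    den = W.sum() * np.sum(z ** 2)         # S0 * sum_i z_i^2
    return num / den

# Toy example: 4 census sectors in a chain, contiguity weights, row-standardized.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)
lci = np.array([0.55, 0.60, 0.85, 0.90])
print(morans_i(lci, W))   # positive values indicate positive spatial autocorrelation
```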

5 Discussion and Conclusions

The study of the phenomenon of socio-spatial segregation in the territories is not recent. Such studies provide relevant information on how the most affluent and the most vulnerable people are concentrated geographically [18]. Vulnerability is often confused with poverty, but it also includes other minority groups, such as people with disabilities, indigenous communities, and migrants, among the main ones. Those who belong to another nationality, speak another language, or profess another religion tend to gather with their peers, further contributing to segregation. Another factor considered is home ownership, since people who rent are likelier to have lower incomes. There are also old buildings prone to deterioration (so-called physical vulnerability); some of these, in the center of the city, concentrate people of bad reputation. The city of Ambato currently has an area of 46.5 km², including nine urban and 18 rural parishes. Since the last century, owing to the third industrial revolution, it has become a strategic site for trade and agriculture in the center of the country. This caused a rural exodus, which generated demographic and social changes in the area. Information from the 1990 statistical census shows that there were 110,522 inhabitants, and now there are


395,893, i.e., an increase of 258.2% in 32 years. As indicated by Torres-Oñate et al. [19], population growth has led to changes in construction and an expansion of the city, mainly to the south. It is undeniable that, as in every metropolis in the world, wealthy people are concentrated in specific, well-defined sectors. It should also be noted that this area has a strong presence of indigenous populations and communities, which should be analyzed as a social phenomenon. The Tomabela people stand out, maintaining millenary customs that relate man, woman, and nature as a sacred trilogy. We can also find the Chibuleo people and the Quisapincha community, whose most significant festival is the Inti Raymi or Festival of the Sun, a religious ceremony that takes place during the winter solstice in the Andes. Despite the questions that some authors have raised about analyzing segregation indexes based on census data, these measures are sufficient to show clear results, and they become a reference for future, more in-depth and updated studies. For a high level of disaggregation, it was necessary to carry out a separation by blocks, respecting what is established in socio-spatial segregation theory. This study has shown that active spatial segregation processes could reduce citizens' quality of life. However, most of the literature used other parameters to identify the existence of segregation; the greater the cultural diversity, the greater the possibility of this phenomenon [11–15, 20]. In the local context, similar results were obtained in Cuenca, where there is a marked distinction between the Q1 and Q4 quartiles [8]. Although this research is based on quantitative data, other conditions should be explained to avoid confusion or misinterpretation. The main limitation was the use of data that are not up to date, since there are no recent censuses in the country; according to information from INEC, a new census will be conducted at the end of 2022 to know the country's reality. The advantage of the methodology is that the values used come from a national census, so the approach could be applied in other cities of the country. For this reason, the authors of this study propose, as future work, the execution of comparisons with similar studies.

References

1. Figueroa, A.R., Robles, M.S., Downey, F.S., Quiero, G.C., Trebilcock, M.P.: From segregation to residential exclusion: where are the new poor households (2000–2017) in the city of Santiago, Chile? (2021). https://doi.org/10.5354/0717-5051.2021.55948
2. Cortese, C.F., Falk, R.F., Cohen, J.K.: Further considerations on the methodological analysis of segregation indices. Am. Sociol. Rev. 41, 630 (1976). https://doi.org/10.2307/2094840
3. Massey, D.S., Denton, N.A.: The dimensions of residential segregation. Soc. Forces 67, 281–315 (1988). https://doi.org/10.1093/sf/67.2.281
4. Buzai, G., Baxendale, C., Rodríguez, L., Escanes, V.: Distribución y segregación espacial de los extranjeros en la ciudad de Luján. Un análisis desde la Geografía Cuantitativa [Distribution and spatial segregation of foreigners in the city of Luján: an analysis from quantitative geography]. Signos Univ. 29–52 (2003)
5. Schteingart, M.: La división social del espacio en las ciudades [The social division of space in cities]. Perfiles Latinoam. 19, 13–31 (2001)
6. Bourdieu, P.: Distinction: a social critique of the judgement of taste. In: Inequality: Classic Readings in Race, Class, and Gender, pp. 287–318. Routledge, London (2018). https://doi.org/10.4324/9781315680347-10


7. Saraví, G.A.: Nuevas realidades y nuevos enfoques: exclusión social en América Latina [New realities and new approaches: social exclusion in Latin America]. In: De la pobreza a la exclusión: continuidades y rupturas de la cuestión social en América Latina, pp. 19–52. Prometeo Libros (2006)
8. Orellana, D., Osorio, P.: Segregación socio-espacial urbana en Cuenca, Ecuador [Urban socio-spatial segregation in Cuenca, Ecuador]. Rev. Análisis Estadístico Analítika 8, 27–38 (2014)
9. Préteceille, E.: Ségrégation, classes et politique dans la grande ville [Segregation, classes and politics in the large city]. In: Villes en Europe, pp. 99–127. Paris (1997)
10. Gallissot, R., Moulin, B.: Les quartiers de la ségrégation: Tiers Monde ou Quart Monde? [The neighborhoods of segregation: Third World or Fourth World?]. Paris (1995)
11. Ghazaie, M., Rafieian, M., Dadashpoor, H.: Exploring the socio-spatial patterns of diversity and its influencing factors at a metropolitan scale. J. Urban. 13, 325–356 (2020). https://doi.org/10.1080/17549175.2019.1677263
12. Forouhar, A., Zamani, B., Rafieian, M.: Socio-spatial transformation of neighbourhoods around rail transit stations: an experience from Tehran, Iran. Bull. Geogr. Socio-econ. Ser. 55, 7–15 (2022). https://doi.org/10.12775/bgss-2022-0001
13. Ishak, R.A., Trisutomo, S., Wikantari, R., Harisah, A.: Socio-spatial relation in small island (case study: Karanrang island, South Sulawesi, Indonesia). Civ. Eng. Archit. 9, 2326–2337 (2021). https://doi.org/10.13189/cea.2021.090720
14. López, C.F.R., Méndez-Lemus, Y.M., Medrano, J.A.V.: Methodological approach to analyze social-spatial segregation in the peri-urban areas of Mexican intermediary cities. Estud. Geogr. 82, 60 (2021). https://doi.org/10.3989/ESTGEOGR.202072.072
15. Campos-Alanís, J., Ramírez-Sánchez, L.G., Garrocho, C.: Inclusion of the spatial variable in the measurement of relative living conditions in Mexican cities. Papeles Poblac. 26, 53–88 (2020). https://doi.org/10.22185/24487147.2020.103.03
16. Henao, J.M.M., Véliz, J.O.: Segregation and inequality in the access to education, culture, and recreation services in Bogotá, Colombia. Cuad. Geogr. Rev. Colomb. Geogr. 29, 171–189 (2020). https://doi.org/10.15446/rcdg.v29n1.73395
17. García-Magariño, I., Gonzalez Bedia, M., Palacios-Navarro, G.: FAMAP: a framework for developing m-health apps. In: Rocha, Á., Adeli, H., Reis, L.P., Costanzo, S. (eds.) WorldCIST'18 2018. AISC, vol. 745, pp. 850–859. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77703-0_83
18. Salazar, F.W., et al.: Prototype system of geolocation educational public transport through Google maps API. In: Gervasi, O., et al. (eds.) ICCSA 2020. LNCS, vol. 12254, pp. 367–382. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58817-5_28
19. Torres-Oñate, F., Viteri, M.F., Infante-Paredes, R., Donato-Moreira, S., Tamayo-Soria, R., Núñez-Espinoza, M.: Heritage cooking as tourist motivation: Ambato case study. In: Stankov, U., Boemi, S.-N., Attia, S., Kostopoulou, S., Mohareb, N. (eds.) Cultural Sustainable Tourism. ASTI, pp. 109–114. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-10804-5_11
20. Ghazaie, M., Rafieian, M., Dadashpoor, H.: Making the invisible segregation of diverse neighbourhoods visible. J. Hous. Built Environ. 37, 459–482 (2022). https://doi.org/10.1007/s10901-021-09850-z

Expert System with Facial Recognition Implemented in Human-Machine Conversation Services for the Automation of Multi-platform Remote Processes in the Identification of People Reported Missing

Jefferson Panchi-Chacón(B), Cindy Ortiz-Araujo, and Milton Patricio Navas-Moya

Universidad de las Fuerzas Armadas ESPE, Latacunga, Ecuador
{jfpanchi,cportiz3,mpnavas}@espe.edu.ec

Abstract. In Ecuador, the disappearance of people is a problem experienced by thousands of families. However, the advancement of technology supports the progress of the public and private organizations working on this type of case. Therefore, the need for technological tools to identify missing people was addressed by developing a software system that integrates facial recognition into a chatbot; these services improve the identification of people reported as missing. Information about missing people is examined and retrieved from official sources in order to capture, filter, organize, and select data highlighting the face in an image format, so that it can be compared, through facial recognition, with pictures of possible matches sent to the chat, and reports of the individuals subsequently identified can be generated. The results obtained by evaluating the system with the metrics of accessibility, efficiency, and speed helped to validate the design; at the same time, the facial recognition module allowed the testing of three models: the Single Shot Multibox Detector (SSD), the Tiny Face Detector (TFD), and Multi-Task Cascaded Convolutional Neural Networks (MTCNN). The evaluation showed that MissingApp optimizes the time needed to identify people reported as missing.

Keywords: Missing · Multi-platform · Facial recognition · Detection · Chatbot

1 Introduction

The chronology of the problem of people reported missing around the world is immense. It is a frequent and perceptible occurrence in society; disappearance is not tied to a time or location, nor does it happen only in times of war [1]. In Ecuador, the disappearance of people affects indigenous families, Afro-Ecuadorians, and mestizo society. This is reflected in the difficulty of accessing justice and in the lack of full effectiveness of the pronouncements of the competent authorities [2]. The Ministry of Government and the Attorney General's Office of Ecuador state on their websites that there were 57,397 reports of disappearances from 1947 to 2019.


Among these disappearances, 1392 are still under investigation [3]. These data raise doubts about the reliability of the figures. Therefore, it is necessary to improve the process of identifying people reported missing, since this social phenomenon affects the whole country [4]. In January 2021, the National Directorate of Crimes against Life, Violent Deaths, Disappearances, Extortion and Kidnappings (DINASED) counted 256 cases of missing people in Ecuador; of these, more than 200 have been found. This is a trend that has been maintained since 2018; since then, 15,522 people have disappeared in Ecuador, and 95% of them were found. Table 1 and Fig. 1 show the registered cases of people reported missing during 2022, with a total of 3124, of whom 2743 have been found and 282 are still missing. In addition, it can be seen that the number of reports increases each month. According to DINASED, one of the reasons that helps to locate missing people is that most of them disappear voluntarily [5].

Table 1. Missing people statistics for the year 2022.

Month/state | Missing | Found | Dead | Total
January | 32 | 591 | 16 | 639
February | 46 | 557 | 27 | 630
March | 58 | 596 | 19 | 673
April | 67 | 537 | 20 | 624
May | 79 | 462 | 17 | 558
Total | 282 | 2743 | 99 | 3124

Fig. 1. Missing people statistics for year 2022.

The advancement of technologies promotes the progress of organizations in all aspects. This progress has been related to the use of today’s most valuable asset, also


referred to as "the oil of the 21st century": data. Personal data falls within the category of information that is generated, processed, and stored by entities, and investigations of missing people also involve data of a sensitive nature [6]. According to Wolfe [7], the disappearance of people is a social problem that should be a priority for the State, which used to focus only on cases of people who had visibly disappeared from official view. In addition, the use of technology has boosted the development of projects such as [8] and [9], which implement technological solutions for the tracking and identification of people reported as missing, improving the current system for searching for missing people in India and developing an experimental design that combines facial recognition with deep learning through a convolutional neural network, respectively.

In 2020, DINASED created the "Alerta Desaparecidos" (Missing People Alert) application to collaborate in the search process by providing a rapid response when a case is reported. This application is managed by the National Police; therefore, there is continuous interaction between police officers, the community, complainants, and public and private institutions: hospitals, shelters, foster homes, private clinics, medical centers, nursing homes, morgues, and cemeteries. It is easy to use and free, which allows citizens to have an efficient interaction that facilitates the search. Thanks to technological advances, it is possible to use them in favor of this social problem, and there is still much to be done, taking into account that it is a problem that affects not only one person but their entire family and close acquaintances or friends [10].

Disappearance affects countless Ecuadorians, regardless of their characteristics; against this background appears the Association of Relatives and Friends of Disappeared People in Ecuador (ASFADEC), an organization that promotes the dissemination of and awareness about the problem and struggles to locate the missing so that they are not seen as just one more statistic in an archive, working on actions of memory, truth, and justice [11]. This problem is the ideal context in which to develop a tool that improves the identification process of people reported missing. The information on missing people handled by the association corresponds to personal data published through its website and social networks, which limits the identification and recognition process. This project focuses on developing a computer system that improves this procedure, reducing the complexity of handling these figures and speeding up the publication of reports on identified missing people. In addition, it makes it easier for those responsible to present relevant results before reporting to State entities.

2 Methods

Several aspects were considered to develop and implement the expert system with facial recognition in human-machine conversation services for the automation of multi-platform remote processes when identifying people reported missing, such as data collection and management and the visualization of the results obtained [12]. The Missing system was developed based on a structure divided into three modules: MissingApp, MissingBot, and MissingApi. The following figure explains the basic general process of operation of the expert system, where the reporter sends, through


the social networks Messenger and Telegram, the picture of the person suspected of being the missing person, which is received by the MissingBot module. Once the picture is received, it is evaluated by the MissingApi facial recognition module. The results are displayed on the MissingApp web page, which the volunteer and the system administrator manage (Fig. 2).

Fig. 2. General scheme of the system.

Once the general operation has been explained, the system architecture is detailed using the C4 model, where the context levels and system containers are described for a better understanding [13]. Figure 3 shows the interaction of people with the Missing system and the Desaparecidos Ecuador external system, where official information on people reported missing is obtained.

Fig. 3. System context diagram for Missing system.

Also, the following figure shows the system container diagram corresponding to level 2 of the C4 model, which expands the Missing system and offers all the containers involved in the development and the technologies used (Fig. 4).


Fig. 4. Container diagram for Missing system.

In addition, it can be seen that the Missing system consists of seven containers. The first one is MissingBot, on the server side, which obtains the intents and entities from the second container, DialogFlow, which is responsible for providing information to the first component and, in turn, directs the user to the corresponding chat, the third container. This chat sends the requests to the fourth component, the Missing back-end, located on the server side. MissingBot sends records and reports to the fifth component, the MissingApp front-end, a client-side web application to which the MissingApp back-end sends data about people reported missing. The MissingApp back-end sends records and reports to the sixth component, the database; to do this, the seventh component, MissingApi, intervenes, performing the facial recognition and sending the facial features to the MissingApp back-end.

Missing consists of a web application, a chatbot, and facial recognition. The web application was divided into front-end and back-end, developed with React and Express.js, respectively, and all data are stored using the MongoDB database, completing the stack of technologies known as MERN (MongoDB, Express.js, React, Node.js). For the development of the chatbot, DialogFlow was used to facilitate natural language processing and to integrate it into the Telegram and Facebook Messenger social networks, taking advantage of the same database and server as the web application [14]. Finally, for the identification of people reported missing, there is an artificial intelligence-based face recognition process, in which the SSD model, which increases inference speed by eliminating the need for a Region Proposal Network (RPN) [15], is compared to the TFD model, a real-time face detector that consumes fewer


resources than the SSD face detector [16]. Experimentation was also carried out with the MTCNN model, a three-stage cascaded CNN that simultaneously returns five facial landmarks along with the bounding boxes and scores for each face [17], in order to determine the performance of each of the models. The dataset was obtained by scraping the official sources Desaparecidos Ecuador and ASFADEC, collecting the pictures of the people reported as missing together with their personal data. For deployment, the entire infrastructure was distributed on the Microsoft Azure cloud computing platform, so that all services could be brought up with high adaptability in a production environment [18]; ASFADEC members will supervise this. The implementation of the system was organized within the Azure cloud infrastructure, where several services were scheduled, starting with a timer-triggered application function, Missing-fun-scraping, developed in JavaScript, which runs every 24 h to retrieve information about missing people. The App Services service was used to deploy two web application modules, Missing-app-web and Missing-bot-chatbot, which were also developed in JavaScript and launched on Node.js servers. In addition, an internal virtual machine resource was created to run the FaceApi.js library on TensorFlow and perform the facial recognition and identification process. Finally, the incorporation of MongoDB Atlas facilitated the communication of the data with the cloud storage, allowing its connection with all the resources to keep the information updated (see Fig. 5).

Fig. 5. Deployment diagram.
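The matching step itself can be illustrated conceptually as follows. The actual system performs detection and recognition with face-api.js in Node.js, so this Python sketch, with hypothetical 128-dimensional descriptors and an assumed distance threshold, is only a schematic of comparing a probe photo sent to the chatbot against the gallery of missing-person descriptors built from the scraped posters.

```python
import numpy as np

def best_match(probe, gallery, threshold=0.6):
    """Return (person_id, distance) for the closest gallery descriptor,
    or (None, distance) when no descriptor is within the threshold.
    `gallery` maps person ids to 128-D face descriptors (hypothetical)."""
    ids = list(gallery.keys())
    descriptors = np.stack([gallery[i] for i in ids])
    distances = np.linalg.norm(descriptors - probe, axis=1)   # Euclidean distances
    k = int(np.argmin(distances))
    if distances[k] <= threshold:
        return ids[k], float(distances[k])
    return None, float(distances[k])

# Toy gallery of descriptors derived from the official posters (placeholder vectors).
rng = np.random.default_rng(1)
gallery = {"missing_001": rng.normal(size=128), "missing_002": rng.normal(size=128)}
probe = gallery["missing_002"] + 0.03 * rng.normal(size=128)  # photo sent to the chatbot
print(best_match(probe, gallery))
```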

3 Results

This section presents the results obtained by evaluating the system using three parameters: accessibility, efficiency, and speed, in order to guarantee the quality of the product and avoid a deficient level of usability [19].


3.1 Accessibility

This parameter has two aspects: the access time to search for and download posters of people reported as missing, and the access time to search for and download information about people identified as missing. For both, the MissingApp system was compared with the system currently used by ASFADEC, Desaparecidos Ecuador.

Fig. 6. Access time to search and download posters of people reported missing from a cellphone.

Fig. 7. Access time to search and download posters of people reported missing from a computer.


Fig. 8. Access time to search and download information of identified missing people from a cellphone.

Fig. 9. Access time to search and download information of identified missing people from a computer.

3.2 Efficiency

This parameter has two aspects: the average time to identify missing people and the average time to send and receive messages from the chatbot. For the first aspect, the three identification models (SSD, TFD, and MTCNN) were compared; for the second, the chatbots on two social networks (Facebook Messenger and Telegram) were used (Figs. 10 and 11).

3.3 Speed

This parameter considers two aspects: the identification time for facial recognition per face, and the alert notification time for the identified person, where


Fig. 10. Average time to identify people reported missing using the SSD, TFD, and MTCNN models after obtaining the necessary data through the chatbot.

Fig. 11. Average time to send and receive messages from the chatbot.

Fig. 12. Identification time for face recognition using the SSD, TFD, and MTCNN models.

the three identification models (SSD model, TFD model, and MTCNN model) were compared for both aspects.
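Per-model timings such as those summarized in Figs. 10, 12 and 13 could be collected with a simple harness like the following sketch; the detector wrappers are hypothetical stand-ins, since the real measurements were taken on the deployed face-api.js models.

```python
import time
import statistics

def time_detector(detect_fn, images, repeats=5):
    """Average wall-clock time (in ms per image) of a face-detection function."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        for img in images:
            detect_fn(img)
        samples.append((time.perf_counter() - start) * 1000 / len(images))
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical detector wrappers (the real system calls SSD, Tiny Face Detector and
# MTCNN through face-api.js); here they are stand-ins for illustration only.
def detect_ssd(img): pass
def detect_tfd(img): pass
def detect_mtcnn(img): pass

images = [object()] * 10   # placeholder "images"
for name, fn in [("SSD", detect_ssd), ("TFD", detect_tfd), ("MTCNN", detect_mtcnn)]:
    mean_ms, sd_ms = time_detector(fn, images)
    print(f"{name}: {mean_ms:.2f} ms per image (sd {sd_ms:.2f})")
```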


Fig. 13. The SSD, TFD and MTCNN models’ identified person alert notification time.

4 Discussion

In the analysis of accessibility, it is observed that the MissingApp application has a lower access time for searching for and downloading posters of people reported as missing than the Desaparecidos Ecuador application. Even though Desaparecidos Ecuador does not require registration, a log-in, or the HTTPS protocol, its search and data-retrieval times are higher than MissingApp's. Regarding devices, access to the web application from a computer is faster than access from a mobile device, demonstrating that the application developed in this work offers good accessibility to the user (see Figs. 6 and 7). Moreover, as part of accessibility, the access time to search for and download information about identified missing people was evaluated, showing that MissingApp is much faster than Desaparecidos Ecuador in all the aspects mentioned, and that access from a computer is much quicker than from a mobile device (see Figs. 8 and 9).

In terms of efficiency, the three models (SSD, TFD, and MTCNN) were evaluated to see how the facial recognition module interacts with the application. MissingApp uses the TFD model, which, when assessing the time it takes to identify people reported missing, is much faster than the other models (see Fig. 10). Another critical point was to analyze the average time to send and receive messages from the chatbot, contrasting the social networks Telegram and Facebook Messenger. The aspects considered were sending text, sending the picture of the person reported missing, and sending the location; the Telegram application stood out, showing that it is much faster in all aspects (see Fig. 11).

Finally, when analyzing speed, the three models (SSD, TFD, and MTCNN) were used again, focusing on two aspects: the face recognition identification time and the alert notification time for the identified person. For the first aspect, it can be seen that, for both successful and unsuccessful face detections, the TFD model is much faster, although the difference between the SSD and TFD models is minimal (see Fig. 12). For the alert notification time, the TFD model is also the fastest (see Fig. 13).


5 Conclusions

This paper presents the development and validation of an expert system with facial recognition, implemented in human-machine conversation services, to automate multiplatform remote processes when identifying people reported missing. The system's efficiency allows ASFADEC volunteers to handle data better and enables users to file reports more quickly and dynamically through chatbots. The quality parameters analyzed showed that MissingApp performs well when searching and retrieving information from the chatbot or the database. In addition, the TFD model used for the facial recognition module is the fastest when identifying an image, making MissingApp a high-performance system. Moreover, the use of an architecture for modeling the system helped with the vision of the product for its implementation and with adopting the MERN technology for the development of the web application; it is therefore essential to have a good architecture when presenting the results of a system and its approach to the objectives. The analysis in this article shows that artificial intelligence can automate processes such as the recognition of a person identified as missing within the ASFADEC database, a task that was previously performed manually and was prone to errors. For this reason, it is recommended to deepen the topic of facial recognition and, in later work, improve the algorithm to speed up the process. As future work, the application could also be expanded into a multiplatform system, converting the chatbot module into a microservice to ensure the availability of the server.


Communications

Radar Probability of Detection in Multipath Environments

Juan Minango1(B), Andrea Flores2, Marcelo Zambrano1, Wladimir Paredes Parada1, and Cristian Tasiguano1

1 Instituto Tecnológico Universitario Rumiñahui, Sangolquí, Ecuador
[email protected]
2 State University of Campinas, Campinas, Brazil

Abstract. In this paper, we derive new analytical expressions for the probability of detection for non-fluctuating (Swerling 0) and fluctuating (Swerling 1) targets when more than one specular reflection occurs. These results allow us to investigate the influence of the multipath and compare it to the known results in the literature for free space and single reflection cases. Different from the conventional approach, our novel expressions allow the estimation of the probability of detection for specific targets in a multipath scenario, along with a specular reflection search algorithm. The new derived expressions allow inferring that the constructive effect of the specular reflections is advantageous in the detection of targets with lower SNR, improving the detection capability.

Keywords: Multipath · Irregular terrain · Specular multipath algorithm · Probability of detection

1 Introduction

The probability of detection of the radar is strongly affected by the topography and topology of the environment. Several radio propagation phenomena occur due to the terrain, one of them being the reflection of the waves, leading to multipath propagation. In radars, the pattern propagation factor is reserved for handling the phenomenon of multipath propagation, which often causes very large departures from the free-space field intensity [1]. Various detectors related to the free-space conditions have been well studied in the literature and used as a reference for the performance prediction of radar systems [2]. In order to take into account the multipath effects and the characteristics of the antenna pattern for different heights and ranges of the target, some geometric mechanisms such as Geometric Optics (GO) to predict the propagation factor were proposed in the past [1,2]. Analytical approaches have extended these analyses, based on detectors that take into account the time delays associated with the reflection components [3], as well as the effect on the signal-to-noise ratio (SNR) of the target [4–6]. Moreover, some detection coverage and visibility prediction tools based on a multipath scenario have been developed to provide an intuitive radar operation graphical control [7].


Despite those efforts, these models are limited to smooth surfaces without irregularity, where only one reflection occurs on the surface to reach the target. Additionally, most of the tools that predict the coverage taking into account the topographic relief are commercial and not easily accessible.

Motivated by the necessity of knowing the effect of the multipath on the probability of detection coverage over irregular terrains, this paper describes the effects resulting from the reflection of the waves over the surface. This effect is included in the SNR of the target through the propagation factor in the radar equation. Since the irregular nature of the terrain and the surface material influence the number of reflections [8], we reformulate the pattern propagation factor distribution described in [4–6] to include more than one reflected ray. From the resulting two-way power multipath factor distribution, we find analytical expressions for the detection probability in multipath environments for non-fluctuating (Swerling 0, SW0) and fluctuating (Swerling 1, SW1) targets. The constructive and destructive effects of the multipath are analyzed by comparing the expressions with the classic solution of free space and that of a single multipath, where only one specular reflection occurs. Another contribution is the evaluation of such expressions by using the number of reflections obtained from the non-linear reflection search algorithm described in [8], as a useful mechanism for detection probability coverage estimation. To the best of our knowledge, no similar results have been derived in the literature.

The remainder of this paper is organized as follows. Section 2 derives the pattern propagation factor for more than one reflected ray. Section 3 presents the SNR distribution in multipath for SW0 and SW1 targets and, consequently, Sect. 4 the probability of detection expressions. Numerical results and simulations are presented in Sect. 5. Finally, the conclusions are drawn in Sect. 6.

Notation: |·| denotes the absolute value operator.

2 Pattern Propagation Factor for More Than One Reflected Ray

The parameter that accounts for (i) the wave propagation effects resulting from non-free-space conditions such as multipath propagation, and (ii) the effect of the antenna pattern, is denoted as the one-way voltage propagation factor, $F$. Since the one-way power factor is $F^2$, the two-way power factor in the radar equation is denoted as $F^4$ (for a monostatic radar system, i.e., a transmitter co-located with a receiver). Thus, in multipath conditions, the SNR of the target for free space, $\chi_0$, which is calculated from the well-known radar equation, is now scaled by the factor $W = F^4$ as follows [1,2]:

$$\hat{\chi}_0 = \chi_0 W, \qquad (1)$$

where

$$\chi_0 = \frac{\sigma_T^2}{\sigma_n^2} = \frac{P_t G_t G_r \lambda^2 \sigma_0}{(4\pi)^3 R^4 K T_0 N_f B}, \qquad (2)$$

Fig. 1. Multipath propagation scenario over irregular terrain. This example describes the tracing of a direct ray and two reflected rays through the surface, from the radar at R to the target at T. In this example, k = 2.

where $P_t$ is the transmission power of the radar given in watts; $G_t$ and $G_r$ are the maximum gains of the transmitting and receiving antennas (dimensionless), respectively; $\lambda$ is the wavelength given in meters; $\sigma_0$ is the mean value of the radar cross section (RCS) of the target given in m²; $R$ is the slant range to the target given in m; $K = 1.38 \times 10^{-23}$ is the Boltzmann constant given in J K⁻¹; $T_0 = 290$ is the standard temperature given in kelvin; $N_f$ is the noise figure (dimensionless); and $B$ is the instantaneous bandwidth of the receiver given in hertz.

A simple multipath model based on the approach of Kerr is very well described in [1,2], where a single reflection occurs over a completely smooth surface. This approach considers a one-way propagation with two possible paths, resulting in re-radiated waves that return to the receiver via the same path. On the other hand, if an irregular surface is considered instead of a smooth one, more than one ray can be reflected [1,8].

Turning our attention to the case with more than one reflection, we consider that the time difference between the direct ray and a reflected ray does not exceed the radar resolution, that is, both rays fall in the same range bin. Other multipath rays that fall outside are not considered. Defining $k$ as the number of reflections, the general case for $F$ is given by [1]:

$$F = f_d \left| 1 + \sum_{z=1}^{k} \frac{\rho_z f_{r_z}}{f_d} \exp(-j\alpha_z) \right|, \qquad (3)$$

where $f_d$ and $f_{r_z}$ are the magnitudes of the antenna pattern factor (between 0 and 1) at the elevation angles of the direct ray and of the $z$-th reflected ray, respectively,


$\rho_z$ is the magnitude of the reflection coefficient of the surface for the $z$-th reflected ray, and $\alpha_z$ is the phase difference between the direct and the $z$-th reflected ray. In this expression, special attention is given to the terms of the summation, where (i) $\alpha_z \sim U[0, 2\pi]$, and (ii) $\rho_z$, which includes the effect of the material substance, the roughness and the convex curvature of the local reflecting surface, is distributed uniformly between a positive constant $C$ ($C < 1$) and 1, i.e., $\rho_z \sim U[C, 1]$, for horizontal polarization [1]. Thus, the factor $F$ is defined by a specular reflection component which is weakened by the diffuse reflection components due to the surface characteristics included in the reflection coefficient magnitude [9]. Due to the central limit theorem [10], as the number of reflections $k$ increases, the distribution of $F$ can be approximated by the Rayleigh distribution with scale parameter $\sigma_r$. Since the antenna factors, $f_d$ and $f_{r_z}$, are scalars and depend on the prior knowledge of a specific radiation pattern, for simplicity we consider $f_d = 1$ and $f_{r_z}$ to be included in $\rho_z$. Moreover, to facilitate the mathematical manipulation, the summation term of (3) is represented by its total: the resulting $\rho_z f_{r_z}$ and $\alpha_z$ through the $k$ reflections are denoted by $\rho$ and $\phi$, respectively. Thus, from (3), $F$ is written in real terms as

$$F = |1 + \rho \exp(-j\phi)| = \sqrt{1 + \rho^2 + 2\rho\cos\phi}. \qquad (4)$$
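As a quick numerical sanity check of this central-limit argument (not part of the original paper), the short Python sketch below draws the reflection magnitudes and phases as described above, evaluates the magnitude in (3) with unit antenna factors, and fits a Rayleigh scale to the samples; the lower bound C = 0.3 and the sample size are arbitrary illustrative choices.

```python
# Monte-Carlo sketch of the propagation factor F of Eq. (3) for k reflections.
# Assumptions (not from the paper): C = 0.3, unit antenna factors, 10^5 samples.
import numpy as np

rng = np.random.default_rng(1)

def sample_F(k, C=0.3, n=100_000):
    rho = rng.uniform(C, 1.0, size=(n, k))               # rho_z ~ U[C, 1]
    alpha = rng.uniform(0.0, 2.0 * np.pi, size=(n, k))   # alpha_z ~ U[0, 2*pi]
    return np.abs(1.0 + np.sum(rho * np.exp(-1j * alpha), axis=1))

for k in (1, 5, 12, 45):
    F = sample_F(k)
    sigma_r = np.sqrt(np.mean(F**2) / 2.0)   # ML Rayleigh scale fit of the samples
    print(f"k = {k:2d}: mean(F) = {F.mean():.2f}, fitted Rayleigh sigma_r = {sigma_r:.2f}")
```

For k = 1 the empirical distribution is far from Rayleigh, which is why the single-reflection case is handled separately through (5) and (6) below.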

2.1 One Way Voltage Multipath Factor Distribution

In [4–6], the distribution of $F$ for a single specular reflection ($k = 1$) is derived assuming that $\rho$ is fixed and that $\phi$ is uniformly distributed between 0 and $2\pi$. Using these assumptions, $f(F)$ can be written as

$$f(F) = \frac{2F}{\pi\sqrt{4\rho^2 - \left(F^2 - \rho^2 - 1\right)^2}}, \qquad F \in \left[\,|1-\rho|, |1+\rho|\,\right]. \qquad (5)$$

2.2 Two Way Power Multipath Factor Distribution

Since $W$ in (1) defines the multipath factor in terms of the round-trip power, i.e., $W = F^4$, the distribution of $W$ for $k = 1$ is obtained using a simple change of variables in (5) as [4–6]:

$$f(W) = \frac{1}{2\pi\sqrt{W}\sqrt{4\rho^2 - \left(\sqrt{W} - \rho^2 - 1\right)^2}}, \qquad W \in \left[\,|1-\rho|^4, |1+\rho|^4\,\right]. \qquad (6)$$

In the same way, for $k > 1$, $f(W)$ can be written as

$$f(W) = \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right), \qquad W \ge 0. \qquad (7)$$

3 Signal Plus Noise to Noise Ratio Distribution

In the hypothesis that the SW0 target echo comes along with the noise at the radar receiver, it is well known that the envelope of the received signal, $v$, is given by

$$v = |Z + \sigma_T|, \qquad (8)$$

where $Z$ is complex Gaussian distributed with zero mean and variance $\sigma_n^2/2$, and $v$ is Rician distributed with probability density function given by [2]

$$f_0(v) = \frac{2v}{\sigma_n^2} \exp\!\left(-\frac{v^2 + \sigma_T^2}{\sigma_n^2}\right) I_0\!\left(\frac{2v\sigma_T}{\sigma_n^2}\right), \qquad v \ge 0, \qquad (9)$$

where $\sigma_T^2$ and $\sigma_n^2$ are the mean target echo and noise powers, respectively, and $I_0(\cdot)$ is the modified Bessel function of the first kind and zero order. For the case of a SW1 target, $\sigma_T$ is a complex Gaussian random variable with zero mean and variance $\sigma_T^2/2$. Therefore, the envelope $v$ of the received signal is Rayleigh distributed as

$$f_1(v) = \frac{2v}{\sigma_T^2 + \sigma_n^2} \exp\!\left(-\frac{v^2}{\sigma_T^2 + \sigma_n^2}\right), \qquad v \ge 0. \qquad (10)$$

Since the analysis is performed in terms of SNR, it is convenient to normalize the received signal power embedded in noise with respect to the noise power, $\sigma_n^2$, as $\varsigma = v^2/\sigma_n^2$. Considering that $v$ results from (8), the distribution of the signal-plus-noise SNR, $\varsigma$, for both targets is obtained through a variable transformation and the relation $\chi_0 = \sigma_T^2/\sigma_n^2$. Therefore, $f_0(\varsigma)$ and $f_1(\varsigma)$ can, respectively, be written as [2]

$$f_0(\varsigma) = \exp(-\varsigma - \chi_0)\, I_0\!\left(2\sqrt{\chi_0\varsigma}\right), \qquad \varsigma \ge 0 \quad \text{(SW0)}, \qquad (11)$$

$$f_1(\varsigma) = \frac{1}{\chi_0 + 1} \exp\!\left(-\frac{\varsigma}{\chi_0 + 1}\right), \qquad \varsigma \ge 0 \quad \text{(SW1)}. \qquad (12)$$

Similarly, considering that $v$ results from a multipath environment, the distribution of $\varsigma$ for both targets is obtained by replacing $\chi_0$ with $W\chi_0$. In this case, the distribution of $\varsigma$ is conditioned on $W$, and can therefore be obtained as

$$f(\varsigma) = \int_0^{\infty} f(\varsigma\,|\,W)\, f(W)\, dW, \qquad (13)$$

where $f(W)$ is given in (7) and $f(\varsigma\,|\,W)$ is given in (11) and (12) for SW0 and SW1 targets, respectively. For SW0 targets in multipath, $f(\varsigma)$ is obtained as

$$f_0(\varsigma) = \int_0^{\infty} f_0(\varsigma\,|\,W)\, f(W)\, dW = \int_0^{\infty} \exp(-\varsigma - \chi_0 W)\, I_0\!\left(2\sqrt{\chi_0 W \varsigma}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW. \qquad (14)$$

Analogously, for SW1 targets in multipath, $f(\varsigma)$ is obtained as

$$f_1(\varsigma) = \int_0^{\infty} f_1(\varsigma\,|\,W)\, f(W)\, dW = \int_0^{\infty} \frac{1}{\chi_0 W + 1} \exp\!\left(-\frac{\varsigma}{\chi_0 W + 1}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW. \qquad (15)$$

4 Probability of Detection

The probability of detection, $P_D$, is defined as the integral of the SNR distribution from a defined threshold SNR level, $\tilde{T}$, to infinity [2]. Using (14), the probability of detection for SW0 targets can be obtained as

$$P_D = \int_{\tilde{T}}^{\infty} \int_0^{\infty} \exp(-\varsigma - \chi_0 W)\, I_0\!\left(2\sqrt{\chi_0 W \varsigma}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW\, d\varsigma. \qquad (16)$$

Fortunately, the inner integral can be simplified by using the relation for the probability of detection in the free-space case,

$$P_{D_{\mathrm{Free}}} = \int_{\tilde{T}}^{\infty} \exp(-\varsigma - \chi_0)\, I_0\!\left(2\sqrt{\chi_0\varsigma}\right) d\varsigma = Q_1\!\left(\sqrt{2\chi_0}, \sqrt{-2\log(P_{FA})}\right), \qquad (17)$$

where $Q_1(\cdot,\cdot)$ is the Marcum function. Finally, the probability of detection for SW0 targets for $k > 1$ can be obtained as

$$P_{D_0} = \int_0^{\infty} Q_1\!\left(\sqrt{2W\chi_0}, \sqrt{-2\log(P_{FA})}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW. \qquad (18)$$

In the same way, we can compute the probability of detection for SW1 targets using (15) as

$$P_D = \int_{\tilde{T}}^{\infty} \int_0^{\infty} \frac{1}{\chi_0 W + 1} \exp\!\left(-\frac{\varsigma}{\chi_0 W + 1}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW\, d\varsigma, \qquad (19)$$

which can be further simplified using the relation for the probability of detection in the free-space case,

$$P_{D_{\mathrm{Free}}} = \int_{\tilde{T}}^{\infty} \frac{1}{\chi_0 + 1} \exp\!\left(-\frac{\varsigma}{\chi_0 + 1}\right) d\varsigma = \exp\!\left(-\frac{\tilde{T}}{\chi_0 + 1}\right). \qquad (20)$$

Therefore, the probability of detection for SW1 targets for $k > 1$ is given as

$$P_{D_1} = \int_0^{\infty} \exp\!\left(-\frac{\tilde{T}}{W\chi_0 + 1}\right) \frac{1}{4\sqrt{W}\,\sigma_r^2} \exp\!\left(-\frac{\sqrt{W}}{2\sigma_r^2}\right) dW. \qquad (21)$$

For the sake of completeness, the probability of detection in multipath for a single specular reflection ($k = 1$) given in [4–6] for both targets is, respectively,

$$P_{D_0} = \int_{|1-\rho|^4}^{|1+\rho|^4} \frac{Q_1\!\left(\sqrt{2W\chi_0}, \sqrt{-2\log(P_{FA})}\right)}{2\pi\sqrt{W}\sqrt{4\rho^2 - \left(\sqrt{W} - \rho^2 - 1\right)^2}}\, dW, \qquad (22)$$

$$P_{D_1} = \int_{|1-\rho|^4}^{|1+\rho|^4} \frac{\exp\!\left(-\frac{\tilde{T}}{W\chi_0 + 1}\right)}{2\pi\sqrt{W}\sqrt{4\rho^2 - \left(\sqrt{W} - \rho^2 - 1\right)^2}}\, dW. \qquad (23)$$

5 Results and Discussions

In this section, we evaluate the $P_D$ performance by applying the analytical expressions derived in Sect. 4. Numerical results of the probability of detection coverage over different multipath propagation scenarios are also provided in order to corroborate the analytical results. For this, an alternative way to obtain a 2D coverage profile is based on the definition of a matrix of targets located hypothetically at T(x, y), as shown in Fig. 1. This is achieved by repeating the calculation of the SNR, and consequently of $P_D$, for each target.

5.1 Analytical Results


In the design of a radar, the adjustment of the operational parameters of the system is important in order to reach a required probability of detection and false alarm. To this end, it is necessary to know how the external factors are affecting the performance of the radar. An analysis of radar performance in multipath environments is shown in Fig. 2. This figure shows a comparison of the PD as a function of target SNR, for different reflection cases, when the detected target is SW0 and SW1, respectively.


Fig. 2. Probability of detection of (a) non–fluctuating (SW0) and (b) fluctuating (SW1) targets as a function of target SNR, for a free–space and multipath environments with k = 1, 5, 12, and 45 specular reflections.

The results are obtained using (17) and (20), respectively, for the free-space case; (22) and (23), respectively, for the single specular reflection case; and (18) and (21), respectively, for the complex multipath environments where more than one specular reflection occurs. In all cases, the false alarm probability is fixed at $10^{-6}$.
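These curves can be reproduced numerically. The Python sketch below (illustrative, not the authors' code) evaluates (17), (18), (20) and (21); it assumes the single-pulse square-law threshold T̃ = −ln(P_FA), which is consistent with the form of (17) and (20), and an arbitrary Rayleigh scale σ_r = 0.7 standing in for the value implied by the number of reflections.

```python
# Sketch: numerical evaluation of the free-space and multipath detection
# probabilities (17), (18), (20), (21). sigma_r = 0.7 is an illustrative value.
import numpy as np
from scipy import integrate
from scipy.stats import ncx2

PFA = 1e-6
T_thr = -np.log(PFA)                     # assumed threshold, T~ = -ln(PFA)

def marcum_q1(a, b):
    # Q_1(a, b) via the non-central chi-square survival function (2 dof, nc = a^2)
    return ncx2.sf(b**2, df=2, nc=a**2)

def f_W(W, sigma_r):
    # Two-way power multipath factor density of Eq. (7), k > 1
    return np.exp(-np.sqrt(W) / (2.0 * sigma_r**2)) / (4.0 * np.sqrt(W) * sigma_r**2)

def pd_sw0_free(chi0):                   # Eq. (17)
    return marcum_q1(np.sqrt(2.0 * chi0), np.sqrt(-2.0 * np.log(PFA)))

def pd_sw1_free(chi0):                   # Eq. (20)
    return np.exp(-T_thr / (chi0 + 1.0))

def pd_sw0_multipath(chi0, sigma_r):     # Eq. (18): average Eq. (17) over W
    g = lambda W: pd_sw0_free(W * chi0) * f_W(W, sigma_r)
    return integrate.quad(g, 1e-12, np.inf, limit=200)[0]

def pd_sw1_multipath(chi0, sigma_r):     # Eq. (21)
    g = lambda W: np.exp(-T_thr / (W * chi0 + 1.0)) * f_W(W, sigma_r)
    return integrate.quad(g, 1e-12, np.inf, limit=200)[0]

chi0 = 10 ** (10.0 / 10.0)               # example target SNR of 10 dB
print(pd_sw0_free(chi0), pd_sw0_multipath(chi0, 0.7))
print(pd_sw1_free(chi0), pd_sw1_multipath(chi0, 0.7))
```

Sweeping chi0 across the SNR axis for different sigma_r values should reproduce the qualitative behaviour reported in Fig. 2: a gain at low SNR and a mild loss at high SNR as the number of reflections grows.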


As seen in Fig. 2, the multipath case for both targets leads to constructive and destructive effects on radar detection with respect to the free-space case. Generally, the specular reflections are advantageous in the detection of targets with lower SNR (less than 12 dB for SW0 or less than 15 dB for SW1). Such a constructive effect improves for SNR below −5 dB as the number of reflections increases with respect to that of a single reflection. On the other hand, for higher SNRs, a minimal destructive effect is produced for a few specular reflections different from one (e.g., k = 5 and k = 12). This occurs because the exponential nature of the two-way power multipath factor distribution, f(W), decays faster as the number of reflections increases. In general, the destructive effect increases with the number of reflections and very high SNR values.

Let us assume a required P_D of at least 80% in a radar system. In this condition, it is well known that a minimum of 12.6 dB and 17.9 dB is necessary to detect SW0 and SW1 targets, respectively, in the free-space case. In the one-reflection case (k = 1), 20 dB and 23.7 dB are necessary for SW0 and SW1 targets, respectively; and in the few-reflections case (e.g., k = 5), 7.4 dB and 11.2 dB are necessary for SW0 and SW1 targets, respectively. Clearly, for a greater number of reflections (e.g., k = 45), more than 25 dB is necessary. This can be the situation of a target distant in elevation, where the reflections increase with height. Thus, using the same system, a distant target captured with 45 reflections would be detected with approximately 73% probability.

5.2 Probability of Detection Coverage

Based on the number of specular reflections obtained from the algorithm described in [8], this analysis incorporates the effect of the antenna pattern in addition to the multipath caused by the surface. The examples generated here consider the system parameters described in Table 1. The resulting SNR and the probability of detection for these examples are shown in Figs. 3, 4 and 5. The labels A and B² on the figures are chosen in order to verify the corresponding probability of detection values for both targets according to Fig. 2.

In the free-space case, the multipath effect is null and equal to F = 1 at the maximum of the radiation pattern. The decrease at an elevation angle different from that of the maximum is due only to the effect of the radiation pattern. In the reflective flat surface case, the alternating constructive and destructive effect is verified. For an ideal case where the surface is totally conductive with ρ_z = 1 (without losses due to reflection), the maximum value of F can be 2

² The labels of Figs. 3, 4 and 5 describe the position of hypothetical targets (A or B) at the coordinates X and Y, for their (a) target SNR, and (b), (c) estimated detection probability, given by Z.


Table 1. System parameters of the radar.

Parameter | Value
Radiation pattern | Gaussian
Antenna height | 30 m
Pulse repetition interval | 75.8 µs
Transmission power, Pt | 3 W
Maximum transmitting antenna gain, Gt | 33.81 dB
Maximum receiving antenna gain, Gr | 33.81 dB
Wavelength, λ | 0.0333 m
Mean RCS, σ0 | 3 m²
Noise figure, Nf | 2.4 dB
Receiver bandwidth, B | 10 MHz
Polarization | H
Dielectric constant | 15
Conductivity | 0.005 S/m
Reflection coefficient magnitude, ρz | 0.95
False alarm probability | 10⁻⁶

and consequently W = 16, representing an increase of 12 dB in the SNR value of the same target captured in free space, at the maximum of the radiation pattern [2]. Assuming a loss due to the diffuse reflection components, the local ρz may be less than 1 and will vary according to the local grazing angle. In this example, a maximum constructive effect is given for F = 1.4 (and then W = 3.8) representing an increase of 5.8 dB in the target SNR value at the maximum of the radiation pattern. For a reflective irregular terrain, this effect is random and depends on the number of reflections capturing the target. In this example, a maximum constructive effect is given for F = 4 (and then W = 256) representing an increase of 24.08 dB in the target SNR value and this can not necessarily be at the maximum of the radiation pattern. Based on the results obtained here, it is verified that a radar on any reflective terrain will be able to exploit the constructive effect of the multipath in the detection of low SNR targets, compared with the theoretical free-space case. This can be seen as a diversity mechanism to mitigate possible detection losses of distant targets. However, when working with high transmission power, the nature of the terrain can affect the performance of the radar (Fig. 4).
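For reference, the free-space SNR feeding these coverage maps follows directly from (2) with the Table 1 parameters. The sketch below is illustrative only; the slant ranges in the loop are arbitrary examples, not values taken from the paper.

```python
# Free-space target SNR chi_0 of Eq. (2) with the Table 1 parameters.
import numpy as np

K, T0 = 1.38e-23, 290.0                 # Boltzmann constant [J/K], std. temperature [K]
Pt = 3.0                                # transmission power [W]
Gt = Gr = 10 ** (33.81 / 10.0)          # antenna gains, dB -> linear
lam, sigma0 = 0.0333, 3.0               # wavelength [m], mean RCS [m^2]
Nf, B = 10 ** (2.4 / 10.0), 10e6        # noise figure (linear), bandwidth [Hz]

def chi0_free_space(R):
    """Free-space SNR for a target at slant range R in meters."""
    return (Pt * Gt * Gr * lam**2 * sigma0) / ((4.0 * np.pi)**3 * R**4 * K * T0 * Nf * B)

for R in (5e3, 10e3, 20e3):             # example ranges (assumed, not from the paper)
    print(f"R = {R/1e3:4.0f} km -> chi0 = {10.0*np.log10(chi0_free_space(R)):6.1f} dB")
```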


Fig. 3. Coverage of (a) Target SNR, (b) Probability of detection for non-fluctuating target (SW0), and (c) probability of detection for fluctuating target (SW1) in free space.


Fig. 4. Coverage of (a) Target SNR, (b) Probability of detection for non-fluctuating target (SW0), and (c) probability of detection for fluctuating target (SW1) on a reflective flat surface.


Fig. 5. Coverage of (a) Target SNR, (b) Probability of detection for non-fluctuating target (SW0), and (c) probability of detection for fluctuating target (SW1) on a reflective irregular surface. The triangle on the right side indicates the position of the radar.

The effect of multipath could be evaluated in practice by following the approach suggested in [11], which uses a secondary surveillance radar (SSR) ground station and a transponder carried onboard aircraft. Those data show a variation of the received signal power whose losses are associated with reflections of the waves. This phenomenon also coincides with the results presented in Fig. 5.

6 Conclusions

In this paper, we have investigated the effect of the multipath on the probability of detection when the radar operates over different reflective surfaces. We reviewed the concept of the propagation factor, which is included in the radar equation to describe the effect of the reflection of the waves over the surface. Since the magnitude of the total reflection coefficient can be very well approximated by a Rayleigh distribution, the two-way propagation factor distribution could be defined as a function of the scale parameter based on the number of reflections. From that, we derived new analytical expressions for the probability of detection for non-fluctuating (Swerling 0) and fluctuating (Swerling 1) targets when more than one specular reflection occurs. The results were compared with the known results in the literature obtained in a free-space environment, where there is no reflection effect, and in a single multipath environment (reflective flat surface), where only one reflection occurs.

References

1. Blake, L.V.: Radar Range-Performance Analysis, 1st edn. Artech House, Norwood (1980)
2. Richards, M.A.: Principles of Modern Radar: Basic Principles. Scitech (2015)
3. Hayvaci, H.T., De Maio, A., Erricolo, D.: Performance analysis of diverse GLRT detectors in the presence of multipath. In: 2012 IEEE Radar Conference, pp. 0902–0906 (2012)
4. Wilson, S.L., Carlson, D.: Radar detection in multipath. IEE Proc. Radar, Sonar Navig. 146, 45–54 (1999)
5. Jang, Y., Lim, H., Yoon, D.: Multipath effect on radar detection of nonfluctuating targets. IEEE Trans. Aerosp. Electron. Syst. 51, 792–795 (2015)
6. Yang, Y., Xiao, S.P., Zhang, W.M.: Effect of specular reflection on radar detection. In: CIE International Conference on Radar, pp. 1–5 (2016)
7. Brooke, M.: Cambridge Pixel Provides Free Tool for Radar Coverage Visualisation (2016). http://www.cambridgepixel.com/news/2016-09-26-Radar-CoverageTool.asp
8. Flores, A.C., et al.: Radar coverage over irregular terrain: a practical algorithm for multipath propagation. In: IEEE Radar Conference (RadarConf18), pp. 1–4 (2018)
9. Blair, W.D., Brandt-Pearce, M.: Statistics of monopulse measurements of Rayleigh targets in the presence of specular and diffuse multipath. In: Proceedings of the 2001 IEEE Radar Conference (Cat. No.01CH37200), pp. 369–375 (2001)
10. Papoulis, A.: Probability, Random Variables, and Stochastic Processes, 4th edn. McGraw-Hill Higher Education, New York (2002)
11. Stevens, M.C.: Multipath and interference effects in secondary surveillance radar systems. IEE Proc. F, Commun. Radar Signal Process. 128(1), 43–53 (1981)

Low Detection Symbol Algorithm for MIMO Systems with Big Number of Antennas

Juan Minango(B), Marcelo Zambrano, Wladimir Paredes Parada, and Cristian Tasiguano

Instituto Tecnológico Universitario Rumiñahui, Sangolquí, Ecuador
[email protected]

Abstract. A low-complexity detection algorithm based on symbol-flipping (SF) is presented, which obtains maximum likelihood (ML) performance in MIMO systems with a big number of antennas. Through a simulated spatial-search analysis, we study the SF procedure, which is based on local search. Based on this analysis, we develop a novel detector algorithm with simple SF for MIMO systems. Furthermore, we propose to employ several SF runs whose initial points belong to a set of random initial solutions, choosing the best SF result as the final solution.

Keywords: Spatial multiplexing MIMO systems · Maximum likelihood · Symbol-flipping · Local search · BER

1 Introduction

It is well-known that symmetric spatial multiplexing multiple-input multiple-output (SM-MIMO) systems with a large number of transmit and receive antennas present high spectral efficiency, of the order of tens to hundreds of bits per second per Hertz (b/s/Hz), by transmitting independent symbols simultaneously through multiple transmit antennas [1–4]. Thus, SM-MIMO is currently considered the most promising technology to meet the increasing demand for high data rates on limited spectrum resources for the next generation of wireless communication networks, such as the fifth generation (5G) cellular network [4–6]. For SM-MIMO systems, the full diversity is equal to the number of receive antennas, which is achieved by the maximum likelihood (ML) detector [7]. Although the ML detector offers the optimum performance, it encounters difficulties in practical implementations due to its high computational complexity [8–10], which is exponential in the number of transmit antennas. An alternative to the ML detector, but with less complexity, is the sphere detector (SD), which limits the search inside a sphere of a chosen radius [9,10]. Although the SD achieves the optimum performance, its mean complexity is still exponential, but with a smaller exponent which depends on the signal-to-noise ratio (SNR) and on the sphere radius [11,12]. In contrast, linear detectors, such as the matched filter (MF), zero forcing (ZF) and minimum mean square error (MMSE) [4], are attractive from a complexity


point of view. However, for example, the ZF detector does not achieve the full diversity of SM-MIMO systems. In order to improve the performance of linear detectors without a significant increase in complexity, several studies have focused on the implementation of linear detectors preceded by lattice reduction (LR) [10–12], whose main purpose is to pre-process the channel matrix so that it is transformed into a nearly orthogonal matrix, mitigating the inter-antenna interference. Linear detectors with LR have the potential to achieve full diversity at the expense of a performance penalty in comparison to the ML detector [11]. However, this performance difference increases with the number of transmit antennas in the spatial multiplexing (SM) MIMO system. Thus, it is necessary to propose detectors that achieve a better trade-off between performance and complexity for SM-MIMO.

By focusing on the behavior of local-search solutions, we find that the likelihood of reaching the optimum solution from local solutions using the SF procedure increases with the number of transmit and receive antennas, so that SF may attain the optimum ML performance in SM-MIMO systems with hundreds of antennas. Specifically, the SF procedure, which is a heuristic local search (LS) algorithm for solving computationally hard optimization problems [13,14], allows moving among different local sub-optimum solutions in the search space of possible transmitted signal vectors until a solution deemed optimum is found. On the other hand, for SM-MIMO with a reduced number of antennas (from 10 to 100), SF cannot obtain ML performance; thus, a perturbation method to generate a useful set of initial detection points for SF is proposed. In summary, this paper proposes a low-complexity detection scheme using SF with different initial solution vectors for symmetric SM-MIMO systems.

2 System Model

A symmetric SM-MIMO system with $N_T$ transmit and $N_R$ receive antennas is used, where $N_T$ symbols are transmitted through the $N_T$ antennas at the same time. The $N_R \times 1$ received vector $\mathbf{y}$ is

$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad (1)$$

where $\mathbf{x} \in M^{N_T}$ denotes the $N_T \times 1$ transmitted signal vector, $\mathbf{H}$ is the $N_R \times N_T$ Rayleigh fading MIMO channel matrix, and $\mathbf{n}$ is the $N_R \times 1$ AWGN noise vector.

2.1 Maximum Likelihood Detector

From (1), the aim of the maximum likelihood (ML) detector is to determine the transmitted signal vector $\hat{\mathbf{x}}$, over the whole set of $M^{N_T}$ possible transmitted vectors, which is nearest, in terms of Euclidean distance, to the received vector $\mathbf{y}$ for a given channel matrix $\mathbf{H}$. Mathematically, this can be written as [3,4]:

$$\hat{\mathbf{x}}_{\mathrm{ML}} = \arg\min_{\tilde{\mathbf{x}} \in M^{N_T}} \|\mathbf{y} - \mathbf{H}\tilde{\mathbf{x}}\|^2 \stackrel{\Delta}{=} \arg\min_{\tilde{\mathbf{x}} \in M^{N_T}} \left\{ \tilde{\mathbf{x}}^H \mathbf{G}\tilde{\mathbf{x}} - 2\tilde{\mathbf{x}}^H \tilde{\mathbf{y}} \right\}, \qquad (2)$$


where $\mathbf{G} = \mathbf{H}^H\mathbf{H}$ is the Gram matrix and $\tilde{\mathbf{y}} = \mathbf{H}^H\mathbf{y}$. Its computational complexity is exponential in $N_T$, that is, $\mathcal{O}(M^{N_T})$, which is not feasible even for a moderate number of transmit antennas ($N_T > 5$) [4].
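As a point of comparison for the detectors discussed next, the following sketch (not from the paper) implements this exhaustive search for a real-valued toy model; its M^NT enumeration makes it usable only for very small NT, which is precisely the limitation stated above.

```python
# Brute-force ML detection of Eq. (2) for a small real-valued toy system.
import itertools
import numpy as np

def ml_detect(y, H, amplitudes):
    """Exhaustive minimum-distance search over all M^NT candidate vectors."""
    best, best_cost = None, np.inf
    for cand in itertools.product(amplitudes, repeat=H.shape[1]):
        x = np.array(cand, dtype=float)
        cost = np.sum((y - H @ x) ** 2)          # squared Euclidean distance
        if cost < best_cost:
            best, best_cost = x, cost
    return best
```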

3 Proposed Detector

3.1 Symbol Flipping Procedure

SF for SM-MIMO detection consists of exploring local solutions in order to reach the optimum solution. The SF is applied to a current solution whose elements are defined by the constellation $M$; SF begins from a first solution and then moves to other solutions. Given $\hat{\mathbf{x}}^{(k)}$ as the $k$-th iteration solution, the initial solution $\hat{\mathbf{x}}^{(0)}$ used to start the SF is given by:

$$\hat{\mathbf{x}}^{(0)} = \lfloor \mathbf{B}\mathbf{y} \rceil_M, \qquad (3)$$

where $\mathbf{B}$ is a matrix and the operation $\lfloor \cdot \rceil_M$ is the quantization to the constellation with set of amplitudes $M$. For the MF, $\mathbf{B} = \mathbf{H}^H$, and hence $\tilde{\mathbf{y}} = \mathbf{H}^H\mathbf{y}$ is the MF solution before the quantization. Then, from (3), we have that $\hat{\mathbf{x}}^{(0)} = \lfloor \tilde{\mathbf{y}} \rceil_M$. The MF detector achieves a better performance than the ZF detector for a big number of transmit and receive antennas (see Fig. 1) [15, Section 10.3.2].


Fig. 1. BER versus Eb /N0 for SM-MIMO with NT = NR = 100 antennas with ZF, MF and ML detector.

Considering (2), let

$$\Phi[\hat{\mathbf{x}}] = \hat{\mathbf{x}}^H \mathbf{G}\hat{\mathbf{x}} - 2\hat{\mathbf{x}}^H \tilde{\mathbf{y}} \qquad (4)$$

be the ML cost function of a vector $\hat{\mathbf{x}}$. The SF is then applied from the first to the last element of $\hat{\mathbf{x}}^{(0)}$, and $\hat{\mathbf{x}}^{(0)}$ is updated whenever the ML cost given by (4) becomes lower than the current ML cost.


When a local solution is obtained, we pass to a second stage, where SF is performed on two consecutive positions of the current solution, then on three positions, and so forth. Furthermore, in this paper we consider just one symbol flip per iteration.

Let $\hat{\mathbf{x}}^{(k)}$ be the solution at the $k$-th iteration, whose elements are $\hat{x}_i^{(k)}$, $i = 1, \cdots, N_T$. Suppose that at the $(k+1)$-th iteration, the solution $\hat{\mathbf{x}}^{(k+1)}$ can be obtained from $\hat{\mathbf{x}}^{(k)}$ by flipping the $i$-th element only, that is:

$$\hat{\mathbf{x}}^{(k+1)} = \hat{\mathbf{x}}^{(k)} + \delta_i^{(k)} \mathbf{e}_i, \qquad (5)$$

where $\mathbf{e}_i$ represents a vector whose $i$-th entry is $\pm 1$ and all other elements are zero, and $\delta_i^{(k)} = \hat{x}_i^{(k+1)} - \hat{x}_i^{(k)}$ depends on $M$ and on $\hat{x}_i^{(k)}$. The SF is successful if

$$\Delta^{(k+1)} = \Phi[\hat{\mathbf{x}}^{(k+1)}] - \Phi[\hat{\mathbf{x}}^{(k)}] < 0. \qquad (6)$$

Expanding,

$$\Delta^{(k+1)} = \left[ (\hat{\mathbf{x}}^{(k)} + \delta_i^{(k)}\mathbf{e}_i)^H \mathbf{G} (\hat{\mathbf{x}}^{(k)} + \delta_i^{(k)}\mathbf{e}_i) - 2(\hat{\mathbf{x}}^{(k)} + \delta_i^{(k)}\mathbf{e}_i)^H \tilde{\mathbf{y}} \right] - \left[ \hat{\mathbf{x}}^{(k)H} \mathbf{G}\hat{\mathbf{x}}^{(k)} - 2\hat{\mathbf{x}}^{(k)H} \tilde{\mathbf{y}} \right]$$
$$= |\delta_i^{(k)}|^2 \mathbf{e}_i^H\mathbf{G}\mathbf{e}_i + \delta_i^{(k)} \hat{\mathbf{x}}^{(k)H}\mathbf{G}\mathbf{e}_i + \delta_i^{(k)} \mathbf{e}_i^H\mathbf{G}\hat{\mathbf{x}}^{(k)} - 2\delta_i^{(k)} \mathbf{e}_i^H\tilde{\mathbf{y}}$$
$$= |\delta_i^{(k)}|^2 \mathbf{e}_i^H\mathbf{G}\mathbf{e}_i + 2\delta_i^{(k)} \mathbf{e}_i^H\left(\mathbf{G}\hat{\mathbf{x}}^{(k)} - \tilde{\mathbf{y}}\right)$$
$$= |\delta_i^{(k)}|^2 \mathbf{e}_i^H\mathbf{G}\mathbf{e}_i + 2\delta_i^{(k)} \mathbf{e}_i^H\mathbf{r}^{(k)}, \qquad (7)$$

where we have used that $\mathbf{e}_i^H\mathbf{G}\hat{\mathbf{x}}^{(k)} = \hat{\mathbf{x}}^{(k)H}\mathbf{G}\mathbf{e}_i$, since $\mathbf{G} = \mathbf{G}^H$, and $\mathbf{r}^{(k)} = \mathbf{G}\hat{\mathbf{x}}^{(k)} - \tilde{\mathbf{y}}$ denotes the $k$-th residual vector. Besides, it is easy to show that

$$|\delta_i^{(k)}|^2 \mathbf{e}_i^H\mathbf{G}\mathbf{e}_i = |\delta_i^{(k)}|^2 G_{i,i} \qquad (8)$$

and

$$\delta_i^{(k)} \mathbf{e}_i^H\left(\mathbf{G}\hat{\mathbf{x}}^{(k)} - \tilde{\mathbf{y}}\right) = \delta_i^{(k)} r_i^{(k)}. \qquad (9)$$

The residual vector $\mathbf{r}^{(k)}$ is updated as

$$\mathbf{r}^{(k)} = \mathbf{r}^{(k-1)} + \delta_i^{(k-1)} \mathbf{g}_i. \qquad (10)$$

Thus, considering (8) and (9), we can rewrite (7) as:

$$\Delta^{(k+1)} = |\delta_i^{(k)}|^2 G_{i,i} + 2\delta_i^{(k)} r_i^{(k)}. \qquad (11)$$

From (11), note that $\Delta^{(k+1)}$ depends on $\delta_i^{(k)}$, which takes only certain integer values according to the constellation set of amplitudes $M$. Then, $\Delta^{(k+1)}$ can be evaluated as

$$\Delta^{(k+1)} = \min_{\delta \in P} \left\{ |\delta|^2 G_{i,i} + 2\delta r_i^{(k)} \right\}, \qquad (12)$$


where $P$ is the set of all possible $\delta_i^{(k)}$ values. Finally, from (6) and (12), we have that the $k$-th SF is successful if $\Delta^{(k+1)} < 0$, i.e.,

$$|\delta_i^{(k)}|^2 G_{i,i} < -2\delta_i^{(k)} r_i^{(k)}, \quad \text{that is,} \quad \delta_i^{(k)*} G_{i,i} < -2 r_i^{(k)}, \qquad (13)$$

where $(\cdot)^*$ represents the complex conjugate. The SF procedure is summarized in Algorithm 1.

Algorithm 1. SF
1: input:
2: x̂^(0) = ⌊H^H y⌉_M;
3: ỹ = H^H y;
4: G = H^H H;
5: k = 0;
6: r^(k) = G x̂^(k) − ỹ;
7: i = 1;
8: while i ≤ N_T do
9:   Based on x̂_i^(k), generate P = {δ_{i,j}^(k)}, j = 1, ..., M − 1;
10:  Δ^(k+1) = min_{δ∈P} { |δ|² G_{i,i} + 2δ r_i^(k) };
11:  if Δ^(k+1) < 0 then
12:    x̂^(k+1) = x̂^(k) + δ_i^(k) e_i;
13:    r^(k+1) = r^(k) + δ_i^(k) g_i;
14:    i = 0;
15:    k = k + 1;
16:  end if
17:  i = i + 1;
18: end while
19: output:
20: Detected vector x̂^(k).
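A minimal Python sketch of Algorithm 1 is given below. It is written for a real-valued SM-MIMO model with M-PAM amplitudes so that the cost increment of (12) stays real; the function and variable names, as well as the toy channel setup, are illustrative and not the authors' implementation.

```python
# Sketch of the SF detector of Algorithm 1 for a real-valued M-PAM model.
import numpy as np

def sf_detect(y, H, amplitudes, x_init=None):
    """Symbol-flipping detection; starts from the quantized MF solution of (3)."""
    G = H.T @ H                                   # Gram matrix
    y_mf = H.T @ y                                # matched-filter output y~
    amps = np.asarray(amplitudes, dtype=float)
    if x_init is None:                            # x_hat^(0) = quantize(H^T y)
        x_hat = amps[np.argmin(np.abs(y_mf[:, None] - amps[None, :]), axis=1)]
    else:
        x_hat = np.array(x_init, dtype=float)
    r = G @ x_hat - y_mf                          # residual r^(k) = G x_hat - y~
    i = 0
    while i < len(x_hat):
        deltas = amps[amps != x_hat[i]] - x_hat[i]          # candidate flips P
        costs = deltas**2 * G[i, i] + 2.0 * deltas * r[i]   # Eq. (12)
        j = np.argmin(costs)
        if costs[j] < 0:                          # successful flip, Eq. (6)
            x_hat[i] += deltas[j]
            r += deltas[j] * G[:, i]              # residual update, Eq. (10)
            i = 0                                 # restart the sweep (line 14)
        else:
            i += 1
    return x_hat

# Toy usage: random 4-PAM symbols over an i.i.d. Gaussian channel (assumed setup).
rng = np.random.default_rng(0)
NT = NR = 16
amps = np.array([-3.0, -1.0, 1.0, 3.0])
x = rng.choice(amps, NT)
H = rng.normal(size=(NR, NT)) / np.sqrt(NR)
y = H @ x + 0.1 * rng.normal(size=NR)
print("symbol errors:", int(np.sum(sf_detect(y, H, amps) != x)))
```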

Figures 2 and 3 show the BER versus $N_T = N_R$ for an SM-MIMO system with 4-QAM, for $E_b/N_0$ of 7 and 10 dB, respectively. From these figures, we note that for a low number of antennas $N_T$ and $N_R$, there is a degradation in the BER of the SF procedure in comparison to the optimum performance. However, for a high number of antennas, the BER of the SF procedure approaches the optimum performance as $E_b/N_0$ increases, which is verified by comparing Fig. 2 and Fig. 3. Thus, for an $E_b/N_0$ of 10 dB and $N_T = N_R = 140$ antennas, the SF procedure achieves optimum performance (see Fig. 3), while for an $E_b/N_0$ of 7 dB, the SF procedure presents quasi-optimum performance for hundreds of antennas. This can be explained by random matrix theory [16,17]: when $N_T$ and $N_R$ are high, the matrix $\mathbf{G} = \mathbf{H}^H\mathbf{H}$ approaches the scaled identity $N_R \mathbf{I}_{N_T}$ for an uncorrelated Rayleigh fading channel, which is known in the literature as channel hardening [16].
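The channel-hardening effect invoked here is easy to observe numerically; the short sketch below (illustrative only, with an assumed i.i.d. complex Gaussian channel) shows how the off-diagonal entries of G = H^H H shrink relative to its diagonal as the array grows.

```python
# Numerical illustration of channel hardening: G/N_R approaches the identity.
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 50, 200):
    H = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
    G = H.conj().T @ H
    off = G - np.diag(np.diag(G))
    print(f"N = {n:3d}: max |off-diagonal| / N = {np.abs(off).max() / n:.3f}")
```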



Fig. 2. BER versus NT = NR for an Eb/N0 = 7 dB.


Fig. 3. BER as a function of the number of transmitted and received antennas with NT = NR for an Eb /N0 = 10 dB.

3.2 Initial Random Solution

For the SF procedure to escape from a local solution, new initial random solution vectors are necessary; these initial random solution vectors are used as starting points to reach the global optimum solution. They are obtained by applying perturbations to the initial MF solution $\hat{\mathbf{x}}^{(0)}$ described in Sect. 3.1. By exploiting the information of the received signal vector $\mathbf{y}$, we construct a set of initial solution vectors $\Psi$, where the first initial solution vector of $\Psi$ is $\hat{\mathbf{x}}^{(0)}$ and the other initial solution vectors are randomly generated based on both $\hat{\mathbf{x}}^{(0)}$ and $\mathbf{y}$.


To build the set of initial vectors in $\Psi$, using (1), the effect of the first initial solution can be removed from $\mathbf{y}$, that is:

$$\hat{\mathbf{y}} = \mathbf{y} - \mathbf{H}\hat{\mathbf{x}}^{(0)} = \mathbf{H}\mathbf{d} + \mathbf{n}, \qquad (14)$$

where $\mathbf{d} = \mathbf{x} - \hat{\mathbf{x}}^{(0)}$ is the transmitted signal error vector. Considering that $\hat{\mathbf{x}}^{(0)}$ is the global optimum solution, we have that $\mathbf{d} = \mathbf{0}$ and consequently (14) is reduced to:

$$\hat{\mathbf{y}} = \mathbf{n}, \qquad (15)$$

that is:

$$\|\hat{\mathbf{y}}\|^2 = N_T \sigma_n^2. \qquad (16)$$

However, if $\hat{\mathbf{x}}^{(0)}$ is a sub-optimum solution, some elements of $\hat{\mathbf{y}}$ in (15) differ from $\mathbf{n}$; these are named unreliable positions. As $\mathbf{n}$ cannot be known a priori, the unreliable positions of $\hat{\mathbf{y}}$ can be determined as follows:

$$a_i = \begin{cases} 1 & \text{if } |\hat{y}_i|^2 > \sigma_n^2/2 \\ 0 & \text{otherwise} \end{cases}, \qquad i = 1, \cdots, N_T. \qquad (17)$$

Consequently, the number of symbols to be changed from $\hat{\mathbf{x}}^{(0)}$ is obtained as $c = \sum_i a_i$. Note that the maximum number of possible symbol changes is $c_{\max} = N_T$; but, according to Fig. 1, $\hat{\mathbf{x}}^{(0)}$ is a good initial solution, and consequently $c \ll N_T$. In the following, steps 1 to 4 are conducted for the implementation of the SF procedure for the initial solution vectors in $\Psi$ (a short code sketch of these steps is given further below):

1. Set $\hat{\mathbf{x}}_0^{(0)} = \hat{\mathbf{x}}^{(0)}$ as the first initial solution vector in $\Psi$;
2. From $\hat{\mathbf{x}}_0^{(0)}$, $L - 1$ initial random vectors are generated, which must differ in at least $c$ positions. For this, we choose $c$ random elements from the vector $\hat{\mathbf{x}}_0^{(0)}$; these $c$ elements are then replaced with constellation symbols from $M$ sampled uniformly at random. This process is performed to generate each one of the $L-1$ initial random vectors. Thus, $\Psi = \{\hat{\mathbf{x}}_0^{(0)}, \hat{\mathbf{x}}_1^{(0)}, \cdots, \hat{\mathbf{x}}_{L-1}^{(0)}\}$;
3. Perform the SF for each of the $L$ vectors of $\Psi$, obtaining $\hat{\mathbf{x}}_0^{(k)}, \hat{\mathbf{x}}_1^{(k)}, \cdots, \hat{\mathbf{x}}_{L-1}^{(k)}$;
4. Finally, the local solution with minimum ML cost is taken as the final solution.

The number $L$ of initial vectors used as starting points in the SF procedure is obtained by considering Fig. 4, which shows the BER versus $N_T = N_R$ for different numbers of initial solution vectors $L$, for 4-QAM and $E_b/N_0 = 10$ dB. From this figure, observe that as the number of initial solution vectors $L$ increases, a better performance is obtained, especially for a low number of antennas $N_T = N_R$. Thus, for example, for an SM-MIMO system with $N_T = N_R = 40$ antennas, $L = 7$ initial solution vectors are necessary to achieve the optimum performance, which demonstrates the effectiveness of this approach.
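The sketch below (again illustrative rather than the authors' implementation) wires steps 1 to 4 together on top of the sf_detect() sketch from Sect. 3.1; the noise power sigma_n2 is assumed known, as required by (17).

```python
# Multi-start SF detection following steps 1-4, reusing sf_detect() from above.
import numpy as np

def multi_start_sf(y, H, amplitudes, sigma_n2, L, rng):
    G, y_mf = H.T @ H, H.T @ y
    amps = np.asarray(amplitudes, dtype=float)
    x0 = amps[np.argmin(np.abs(y_mf[:, None] - amps[None, :]), axis=1)]   # step 1
    # Unreliable positions of Eq. (17) give the number c of symbols to perturb.
    c = int(np.sum((y - H @ x0) ** 2 > sigma_n2 / 2.0))
    starts = [x0]
    for _ in range(L - 1):                                                # step 2
        xp = x0.copy()
        idx = rng.choice(len(x0), size=max(c, 1), replace=False)
        xp[idx] = rng.choice(amps, size=len(idx))
        starts.append(xp)
    results = [sf_detect(y, H, amps, x_init=s) for s in starts]           # step 3
    ml_cost = lambda x: x @ G @ x - 2.0 * x @ y_mf                        # Eq. (4)
    return min(results, key=ml_cost)                                      # step 4
```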


Fig. 4. BER versus NT = NR , for Eb /N0 = 10 dB.

4 Numerical Results

4.1 BER Performance

Figure 5 shows the BER versus $E_b/N_0$ for an SM-MIMO system with $N_T = N_R = 20$ and 4-QAM with various numbers $L$ of initial random vectors, which are generated following steps 1 and 2 described in Sect. 3.2. Furthermore, the optimum performance of the ML detector is included as a benchmark. From this figure, we observe that the SF detector algorithm with $L = 4$ does not provide good performance. For $L = 6$, SF produces a significant BER improvement in comparison to the case with $L = 4$; however, there is a BER floor at high $E_b/N_0$. As $L$ increases, it is evident that the BER floor level is reduced. Thus, with $L = 10$, the SF detector algorithm is able to achieve the optimum performance. Note that the maximum number of initial random vectors $L$ considered in these SM-MIMO systems was based on Fig. 4.

Figures 6 and 7 show the BER performance of the SF detector algorithm for SM-MIMO systems with $N_T = N_R = 50$ and $N_T = N_R = 100$ antennas employing 16-QAM modulation, respectively. From these figures, we note that for both SM-MIMO systems, the SF detector algorithm is able to achieve the optimum performance with 5 and 2 initial random vectors, respectively, which illustrates the ability of the SF detector to reach the optimum performance in SM-MIMO systems with a large number of transmit and receive antennas.



Fig. 5. BER versus Eb /N0 for SM-MIMO NT = NR = 20 antennas and 4-QAM employing SF detector algorithm.


Fig. 6. BER versus Eb /N0 for SM-MIMO NT = NR = 50 antennas and 16-QAM.


Fig. 7. BER versus Eb /N0 for SM-MIMO NT = NR = 100 antennas and 16-QAM.

5 Conclusion

We have first analyzed, through simulations, the local solutions in the search space of the ML detector obtained by employing the symbol-flipping (SF) procedure in SM-MIMO systems with a big number of antennas. We have shown that the optimum performance can be achieved by the SF procedure as the number of antennas increases in SM-MIMO systems, and we have proposed a novel low-complexity detector algorithm which employs SF in SM-MIMO with a big number of antennas. When the number of antennas is less than 100, the SF detector is not able to achieve the optimum performance; to solve this, we have proposed a method which randomly generates several starting-point vectors from the matched-filter detected vector and, through SF procedures, chooses the SF result with the lowest ML cost as the detected signal vector, i.e., the global optimum solution.

References

1. Qiao, L., Liang, S., Jiang, Z., Chi, N.: Spatial multiplexing by joint superposed signal and power diversity for a 2×2 MIMO VLC system. In: International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), vol. 2018, pp. 368–371 (2018)
2. Yousif, B.B., Elsayed, E.E.: Performance enhancement of an orbital-angular-momentum-multiplexed free-space optical link under atmospheric turbulence effects using spatial-mode multiplexing and hybrid diversity based on adaptive MIMO equalization. IEEE Access 7, 84401–84412 (2019)
3. Kalachikov, A.A., Shelkunov, N.S.: Performance evaluation of the detection algorithms for MIMO spatial multiplexing based on analytical wireless MIMO channel models. In: XIV International Scientific-Technical Conference on Actual Problems of Electronics Instrument Engineering (APEIE), vol. 2018, pp. 180–183 (2018)
4. Chockalingam, A., Rajan, B.S.: Large MIMO Systems. Cambridge University Press, New York (2014)
5. Chen, Y.-H., Wong, K.-L., Li, W.-Y.: 4 × 4 MIMO performance of two conjoined dual wideband antennas including the feedline effects for 5G smartphones. In: IEEE Asia-Pacific Microwave Conference (APMC), vol. 2019, pp. 1488–1490 (2019)
6. You, C., Jung, D., Song, M., Wong, K.-L.: Advanced 12×12 MIMO antennas for next generation 5G smartphones. In: IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting, vol. 2019, pp. 1079–1080 (2019)
7. Chakiki, M.A.F., Astawa, I.G.P., Budikarso, A.: Performance analysis of DVB-T2 system based on MIMO using low density parity check (LDPC) code technique and maximum likelihood (ML) detection. In: International Electronics Symposium (IES), vol. 2020, pp. 169–173 (2020)
8. Zeng, J., Lin, J., Wang, Z.: A serial maximum-likelihood detection algorithm for massive MIMO systems. In: 2020 18th IEEE International New Circuits and Systems Conference (NEWCAS), pp. 78–81 (2020)
9. Zhu, H.-Y., Zhu, Y.-J., Zhang, J.-K., Zhang, Y.-Y.: A double-layer VLC system with low-complexity ML detection and binary constellation designs. IEEE Commun. Lett. 19(4), 561–564 (2015)
10. Kuo, I.-M., Hu, W.-C., Chiueh, T.-D.: Limited search sphere decoder and adaptive detector for NOMA with SU-MIMO. In: IEEE Asia Pacific Conference on Circuits and Systems (APCCAS), vol. 2016, pp. 573–576 (2016)
11. Li, J., Li, W.: A novel sphere detection algorithm for MQAM MIMO systems. In: IEEE International Conference on Electronic Information and Communication Technology (ICEICT), vol. 2016, pp. 34–37 (2016)
12. Mansour, M.M., Alex, S.P., Jalloul, L.M.: Reduced complexity soft-output MIMO sphere detectors, part I: algorithmic optimizations. IEEE Trans. Signal Process. 62(21), 5505–5520 (2014)
13. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: overview and conceptual comparison. ACM Comput. Surv. 35(3), 268–308 (2003)
14. Aarts, E., Lenstra, J.K. (eds.): Local Search in Combinatorial Optimization, 1st edn. John Wiley, New York (1997)
15. Barry, J.R., Messerschmitt, D.G., Lee, E.A.: Digital Communication, 3rd edn. Kluwer Academic Publishers, Norwell (2003)
16. Rusek, F., et al.: Scaling up MIMO: opportunities and challenges with very large arrays. IEEE Signal Process. Mag. 30(1), 40–60 (2013)
17. Tulino, A.M., Verdú, S.: Random matrix theory and wireless communications. Commun. Inf. Theory 1(1), 1–182 (2004)

Deployable Networks. An Alternative for Communications in Critical Environments

Marcelo Zambrano1,2(B), Ana Zambrano3, Juan Minango1, and Edgar Maya2

1 Instituto Tecnológico Superior Rumiñahui, Sangolquí, Ecuador
[email protected]
2 Universidad Técnica del Norte, Av. 17 de Julio 5-21, Ibarra, Ecuador
3 Escuela Politécnica Nacional, Av. Ladrón de Guevara E11-253, Quito, Ecuador

Abstract. One of the main problems that arise when disaster strikes is the little or no communication that usually exists within the affected area. For effective incident management, it is imperative that the tactical staff deployed within the affected area maintain permanent communication, both among themselves (for security and support issues) and with the strategic staff located outside the risk area (responsible for command and control of operations). This paper introduces the concept of Deployable Networks (DNETs), referring to communication networks that are autonomous, portable, resilient, highly scalable and, above all, can be implemented quickly, where and when they are needed. A prototype with a dynamic mesh network topology was designed and implemented, based on IEEE 802.11s (wireless mesh) technology, obtaining effective results in terms of communications and information exchange between the nodes that make up the network.

Keywords: Deployable networks · Mesh networks · Tactical networks

1 Introduction

A disaster can be defined as a damaging incident that negatively affects life, property and/or the environment within the social nucleus in which it occurs [1–3]. In essence, a disaster has a dynamic and unpredictable nature and, to manage it, the intervention of multiple agencies specialized in public security and humanitarian aid (firefighters, police, health services, Red Cross, civil defense, etc.) is necessary; these must join forces and coordinate their activities to deliver a comprehensive response that allows them to deal with the incident in the best possible way [4, 5].

Disaster management (DM) refers to those activities necessary to prevent or deal effectively with a disaster, covering the activities required before (prevention), during (response) and after the presence of a disaster (mitigation and recovery) [2, 4, 5]. During the response and recovery phases, the agencies involved in the DM send their tactical staff (brigade members) to the affected area, with the aim of controlling the situation and minimizing possible losses and damages. Within this type of scenario,


characterized by its criticality, instability and hostility, it is essential that the tactical staff maintain permanent communication with their peers both inside and outside the disaster area, enabling the command and control of operations and safeguarding the physical integrity of the brigade members [6, 7].

One of the main problems that arise during these DM phases is the scarce or null communication within the affected area, mainly due to failures or disabling of conventional communication systems such as public telephone networks, mobile networks, fiber optic links, etc. [8]. The Agency for the Regulation and Control of Telecommunications of Ecuador (ARCOTEL), in one of its reports issued on April 17, 2016, states that the main causes for the loss of telecommunications services during a disaster are cuts or failures in the supply of electricity, the collapse of telephone exchanges and repeater stations, cuts or damage to fiber optic networks, and the fall of telecommunications towers and antennas [9]. These communication problems not only hinder the success of operations, but also put the lives and integrity of those involved in and affected by the disaster (rescuers and rescued) at risk, due to the multiple dangers and new incidents to which they are exposed within the affected area (sharp materials, toxic gases, fires, falls, fractures, etc.).

Because of the above, it is necessary to develop a tool or system that enables and guarantees communications in those areas affected by a disaster where conventional communication networks are not operational. This paper introduces the concept of the Deployable Network (DNET), referring to communication networks that can be implemented easily and quickly, are highly scalable and, above all, resilient.

The document has been divided into five parts: first, an introduction to communications during the response and recovery phases of a disaster and the problems that arise during them; second, the characteristics that a deployable network must fulfil and its generic architecture; third, the implementation of a prototype and the functionality tests performed; fourth, the results obtained; and fifth, the conclusions and recommendations resulting from the development of this project.

2 Methodology

2.1 Characteristics

The main objective of DNETs is to allow the exchange of information within critical environments and/or situations where conventional communication services are not operational (such as in the case of disasters or emergencies). They have not been designed to compete with or replace conventional communication networks, but to make up for their absence or deficiency. To meet the proposed objectives, DNETs must have certain characteristics, which are described below.

Resilience

This is one of their main features. DNETs must guarantee communications within a hostile environment, that is, they must be tolerant to power failures, interference, noise,


DNETs must guarantee communications within a hostile environment: they must be tolerant to power failures, interference and noise, overcome the lack of line of sight between transmitters and receivers, and maintain their operation despite the disconnection or disappearance of nodes. From this point of view, a mesh-type physical topology was chosen for the communication network. Mesh topologies are characterized by maintaining multiple paths between their nodes, providing resilience and flexibility to the network and allowing it to overcome node disconnections and the lack of line of sight between nodes. Figure 1 shows how a mesh topology provides resilience to the network through multiple paths from the source node to the destination node; a minimal illustration of this redundancy follows the figure.

Fig. 1. Multiple paths between source node and destination node in a mesh network.
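The following sketch is not part of the prototype; it is a minimal, hypothetical illustration (using the networkx graph library and a made-up five-node mesh) of the redundancy described above: several node-disjoint paths exist between source and destination, so the loss of an intermediate node does not break the route.

```python
# Illustrative sketch only (not from the prototype): a hypothetical mesh in
# which the source still reaches the destination after an intermediate node fails.
import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths

# Hypothetical five-node mesh; edges represent usable radio links.
mesh = nx.Graph()
mesh.add_edges_from([
    ("SRC", "A"), ("SRC", "B"), ("A", "B"),
    ("A", "C"), ("B", "C"),
    ("C", "DST"), ("B", "DST"),
])

# Node-disjoint paths quantify the redundancy between source and destination.
paths = list(node_disjoint_paths(mesh, "SRC", "DST"))
print(len(paths), "node-disjoint paths:", paths)

# Simulate the failure of node C: the mesh still offers a route through B.
mesh.remove_node("C")
print("Path after node C fails:", nx.shortest_path(mesh, "SRC", "DST"))
```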

Autonomy. Another important characteristic of DNETs is the autonomy of their nodes: each network node must operate independently of the others, fulfilling the functions of transmitter, receiver and/or router. Additionally, DNETs must guarantee their implementation and operation independently of any other service, be it computing, communication or energy. For this, each node must have its own processing, communication and power capacity. In the case of energy, the nodes must be powered individually through portable energy sources such as battery banks, solar panels, etc.
Mobility. DNETs must be able to be transported and deployed wherever and whenever they are required, allowing the mobility of their users within the area of interest (brigade members within the area affected by the disaster) and covering geographical sectors of difficult access. The nodes that make up the communications network will be in constant movement, so the network topology will also change constantly.


For this reason, the protocol used to establish the mesh network must be dynamic, that is, it must allow the mobility of the nodes by permanently monitoring and managing the links between peers (Fig. 2).

Fig. 2. Mesh network topology.

It is worth emphasizing that, to provide mobility to its users and a quick and easy implementation, a DNET must be wireless.
Ease of Installation and Scalability. Easy and fast deployment and incorporation of new nodes is an indispensable feature for DNETs. Nodes must register and join the network automatically, allowing rapid deployment, scaling and increased coverage (Fig. 3).

Fig. 3. Scalability of a DNET.


2.2 Network Architecture
To comply with the characteristics described above, a generic network architecture has been designed with the following considerations:
• The network is divided into two segments: tactical and access.
  – The tactical segment has a mesh topology and is responsible for providing communication to users within the affected area (tactical staff).
  – The only function of the access segment is to interconnect the tactical segment with the outside, that is, with other communication systems and networks that enable communications between the tactical personnel deployed within the disaster zone and the strategic personnel responsible for planning and operations control located outside the danger zone. Basically, the access segment is made up of a communications gateway located at the safest point closest to the affected area.
• The brigade members deployed within the affected area must have the equipment necessary to operate as communication nodes of the network (RF radios with the capacity to form a mesh network, portable battery banks, solar panels, etc.).
• If necessary, unmanned vehicles, whether land, air or sea, can act as communication nodes at geographical points of difficult access, allowing the establishment of the communication links required to guarantee communications between tactical personnel within the target zone.
Figure 4 shows the generic architecture proposed for DNETs.

Fig. 4. DNET architecture.


3 Functionality Tests and Results
To verify the functionality of the architecture, communication tests were carried out on a DNET prototype based on IEEE 802.11s technology [10, 11]. This technology allows the implementation of networks with a dynamic mesh topology [12], with end-user nodes (nodes that act as transmitters and/or receivers) and backbone nodes (nodes that additionally operate as wireless access points for their users).
The equipment selected for the implementation of the backbone nodes was the MikroTik Metal 52 ac [13]. The tactical segment was made up of four communication nodes in constant movement, each equipped with a MikroTik Metal 52 ac, a portable lithium battery bank, a smartphone with the Android operating system [14] and a tactical backpack to facilitate the movement of users and the transport of network devices, as shown in Fig. 5 and Fig. 6. The smartphones represent the user nodes, each of them linked to its respective backbone node through a WiFi connection.

Fig. 5. MikroTik Metal 52 ac radio with tactical backpack for easy transport.

Fig. 6. Connection diagram of a MikroTik Metal 52 ac radio, battery bank and photovoltaic panel.

The access segment was implemented by means of a Huawei LTE CPE B310 router [14], which was physically connected to the closest backbone node (by means of a network patch cord) and provides internet service to the tactical segment through the advanced mobile service of the Claro company [15].
The tests were carried out on different physical configurations and in different geographical settings (Fig. 7). First, four PDF files of approximately 5 MB each were sent, one for each physical distribution of the tests and between the most distant communication nodes. For this, the File Transfer application [17], downloadable from Google Play [18], was used. The file exchange was successful. Second, several real-time voice communication tests were carried out using the WhatsApp application [16]. The results were also satisfactory; however, it should be noted that there were some packet losses, evidenced by random silences during the communications. Finally, real-time video tests were carried out, also using WhatsApp as the multimedia tool. As in the case of voice communications, there were interruptions during the video calls, but the quality remained acceptable. Additionally, the performance of the communication channels was monitored with the Wireshark tool [20]. Data transmission speeds were acceptable, ranging between 3 and 10 Mbps, with delays between 5 and 150 ms. It was observed that, for each hop within the network, the throughput decreased by approximately 15%.
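As a rough way to read these measurements, the per-hop degradation can be modelled as a geometric decay; the sketch below is a back-of-the-envelope illustration (not a measurement script), where the 10 Mbps starting rate and the 15% per-hop drop are simply the approximate values reported above.

```python
# Back-of-the-envelope sketch: throughput decaying by roughly 15% per mesh hop,
# using the approximate figures reported in the tests (assumed values).
base_throughput_mbps = 10.0   # best rate observed over a single hop
loss_per_hop = 0.15           # approximate drop reported for each extra hop

for hops in range(1, 7):
    effective = base_throughput_mbps * (1 - loss_per_hop) ** (hops - 1)
    print(f"{hops} hop(s): ~{effective:.1f} Mbps")
```

Under these assumptions the rate falls to roughly half of its single-hop value after five hops, which is consistent with the practical limit suggested in the conclusions.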


Fig. 7. Geographical scenarios and physical configurations of the tests on the DNET prototype.

4 Conclusions
In this paper the concept of Deployable Networks (DNETs) is introduced as an alternative for communications in environments where conventional communication networks are not available (as in the case of disasters or emergencies). The design and implementation of a prototype characterized by its autonomy, scalability, portability and resilience are described; the prototype uses a dynamic mesh network topology to grant mobility to its users and to generate multiple links between the nodes that comprise it.
The DNET prototype was implemented based on 802.11s technology with four communication nodes organized under different distributions (Fig. 5). The architecture was validated through functionality tests in which data, voice and video packets were exchanged in real time, obtaining positive results in all cases. It was observed that, for each hop (passage of information from one node to another) within the network, there is a drop of between 10 and 15% in throughput; for this reason, it is concluded that in practice there should be a maximum of 5 hops. As future work, we are working on the simultaneous use of different communication technologies and automatic switching between them, depending on the best signal quality perceived by the nodes.

References
1. Wayne, B.: Guide to Emergency Management and related terms, definitions, concepts, acronyms, organizations, programs, guidance, executive orders & legislation. FEMA (2008)


2. UNHCR: Handbook for Emergencies. UNHCR (2018)
3. FEMA: Course IS-0230.d: Fundamentals of Emergency Management (2021)
4. ISDR: International Strategy for Disaster Reduction - The Americas (2021). http://www.eird.org/esp/terminologia-esp.htm
5. BSI: Crisis management - Guidance and good practice. British Standards Limited (2014)
6. USAID: Basic Incident Command System Course (2021)
7. AENOR: UNE-ISO 22320. Protection and safety of citizens - Emergency management - Requirements for response to incidents. AENOR, Madrid (2020)
8. Palttala, P., Boano, C., Lund, R., Vos, M.: Communication gaps in disaster management: perceptions by experts from governmental and non-governmental organizations. J. Contingencies Crisis Manag. 20, 2–12 (2012)
9. ARCOTEL: Rule that regulates the presentation of contingency plans for the operation of public telecommunications networks by service providers of the General Telecommunications Regime (2016)
10. Parvin, J.R.: An overview of wireless mesh networks. In: Wireless Mesh Networks - Security, Architectures and Protocols (2019)
11. Majumder, A., Roy, S.: Implementation of adaptive mobility management technique for wireless mesh network to handle internet packets. In: Kundu, S., Acharya, U.S., De, C.K., Mukherjee, S. (eds.) Proceedings of the 2nd International Conference on Communication, Devices and Computing. LNEE, vol. 602, pp. 97–107. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-0829-5_10
12. Samandarov, E.K.: Wireless mesh network. Eastern Eur. Sci. J. (2019)
13. MikroTik: MikroTik Metal 52 ac. https://mikrotik.com/product/RBMetalG-52SHPacn. Accessed 02 Nov 2022
14. Google: Android. https://www.android.com/intl/es_es/. Accessed 02 Nov 2022
15. Huawei: Huawei. https://consumer.huawei.com/mx/routers/b310/specs/. Accessed 02 Nov 2022
16. Telmex: «Claro Ecuador» Wireless Internet. https://clarointernet.ec/internetInalambrico. Accessed 02 Nov 2022
17. Delite Studio: File Transfer. https://play.google.com/store/apps/details?id=com.delitestudio.filetransferfree&hl=en&gl=US. Accessed 02 Nov 2022
18. Google: Google Play. https://play.google.com/store/apps?hl=es. Accessed 02 Nov 2022
19. WhatsApp: WhatsApp. https://www.whatsapp.com/?lang=es. Accessed 02 Nov 2022
20. Wireshark: Wireshark. https://www.wireshark.org/. Accessed 02 Nov 2022

Application for the Study of Underwater Wireless Sensor Networks: Case Study
Fabián Cuzme-Rodríguez1,2(B), Angel Velasco-Suárez1, Luis Suárez-Zambrano1, Mauricio Domínguez-Limaico1, Henry Farinango-Endara1, and Mario Mediavilla-Valverde1
1 Universidad Técnica del Norte, Av. 17 de Julio, Ibarra 100105, Ecuador
[email protected]
2 Universidad de Málaga, Málaga, Spain

Abstract. Underwater Wireless Sensor Network (UWSN) technology has a strong presence in current research, and many applications have been developed for monitoring underwater environments such as oil installations, pollution control and natural disasters, among others. Considering that about 70% of the planet is covered by water, it is important to understand how to carry out communications in these environments, which is challenging because of limitations such as low bandwidth, low propagation speed, high delays, complex deployments and the difficulty of recharging energy, among the most important. This study addresses the development of an application that allows simulating UWSNs in a controlled environment and that considers attributes of the ISO/IEC 25010 standard covering software product quality. This will enable telecommunications students to simulate, learn about and understand UWSNs in an easier and simpler way than with more robust simulators such as OMNeT++, NS2 or NS3.

Keywords: UWSN · SNR · SDR · TWSN · Software

1 Introduction

UWSNs (Underwater Wireless Sensor Networks) arise from the need to explore and monitor the underwater environment in real time, in order to provide agility in the development of countless applications for scientific, commercial or military purposes [3,5,11]. According to [5], a UWSN consists of sensor nodes whose number depends on the area to be covered, considering factors such as transmission range and network performance.
1.1 UWSN Architectures

In general, there are two common communication architectures for UWSNs: two-dimensional and three-dimensional. The three-dimensional architecture can be deployed with an Autonomous Underwater Vehicle (AUV) or, in many cases, with several of them. Figure 1 shows a three-dimensional UWSN architecture.

Fig. 1. Generic architecture of UWSN.

Two-Dimensional UWSN. The main characteristic of the 2D architecture is that all nodes are anchored to the bottom of the ocean. The information is gathered from ordinary nodes using a uw-sink. For this purpose, the uw-sinks have two acoustic transceivers, one vertical and the other horizontal. The sink communicates with the sensor nodes via the horizontal transceiver, either to collect the monitored data or to provide instructions and configuration data to the sensors. It then transmits the information through the vertical transceiver to a surface station, which in this case can be considered the gateway. The surface station can communicate with onshore and surface sinks. Sensors can communicate with the uw-sink directly or over multi-hop paths [3,6]. In a peer-to-peer link, each sensor delivers data directly to its corresponding sink node; however, this is not the best option in terms of energy efficiency. In a multi-hop connection, by contrast, the data is relayed from the source sensor through intermediate sensors until it reaches the sink. This reduces energy consumption and expands network capacity, but it also makes routing more challenging.
Three-Dimensional UWSN. In this case the nodes are located at different depths: each node is anchored with a cable to the bottom of the ocean and is equipped with a buoy that provides flotation, which keeps the nodes as static as possible. The sensor's depth may be adjusted by modifying the length of the wire that connects the sensor to the anchor, thanks to an electrically controlled motor inside the sensor. The influence of ocean currents on this depth-adjustment process is a problem that has to be addressed in such an architecture. Multi-hop routes regulate communication because some of the sink node(s) are surface-based [3,6].


Three-Dimensional UWSNs with Autonomous Underwater Vehicles. These networks are made up of many static sensors interacting with a number of autonomous underwater vehicles (AUVs). AUVs support data collection and processing, can additionally serve as routing devices between fixed sensors, and perform reconfiguration tasks in the network.
1.2 Characteristics and Challenges of a UWSN

Table 1 briefly lists the main characteristics of underwater WSNs and compares UWSNs with terrestrial WSNs (TWSNs) [8].

Table 1. Differences between UWSNs and TWSNs.

Characteristic        | UWSNs                                                                                   | TWSNs
Bandwidth             | Inversely proportional to communication distance                                        | Unaffected by distance
Architecture          | Mostly 3D                                                                               | Mostly 2D
Link quality          | Low (high BER and PLR)                                                                  | Depends on the application, but relatively better
Deployment            | Sparse deployment                                                                       | Dense deployment
Recharge difficulty   | Hard                                                                                    | Depends on the application
Frequency             | Low frequency (Hz, kHz), because high-frequency signals are quickly absorbed in water   | High frequency (MHz, GHz)
Communication medium  | Acoustic or optical signals                                                             | Radio communication
Transmission range    | 100 [m]–10,000 [m]                                                                      | 10 [m]–100 [m]
Propagation delay     | High, due to the low-speed acoustic communication                                       | Low
Topology              | Highly dynamic due to water currents                                                    | Static or slightly dynamic
Propagation speed     | Underwater sound speed (about 1500 [m/s])                                               | Radio wave speed (about 3 × 10^8 [m/s])

Note 1. Adapted from [4, 8].

Furthermore, [3] mentions other challenges, such as:
– Underwater sensors are expensive in terms of equipment, finances, deployment and upkeep.
– Networking elements (see Sect. 1.1).
– The available memory is limited by the capacity of the onboard storage device.
– Due to fouling, corrosion and large sensor size, sensors are susceptible to failure.
– Multipath and fading degrade the quality of the channel.
– Due to propagation delay and sound speed, time synchronization, and consequently node localization, are challenging to accomplish under water.
– Changes in sound speed brought on by water conditions: the speed of sound is affected by any modification to these factors, which might lead to inaccurate location prediction.
1.3 Applications of UWSN

Underwater network applications have a classification similar to that of terrestrial sensor networks, as specified below:
Scientific Applications. These monitor environmental factors such as geological processes at the seafloor, water properties (temperature, salinity, oxygen levels, amount of bacteria and other contaminants, dissolved materials) and populations of animals such as microorganisms, fish or mammals [3].
Industrial Applications. These monitor and regulate business operations, such as the use of subsea equipment for oil extraction [3].
Military and Homeland Security Applications. These may involve guarding and monitoring port infrastructure or ships in foreign ports, as well as communication links between submarines. According to [3], the main applications developed in this area are:
– Operations in mines, in shallow waters.
– Generation of early warnings of natural events.
– Subsea pipeline monitoring.
– Protection of marine platforms and power plants.

The rest of the article is organized as follows. Section 2 deals with the materials and methods applied in the study, where the application design architecture is defined. Section 3 explains the development, emphasizing the use case diagrams and the mathematical basis for the software. Section 4 shows the results achieved so far with the graphical interfaces and the simulations of the underwater channel establishing the best route, together with a discussion of the results with respect to other simulation software. Finally, Sect. 5 presents the conclusions of the work carried out so far.

2 Materials and Methods

This article uses applied research, which makes it possible to address a technology-supported learning need in the area of wireless networks. This need focuses on involving students and teachers of technological careers, especially telecommunications, and on using an application to understand the basics of the operation of underwater wireless sensor networks. The classical life cycle model is used, better known as the waterfall methodology, which includes the following phases: analysis, design, implementation and testing of the system [10].

2.1 Analysis Phase

In this phase, the requirements and characteristics that the software should have are specified, such as the physical variables of the water to be considered in the mathematical calculations that the algorithm will perform and the position parameters of the sensors in a specific area; the type or types of UWSN architectures that the system will be able to simulate are also decided. It should be noted that no specific wireless sensor model is taken as a reference: all the results presented in the application are independent of the characteristics of the sensors used; in other words, the application simulates the networks based on generic underwater wireless sensors.
2.2 Design Phase

In this stage, the application's operating logic is established, and a model that graphically explains the programming structure is proposed, together with a use case diagram explaining the options that the application will provide and in which user interaction will take place. Figure 2 shows the architecture diagram that represents the system and its interaction with the user. For the application design, use case diagrams of the Unified Modeling Language (UML) are used.

[Figure 2 block diagram: user interaction (Create Sensors, Select Architecture, Select Routing Protocol, Route and Transmit, Define parameters) and autonomous functions (input with data validation of correct/incorrect or predefined parameters, mathematical calculation, and output of the best path, successful delivery rate and architecture diagram).]

Fig. 2. Application architecture

2.3 Coding Phase

A couple of important aspects are considered here. The first is the physical design of the code itself, which is done in such a way that response times are minimal, that is, working through the console and plotting graphics with few details. The physical design naturally follows the logical design proposed in the previous stage.


The second criterion is the physical design of the graphical interface. Although it plays a minor role in the performance of the simulations, it provides the aesthetic image of the application and is often decisive in attracting the user's attention; this interface must be intuitive for the interested party, to facilitate the understanding of the application. The complete implementation is carried out in MATLAB® Student Suite, used both as the programming language and as the programming environment.
2.4 Testing Phase

Finally, at this stage, in addition to confirming the implementation of all the requirements proposed above, certain characteristics proposed by the product quality model defined by ISO/IEC 25010 will be taken into account. The taxonomy of this model proposes attributes and sub-attributes of quality from which the ideal ones will be selected to measure the performance of the final application, so that a survey can be conducted to determine the students’ experience of use. Obviously, no major emphasis will be placed on the areas of lesser importance. It is important to emphasize that, depending on the results obtained at this stage, it is possible that the application and even the Analysis and/or Design may be modified.

3 Development

The functional requirements of the application are supported by surveys addressed to students and professors of the Wireless Communications course of the Telecommunications program of the Universidad Técnica del Norte; this helps to understand the process that the user has to follow to achieve a successful simulation with the software. The most relevant results of the survey, based on the ISO/IEC 25010 standard, are broken down below. Figure 3 shows the level of interest that users have in the proposed software.

Fig. 3. Students' interest in the application.

Figure 4 indicates the priority level of the attributes that users consider important in the proposed software.


Fig. 4. Quality attributes of greater relevance for the software.

3.1 Use Case Diagram

The use case diagram is structured to fulfill some specific functions that are detailed below (See Fig. 5):

Fig. 5. Use case diagram of the application.

Create Sensors. Allows the desired sensors to be created in a floating window. This window offers functions such as importing data, generating nodes randomly, inserting them manually, and exporting the data of the created nodes. Figure 7 shows the interface used to create the nodes.


Enter Node Parameters. Allows the user to enter the parameters required for the system to perform the relevant calculations. The initial parameters refer to the values of transmission power, signal frequency and communication radius of each sensor, in addition to the distance between the nodes participating in the path calculated by the software, obtained by relating the positions of the nodes through their ordered pairs in the plane [1]. Equation (1) measures the distance between points A and B:

d(A(x_1, y_1), B(x_2, y_2)) = \sqrt{(x_2 − x_1)^2 + (y_2 − y_1)^2}    (1)

Run Routing Protocol. Allows a route to be traced between a source and a destination node using a routing protocol. A shortest-route method is used as the routing protocol: first the distances between all nodes that can communicate are computed, and then Dijkstra's algorithm [2] is applied to calculate the route.
Transmit Data. For the simulation of a transmission from a source (S) to a destination (D), it is necessary to find the channel model of each link. In [8] the SNR (signal-to-noise ratio) is considered, and from it the successful delivery ratio is established.
Part 1: Signal-to-Noise Ratio. Equation (2) shows the SNR per bit, γ, of the acoustic signal at the receiving node, where S_{level}, T_{loss}, N_{level} and D_{index} are the signal level, transmission loss, noise level and directivity index, respectively. All values are in decibels [dB], and D_{index} is 0, since omnidirectional hydrophones are usually used in underwater environments.

γ = S_{level} − T_{loss} − N_{level} + D_{index}    (2)

S_{level} defines the effective sound level and is calculated by Eq. (3), where the transmitter power P is in watts [W] and the transmission range r is in meters [m]:

S_{level} = 10 (\log P − \log(4πr^2) − \log(0.67 × 10^{−18}))    (3)

T_{loss} represents the rate at which sound energy is lost. For a signal of frequency f [kHz], the underwater transmission loss T_{loss} at a distance d [m] is calculated by Eq. (4):

T_{loss} = 20 \log d + α(f) × d × 10^{−3}    (4)

where α(f) is the sound absorption coefficient in seawater, in decibels per kilometer. Thorp's formula, Eq. (5), may be applied to determine α(f):

α(f) = \frac{0.11 f^2}{1 + f^2} + \frac{44 f^2}{4100 + f^2} + 2.75 × 10^{−4} f^2 + 0.003    (5)

Finally, N_{level} is calculated with Eq. (6), which uses a practical approximation of noise in underwater environments accounting for the sum of countless effects from different sources (turbulence, shipping and waves):

N_{level} = 50 − 18 \log f    (6)
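As a quick numerical check of Eqs. (2)–(6), the sketch below (not part of the MATLAB application; written in Python purely for illustration) evaluates the SNR for assumed example values of P, r, d and f.

```python
# Sketch of Eqs. (2)-(6): P in watts, r and d in metres, f in kHz; levels in dB.
import math

def source_level(P, r):                      # Eq. (3)
    return 10 * (math.log10(P) - math.log10(4 * math.pi * r ** 2)
                 - math.log10(0.67e-18))

def thorp_absorption(f):                     # Eq. (5): alpha(f) in dB/km
    return (0.11 * f ** 2 / (1 + f ** 2) + 44 * f ** 2 / (4100 + f ** 2)
            + 2.75e-4 * f ** 2 + 0.003)

def transmission_loss(d, f):                 # Eq. (4)
    return 20 * math.log10(d) + thorp_absorption(f) * d * 1e-3

def noise_level(f):                          # Eq. (6)
    return 50 - 18 * math.log10(f)

def snr(P, r, d, f, directivity=0.0):        # Eq. (2), D_index = 0 (omnidirectional)
    return source_level(P, r) - transmission_loss(d, f) - noise_level(f) + directivity

# Assumed example: a 10 W transmitter and a 100 m link at 20 kHz.
print(round(snr(P=10, r=100, d=100, f=20), 2), "dB")
```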


At this point, Eqs. (3) to (6) can be combined in Eq. (2) to express the SNR γ in terms of the transmit power:

γ = 10 (\log P − \log(4πr^2) − \log(0.67 × 10^{−18})) − 20 \log d − (\frac{0.11 f^2}{1 + f^2} + \frac{44 f^2}{4100 + f^2} + 2.75 × 10^{−4} f^2 + 0.003) × d × 10^{−3} − 50 + 18 \log f    (7)

where P is the transmitted power, r is the transmission range, d is the transmission distance (both can be interpreted as the same value), and f is the frequency [8].
Part 2: Relationship Between the SNR and the Successful Delivery Ratio. The goal of the second component of the channel model is to calculate the successful delivery ratio, a measure of how reliable the UWSN links are. For the chosen models, the BER of BPSK in a Rayleigh fading channel is defined by Eq. (8):

BER(γ) = \frac{1}{2} (1 − \sqrt{\frac{10^{γ/10}}{1 + 10^{γ/10}}})    (8)

where BER(γ) is the estimated rate of erroneous bits received from a data flow over a communication channel that has been altered by noise, interference, distortion or bit synchronization errors, and γ is the SNR [8]. Based on the error rate of a single bit, given in Eq. (8), the single-bit successful delivery ratio is obtained as

P^{1}_{success}(γ) = 1 − BER(γ) = 1 − \frac{1}{2} (1 − \sqrt{\frac{10^{γ/10}}{1 + 10^{γ/10}}}) = \frac{1}{2} + \frac{1}{2} \sqrt{\frac{10^{γ/10}}{1 + 10^{γ/10}}}    (9)

where P^{1}_{success}(γ) is the probability of successfully delivering a single bit when the SNR is equal to γ. Consequently, the chance that a packet of m bits is successfully delivered is defined as P^{m}_{success}(γ). According to [8], the successful delivery ratio of a packet of size m bits may be calculated from the value of γ as specified in Eq. (10):

P^{m}_{success} = (\frac{1}{2} + \frac{1}{2} \sqrt{\frac{10^{γ/10}}{1 + 10^{γ/10}}})^{m}    (10)
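The following short sketch (again Python for illustration only, not the authors' MATLAB code) evaluates Eqs. (8)–(10) for a packet of an assumed size and a few SNR values.

```python
# Sketch of Eqs. (8)-(10): BPSK over a Rayleigh fading channel, gamma in dB.
import math

def ber(gamma_db):                           # Eq. (8)
    g = 10 ** (gamma_db / 10)
    return 0.5 * (1 - math.sqrt(g / (1 + g)))

def packet_success(gamma_db, m):             # Eqs. (9)-(10): packet of m bits
    return (1 - ber(gamma_db)) ** m

# Assumed example: a 512-bit packet over links of decreasing quality.
for gamma_db in (20, 10, 5):
    print(gamma_db, "dB ->", round(packet_success(gamma_db, m=512), 6))
```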

Display Results. It allows generating readable values for the user that indicate the result of the simulation, the same values that will have been calculated in the previous step using the process described in Algorithm 1.


Algorithm 1. Successful delivery ratio of a packet
1: Get P, f, m
2: Compute d from (1)
3: Compute RoutingProtocol()
4: Compute S_level(P, r) from (3)
5: Compute T_loss(d, f) from (4)
6: Compute N_level(f) from (6)
7: Compute γ(S_level, T_loss, N_level) from (2)
8: Compute BER(γ) from (8)
9: Compute P^m_success from (10)
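A rough end-to-end sketch of the routing step in Algorithm 1 is given below (illustrative Python, not the MATLAB implementation): nodes within an assumed 30 m communication radius are linked with Euclidean-distance weights, Eq. (1), and Dijkstra's algorithm returns the shortest source-destination path; the per-link SNR and delivery-ratio functions sketched earlier could then be applied to each hop of that path.

```python
# Illustrative sketch of the routing step: random 2D nodes (in the spirit of the
# example in Fig. 8), Euclidean-distance edges within an assumed radius, and
# Dijkstra's shortest path between a source and a destination.
import math
import random
import networkx as nx

random.seed(1)
radius = 30.0                                                 # assumed range [m]
nodes = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(20)}

def distance(a, b):                                           # Eq. (1)
    (x1, y1), (x2, y2) = nodes[a], nodes[b]
    return math.hypot(x2 - x1, y2 - y1)

G = nx.Graph()
G.add_nodes_from(nodes)
for i in nodes:
    for j in nodes:
        if i < j and distance(i, j) <= radius:
            G.add_edge(i, j, weight=distance(i, j))

source, dest = 0, 19
if nx.has_path(G, source, dest):
    path = nx.dijkstra_path(G, source, dest, weight="weight")
    print("Shortest path:", path)
    print("Total length [m]:", round(nx.dijkstra_path_length(G, source, dest), 1))
else:
    print("No multi-hop route exists for this random deployment.")
```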

Edit Parameters. Allows the system to be reused without a restart; that is, the software allows modifications at any time.
3.2 Interface Coding

At the moment, the Graphical User Interface (GUI) of the future simulator has been programmed. Figure 6 shows the layout of the main system window, which contains the "Create Sensors" button, whose function is to open the window shown in Fig. 7; the axes in charge of plotting the distribution of created nodes; a panel where the parameters required for the simulation are entered; a routing panel where a protocol is used to create a path between an origin and a destination; a transmission panel to start the simulation; and, finally, a panel where the results are displayed.

Fig. 6. Application’s main GUI.


As previously mentioned, Fig. 7 shows a floating window where the sensor creation stage is carried out. This process can be performed manually, by importing data, or by means of the implemented random generation function.

Fig. 7. Application’s secondary GUI.

As an example of the application in operation, Fig. 8 shows a scattering of 50 randomly generated UWSN nodes in an area of 100 × 100 [m] with a maximum radius of 30 [m]. A routing method is applied to this network of nodes by calculating the shortest path; this is done with Dijkstra's algorithm.

Fig. 8. Sensor scattering example with shortest path highlighted.

4 Discussion

The application will allow the user to easily create a set of nodes located in a plane (2 or 3 dimensions), modify the attributes of each node, create a route between two nodes and check the efficiency of that route. The importance of simulation software in wireless communications lies in reducing the costs and time of implementing a network. What differentiates the proposed software from other programs such as OMNeT++, NS2 or NS3, which offer greater robustness in network simulation [9], is the learning curve that those tools imply. Under this criterion, the proposal seeks to support teachers and students who are looking for a tool with a simple and intuitive interface that supports education in this area. The proposed software addresses the functional suitability, usability, security and portability characteristics of the product quality model defined by ISO/IEC 25010 [7].

5 Conclusions

Simulators are an essential part of any development in the field of engineering, and wireless communications in underwater environments are no exception, since simulators allow tests to be carried out in controlled environments before moving on to a real one. Under this criterion, we present the possibility of creating an application that allows this type of scenario to be simulated in a practical and simple way. Although the development is nearing completion, students are expected to address issues about these environments and to propose future solutions in these areas, allowing academia, and especially telecommunications programs, to open the way to new technological solutions. The parameters defined for the development of the application allow the creation of nodes in a dynamic and parameterizable way. Furthermore, the software can apply a routing protocol between a source node and a destination node, and the path of the established channel allows a data transmission from one node to another to be simulated.

References
1. Cáceres, M., Moreno, Y., Tello, J., Vargas, I.: Cálculo de la distancia entre dos puntos. In: Diseño, implementación y evaluación de unidades didácticas de matemáticas en MAD, vol. 3, pp. 67–129 (2018)
2. Dijkstra, E.W.: A note on two problems in connexion with graphs. Numerische Math. 1, 269–271 (1959). https://doi.org/10.1007/BF01386390
3. El-Rabaie, E.S., Nabil, R., Alsharqawy, M.: Underwater wireless sensor networks (UWSN), architecture, routing protocols, simulation and modeling tools, localization, security issues and some novel trends. CiiT Int. J. Network. Commun. Eng. 7(8), 335–354 (2015)


4. Han, G., Jiang, J., Sun, N., Shu, L.: Secure communication for underwater acoustic sensor networks. IEEE Commun. Mag. 53(8), 54–60 (2015). https://doi.org/10.1109/MCOM.2015.7180508
5. Haque, K.F., Kabir, K.H., Abdelgawad, A.: Advancement of routing protocols and applications of underwater wireless sensor network (UWSN)-a survey. J. Sens. Actuator Netw. 9(2), 1–31 (2020). https://doi.org/10.3390/jsan9020019
6. Ibrahim, D.M., Hussein, D.: Modelling and performance enhancement of underwater wireless sensor networks by Petri nets. Ph.D. thesis, Tanta University (2014). https://www.researchgate.net/publication/280609377
7. ISO: Systems and software Quality Requirements and Evaluation (SQuaRE) (ISO/IEC 25010) (2011). https://www.iso.org/obp/ui/#iso:std:iso-iec:25010:ed-1:v1:en
8. Kao, C.C., Lin, Y.S., Wu, G.D., Huang, C.J.: A comprehensive study on the internet of underwater things: applications, challenges, and channel models. Sens. (Switz.) 17(7) (2017). https://doi.org/10.3390/s17071477
9. Patel, R., Patel, R.L.: Survey on network simulators (2018). https://www.researchgate.net/publication/333262623
10. Prokopets, E.: The SDLC process explained: key phases and methodologies (2020). https://www.edvantis.com/blog/software-development-process/
11. Tan, H.P., Seah, W.K.G., Diamant, R., Waldmeyer, M.: A survey of techniques and challenges in underwater localization. Ph.D. thesis, Singapore Management University (2011)

e-Learning

Learning Performance Indicators: A Statistical Analysis on the Subject of Natural Sciences During the COVID-19 Pandemic at the Tulcán District
Marcela Aza-Espinosa1, Laura Guerra Torrealba2, Erick Herrera-Granda3, María Aza-Espinosa4(B), Marco Burbano-Pulles3, and Javier Pozo-Burgos5
1 Instituto de Posgrados, Pontificia Universidad Católica del Ecuador Sede Ibarra, Av. Jorge Guzmán Rueda, 100-112, y Padre Aurelio Espinosa Polit, Ibarra, Ecuador
2 Escuela de Ingeniería, Pontificia Universidad Católica del Ecuador Sede Ibarra, Av. Jorge Guzmán Rueda, 100-112, y Padre Aurelio Espinosa Polit, Ibarra, Ecuador
3 Facultad de Industrias Agropecuarias y Ciencias Ambientales, Universidad Politécnica Estatal del Carchi, Calle Antisana y Av. Universitaria, Tulcán, Ecuador
4 Centro de Posgrado, Universidad Politécnica Estatal del Carchi, Calle Antisana y Av. Universitaria, Tulcán, Ecuador
[email protected]
5 Facultad de Comercio Internacional, Administración y Economía Empresarial, Universidad Politécnica Estatal del Carchi, Calle Antisana y Av. Universitaria, Tulcán, Ecuador

Abstract. Learning systems during the COVID-19 period were modified in terms of methodological strategies as well as the emergency remote teaching approach adopted by teachers at primary and higher education institutions. As a consequence, educators had to teach, in a limited way, the basics of the curriculums prioritized during the health emergency. Natural Sciences was no exception, and the majority of educators in this field of study have notably identified low academic performance during the COVID-19 pandemic. In Ecuador, expected learning results are obtained through the evaluation of performance indicators, so in this research project a statistical analysis was performed using the scores for these indicators obtained from samples of middle school students of the Carchi province, with the aim of identifying the population strata significantly affected by the application of remote learning and the characteristics leading to low academic performance. The data gathered were statistically evaluated and the test was calibrated using Item Response Theory; significant differences among the variables and performance indicators were analyzed via the students' scores using ANOVA, pairwise t-tests and t-tests. The difference tests were carried out using the weighted score of each student for each indicator as continuous variables, while the categorical variables were internet availability, the students' residence location and the quintile they belong to. The results proved that there are significant differences in student scores depending on internet availability and the zone where they live: academic performance was significantly higher for those students who had a stable internet connection in their homes and resided in urban zones during the pandemic.


Keywords: Item response theory · Learning performance indicators · Validity · Education COVID-19

1 Introduction
Learning performance indicators correspond to performance criteria for the skills developed in a project. When educators implement their own methodological strategy in the learning process, the use of this instrument is suggested so that the students' performance and progress within a project become evident [1, 2].
Learning systems during the COVID-19 period were modified in terms of methodological strategies as well as the emergency remote teaching approach adopted by teachers at primary and higher education institutions [3]. As a consequence, educators had to teach, in a limited way, the basics of the academic curriculums prioritized during the health emergency [4]. Commonly, low academic performance occurs because of changes in learning processes, among other hindering factors, resulting in a lack of content assimilation [5]. Natural Sciences is no exception, since the majority of educators in this field have notably identified low academic performance in their students.
This research work begins with the need to confirm whether the knowledge acquired during remote learning has yielded significant results. A performance indicator evaluation instrument was developed which included 10 indicators applicable to the virtual education approach used during the COVID-19 pandemic, together with questions that helped to raise the categorical variables defining the characteristics of students in the Carchi province that significantly impacted their academic performance. With the aim of performing a statistical analysis of the learning performance indicators for Natural Sciences during the COVID-19 pandemic, the following specific objectives were applied to a sample of 584 first-year middle school (EGB) students from the Carchi province during the 2020–2021 academic period:
• Analyzing the statistical techniques and learning-performance-indicator evaluation methodologies reported in the scientific literature.
• Designing an instrument to determine the learning performance indicators, the characteristics of the students, and the extent to which they respond to emergency virtual learning.
• Calibrating the instrument through statistical testing, guaranteeing its usability during the research project.
• Determining implicit differences among students' performance indicators with respect to the internet accessibility, quintile and residence location variables.
In order to perform a statistical study appropriately, its aim and expected results should be defined. Furthermore, a set of hypotheses must be formulated and tested by means of an inferential variable-correlation analysis; therefore, a sampling process must be duly run, together with data processing and assumption verification, so as to select the most adequate test. When the gathered data are used directly, they may be biased or contain atypical observations that hinder assertive conclusions. For this reason, in this study statistics was used as the data processing tool, and effective metrics were obtained from the inferential processes along with probability indexes for result verification, providing the reliability needed to extend the sample results to the whole student population.


The study is organized in four sections. Section 1 provides a brief introduction to the work performed and the state-of-the-art related work. Section 2, Materials and Methods, mainly comprises the procedures and statistical techniques used in the analysis. Section 3 presents the results: the indexes that guarantee the validity of the instrument through its calibration, as well as the probability indexes that confirm the existence of differences for each performance indicator and for the categorical variables describing the sample. Finally, Sect. 4 details the conclusions and future lines of work.
1.1 Related Work
The pandemic transformed learning methods at academic institutions, which had to make the necessary adjustments to provide students with educational continuity using a variety of tools in order to achieve the expected learning performance during the academic year [6]. "It is necessary to make a series of decisions and have resources that challenge school systems, academic centers and educators" [7].
To continue with the learning process, educators and students used online resources such as the Internet, which offers a vast number of pedagogical resources and knowledge, including communication tools such as digital platforms. It is worth noting that Ecuador already had a digital education infrastructure, which became a priority in previous government terms with the aim of improving learning quality in academic institutions, despite utilization disadvantages in urban and rural areas. Although digital-access gaps have decreased thanks to wireless connectivity, considerable gaps remain, becoming a source of social inequality for present and future generations. On the other hand, socio-economic and cultural differences are latent and can be seen in the IT devices available at home, considering that such access levels make it highly likely that several members of a single family require access to the same device in order to work or attend school lessons [7].
Moreover, it is vital to mention that Internet access should be strengthened in disadvantaged populations, since wireless internet is a prepaid service providing limited air time for web surfing and learning platforms, resulting in inequality of access to information and knowledge. Another differentiating factor is adolescent age with regard to online activity: online teenage activity increases significantly because socialization and entertainment take place online, so children are at a disadvantage when trying to study online [8, 9].
The work of [7] states that nowadays the student population has neither the specific knowledge nor the aptitudes to put self-care strategies into practice, and that there is a high level of unfamiliarity with the learning opportunities that good use of the Internet provides. Additionally, a great number of educators have not been properly trained to lead and promote virtual learning continuity, despite the existence of standards that have only been adopted by some academic institutions to modify initial teacher-training processes aimed at giving future generations of educators the skills required for twenty-first-century education.


Evaluation Process Adaptation. A relevant academic aspect is the evaluation and monitoring of the learning process, together with feedback, with the purpose of becoming familiar with students' learning processes and taking pedagogical action to enhance them as they are adjusted to the educational reality. Such evaluations are vital since they provide information and usability to educational systems, so ideal conditions are required for them to be reliable and equitable when applied. According to [10] and [11], it is fundamental to analyze several formative evaluation methods in a virtual academic context, considering the different means used to provide educational services. Several regions of the country were affected by the pandemic, requiring prompt, adequate and innovative responses by the Ministry of Education with a view to ensuring learning continuity, curriculum management and independent evaluation processes.
Factors Related to Learning Performance. Learning performance results are obtained from four knowledge areas: Mathematics, Language and Literature, Natural Sciences and Social Studies, evaluated with the programs "Ser Bachiller" and "Ser Estudiante" 2017–2018, where potential relations of the global average within the four knowledge fields are explored. Results are presented around five analysis categories: inclusive environments, learning time, instruction quality, family support and physical resources [12].
Evaluation Instruments. In the study by [13], an instrument for measuring educators' teaching efficacy was developed with the aim of supporting and validating this measurement and examining the relationship between efficacy and observable educator performance. Two substantial self-efficacy factors were identified in this study, corresponding to the theoretical model of [14]. Moreover, a multi-aspect, multi-method discriminant analysis of convergent validity based on educators' efficacy, verbal skills and flexibility was developed. Finally, feedback performance was observed by comparing the teaching time of educators that scored high and low in terms of efficacy; this analysis was based on the variability factors found in a group of 90 educators. The results proved that educators' efficacy is multidimensional and consists of at least two dimensions, in line with Bandura's self-efficacy model. Furthermore, the educators' efficacy measures converge across different teaching methods.
Item Response Theory measures latent, unobservable traits in multivariate data analysis with the aim of measuring academic performance, aptitudes and skills, and of supporting test construction. In the research work "Design and validation through the Item Response Theory from the instrument to evaluate psychological capital at IPSICAP", organizational causal relationships, mediation, and individual, group and organizational correlations were studied [15]. Consequently, the theory yields true-score estimates while the validity and reliability of the analysis remain indisputable.
Finally, applied studies on educational evaluation such as [16] implemented the ANOVA test to analyze data obtained from evaluation instruments. That study was applied to 34 schools in 7 regions of Turkey, covering 208 educators and 1830 students. The survey consisted of 30 items, obtaining a global Cronbach's alpha coefficient of 0.86, with values ranging from 0.89 to 0.94 in the educators' group.


The multivariate normality of the data was examined through Q-Q plots and, after that, a one-way ANOVA test was used to analyze the data by learning area and learning experience.

2 Materials and Methods
The research presented here has a difference-finding scope: "This type of study aims to identify significative differences between or within the groups defined by one or more categorical variables" [17] (p. 207). A mixed approach was adopted, applying deductive and inductive methods, and the following three phases were contemplated for its development.
2.1 Evaluation Statistics Techniques
A bibliographical overview was performed with the aim of gathering different points of view and scientific evidence regarding Item Response Theory, statistical t-tests and ANOVA. During the process, three steps were executed, starting with the formulation of the research questions. Next, the search and selection of articles from the Scopus, Science Direct, Springer, Scielo and Latindex databases took place. Finally, reading the documents allowed us to extract the information relevant to the construction of the investigation.
In step one, three research questions were formulated as the basis for the information search in the above-mentioned databases (see Table 1).

Table 1. Information search questions

Code | Question
P1   | Which statistical tests can be applied for processing student evaluation information?
P2   | Which statistical tests are used to calibrate a test instrument?
P3   | Which statistical tests are used to identify significant score differences?

The literature review thus enabled us to delimit the studies cited in the previous section, select the statistical tests and understand what is required to design the instrument described below.
2.2 Evaluation Instrument
This instrument is based on the determination of learning. It includes three variables characterizing a student's learning environment, namely internet accessibility, quintile and residence location, followed by 73 multiple-choice questions that quantitatively evaluate the information acquired by first-year middle school students, focused on the curriculum prioritized during the COVID-19 health emergency in Ecuador (see Table 2). In the research process, the instrument was applied to the target population, equal to a sample of n = 583 students.


Table 2. COVID-19 prioritized curriculum—Natural Sciences performance indicators for first-year middle school students.

1. I.CN.4.2.1. "Determines cell complexity according to its structural functional characteristics, and identifies technological tools contributing to cytology knowledge"
2. I.CN.4.13.1. "Determines biochemical cycles in an ecosystem, from the observation of models and diverse information sources, construing the impact human activity would cause in such spaces. (J.3)"
3. I.CN.0. "Classify live beings according to provided taxonomic criteria (Domain and kingdom) establishing a relationship between the taxonomic group and organization levels in its diversity. (J.3., I.2.)"
4. I.CN.4.2.4. "Differentiates between sexual and asexual reproduction and determines the importance for the survival of diverse species. (J.3., S.1.)"
5. "Elaborates the representation of a food web in which food chains are identified composed by producers, consumers and decomposer organisms. (J.3., J.4.) (Ref. I.CN.4.3.1.)"
6. I.CN.4.5.1. "Analyses processes and evolutive changes in human beings as a result of natural selection and geological events, through descriptive evidence: records, fossils, continental drift and species massive extinction. (J.3.)"
7. I.CN.4.6.1. "Understands premature maternity/paternity risks according to a life plan from human reproduction stage analysis, prenatal care prominence and breastfeeding. (J.3., J.4., S.1.)"
8. "Identifies from the observation of diverse sources, Ecuador's ecosystems according to relevance, geographical location, climate and biodiversity. (J.3., J.1.) (Ref. I.CN.4.4.1.)"
9. I.CN.4.7.1. "Proposes prevention measures—forms of infection, bacterial propagation and antibiotic resistance; structure, evolution, immune system function, immunological barriers (primary, secondary and tertiary) and immunity systems: natural, artificial, active and passive. (J.3., I.1.)"
10. I.CN.4.4.2. "Argues, from research of several sources, the standing of protected areas as wildlife conservation, research and educational mechanisms, diminishing the impact of human activity on habitats and ecosystems. Proposes protective and conservation measures. (J.1., J.3., I.1.)"

In this research, the statistical analysis was used to validate the evaluation instrument and to corroborate the performance indicators of the study sample. The instrument comprised 70 single-selection questions divided among 10 performance indicators, plus 3 categorical variables: availability of an internet connection, quintile and student residence location. To determine whether such variables had an impact on the performance indicators, the data were first screened for atypical observations through the Mahalanobis distance [18]. After that, Item Response Theory was applied to adequately weight each question according to its difficulty level and to detect and eliminate problematic questions in the sample of students.


2.3 Significant Difference Analysis
Using the scores weighted through IRT, each student's score was taken for each learning performance indicator. Parametric assumption verification tests were executed on the numerical variables representing the students' scores; thus normality, linearity, homogeneity and homoscedasticity were verified. After that, t-tests, ANOVA and pairwise t-tests were executed using the three categorical variables. Consequently, hypotheses were evaluated considering the variables internet connectivity, quintile and student residence location together with the grades provided by the learning performance indicator results in Natural Sciences, aiming to establish the population strata significantly affected by virtual learning and to identify the characteristics that triggered low performance levels. The following hypotheses were therefore raised (an illustrative sketch of these tests is shown after the list):
• H1. There are implicit differences in the academic results of the performance indicators in the subject of Natural Sciences with respect to the availability of internet connectivity.
• H2. There are suggestive differences in the academic results of the performance indicators in the subject of Natural Sciences with respect to the quintile or social class a student belongs to.
• H3. There are differences in the Natural Sciences performance indicators with respect to the student's residence location.
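As a concrete, hedged illustration of the kind of tests listed above (the actual study was carried out in R, and the numbers below are synthetic, not the study's data), the sketch shows a two-sample t-test for H1 and a one-way ANOVA for H2 using scipy.

```python
# Illustrative sketch only: the kinds of difference tests described above,
# applied to synthetic (made-up) indicator scores. The real analysis used R.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_with_net = rng.normal(7.5, 1.0, 200)     # hypothetical weighted scores
scores_without_net = rng.normal(6.8, 1.0, 120)

# H1: two-sample t-test of scores grouped by internet availability.
t, p = stats.ttest_ind(scores_with_net, scores_without_net, equal_var=False)
print(f"H1 t-test: t = {t:.2f}, p = {p:.4f}")

# H2: one-way ANOVA of scores grouped by quintile (five synthetic groups).
quintiles = [rng.normal(7.0 + 0.1 * q, 1.0, 60) for q in range(5)]
f, p = stats.f_oneway(*quintiles)
print(f"H2 ANOVA: F = {f:.2f}, p = {p:.4f}")

# Pairwise t-tests between quintiles would follow, with a multiple-comparison
# correction (e.g. Bonferroni: multiply each pairwise p-value by the 10
# comparisons possible among five groups).
```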

3 Results
3.1 Instrument Validation
The instrument used for this study produced a database of 583 observations, processed with the R statistical programming language in the RStudio interface. First, it was verified that there were no missing data or atypical observations. For atypical observation detection, the Mahalanobis distance was computed for each observation and compared against the 0.999 quantile of the χ2 distribution, considering as atypical the 0.1% of observations sufficiently far from the distribution. Under this criterion a cut score of 112.3169 was established, and no atypical observations were found. In this way, the database comprised 583 evaluated students.
As detailed above, the evaluation instrument was applied to a total of 583 students from the Carchi province, chosen from a sample of 27 participating academic institutions, of whom 51.3% were male, 47.5% female and 1.2% LGBTI. Ethnically, 91.4% of the students were mestizo, 2.9% self-identified as white, 2.7% as indigenous, 2.7% as Afro-descendant and 0.3% as mulatto. Similarly, 48.4% of the students were 13 years old, 42.4% were 14, 8.7% were older than 14 and 0.5% were younger than 12. Moreover, 57.6% of the students stated that they had a computer at home to take online lessons, while 42.4% did not have a computer during the pandemic. Also, 71.9% had a stable internet connection during the pandemic, while 28.1% did not. Regarding poverty quintiles, 30.5% of the participating students are positioned in the first quintile, 24.7% in the second, 19.2% in the third, 15.4% in the fourth and 10.1% in the fifth.
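The outlier screening described above can be reproduced schematically as follows (a Python sketch on synthetic 0/1 answers, not the study's R code); note that with 70 item columns the 0.999 quantile of the χ2 distribution is approximately 112.32, which matches the reported cut score.

```python
# Sketch of the Mahalanobis screening: squared distances compared against the
# 0.999 chi-square quantile. The response matrix here is synthetic.
import numpy as np
from scipy.stats import chi2
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(583, 70)).astype(float)   # made-up 0/1 answers

mean = X.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))       # pseudo-inverse for stability
d2 = np.array([mahalanobis(row, mean, cov_inv) ** 2 for row in X])

cutoff = chi2.ppf(0.999, df=X.shape[1])                 # about 112.32 for df = 70
print("Cut score:", round(cutoff, 4))
print("Atypical observations:", int((d2 > cutoff).sum()))
```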


The next step was to calibrate the evaluation instrument using Item Response Theory (IRT) through the Rasch logistic model, to evaluate the probability that a student responds to the instrument in a certain way. This is a unidimensional latent-trait model that assumes all discrimination parameters are equal, which is common in educational tests. In this way, the model estimates the discrimination parameter ζ, while ξ is the observed difficulty parameter for a set of individuals. The conditional probability P_i(θ) of answering item i correctly, for an instrument with n items, is given by [19]

P_i(θ) = \frac{1}{1 + e^{−ζ_i(θ − ξ_i)}},  i = 1, 2, ..., n    (1)

Each item is then associated with its own χ² statistic and its corresponding probability, which indicate how well the item fits its estimated difficulty level. The characteristic curves obtained by applying the Rasch logistic model are shown in Fig. 1.

Fig. 1. Characteristic curves obtained through the application of a Rasch logistic model parameter.
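As an illustration of Eq. (1), the following sketch evaluates the item characteristic curve for a few difficulty values taken from Tables 3 and 4, using the common discrimination of 0.6631; the plotting choices are assumptions of this sketch.

```python
# Minimal sketch of the item characteristic curve in Eq. (1).
import numpy as np
import matplotlib.pyplot as plt

def icc(theta, difficulty, discrimination=0.6631):
    """P_i(theta) = 1 / (1 + exp(-zeta_i * (theta - xi_i)))."""
    return 1.0 / (1.0 + np.exp(-discrimination * (theta - difficulty)))

theta = np.linspace(-4, 4, 200)
for xi in (-2.5271, 0.0792759, 2.7208):   # item difficulties reported in Tables 3 and 4
    plt.plot(theta, icc(theta, xi), label=f"difficulty = {xi}")
plt.xlabel("latent ability (theta)")
plt.ylabel("P(correct)")
plt.legend()
plt.show()
```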

According to [20], the criteria considered for selecting an item in line with its parameters are: the item should have a difficulty level ξ located in the interval −2.5 ≤ ξ ≤ 2.5, and a discrimination level ζ in the interval 0.5 ≤ ζ ≤ 2. For this reason, 3 items were eliminated: question 21 (see Table 3) presented an excessive difficulty level, while questions 25 and 33 were too easy for the student sample.

Table 3. Eliminated items

Code | χ² | P(>χ²) | Difficulty ξ | Discrimination ζ
P21 | 35.7738 | 4.3482e−05 | 2.7208 | 0.6631
P25 | 50.6003 | 8.3048e−08 | −2.5271 | 0.6631
P33 | 32.7678 | 1.4656e−04 | −3.0532 | 0.6631
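A minimal sketch of this selection rule, applied here only to the parameters reproduced in Table 3 plus one retained item; the dictionary and variable names are illustrative, and building it for all items is left to the reader.

```python
# Sketch of the item-selection rule from [20]: keep items with difficulty xi in [-2.5, 2.5]
# and discrimination zeta in [0.5, 2].
calibration = {            # code: (difficulty xi, discrimination zeta)
    "P21": (2.7208, 0.6631),
    "P25": (-2.5271, 0.6631),
    "P33": (-3.0532, 0.6631),
    "p3":  (1.7972, 0.6631),
}

kept, dropped = [], []
for code, (xi, zeta) in calibration.items():
    ok = (-2.5 <= xi <= 2.5) and (0.5 <= zeta <= 2.0)
    (kept if ok else dropped).append(code)

print("eliminated items:", dropped)   # -> P21, P25, P33, as in Table 3
print("retained items:", kept)
```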


According to [21], when categorizing items by difficulty level the following shares should be considered: the 5% of items with the lowest difficulty-parameter values correspond to the "easy" level, the next 20% to the "moderately easy" category, 50% to average difficulty, 20% to the "moderately difficult" category, and the 5% with the highest values correspond to "difficult". The resulting item categorization by difficulty level is shown in Table 4.

Table 4. Items classified by difficulty level.

Code | χ² | P(>χ²) | Difficulty ξ | Discrimination ζ

Difficulty level – Easy
p3 | 10.9284317 | 0.28064652 | 1.7972185 | 0.66309353
p42 | 27.4016199 | 0.00119998 | 1.79717605 | 0.66309353
p20 | 70.5018645 | 1.21E−11 | 1.72426169 | 0.66309353

Difficulty level – Moderately easy
p64 | 150.192512 | 8.05E−28 | 1.70981133 | 0.66309353
p14 | 18.7140978 | 0.02773663 | 1.56803081 | 0.66309353
p43 | 8.01085293 | 0.53305287 | 1.55415161 | 0.66309353
p38 | 27.375437 | 0.00121206 | 1.32527777 | 0.66309353
p53 | 23.467635 | 0.00522726 | 1.28610504 | 0.66309353
p65 | 22.8960585 | 0.00643402 | 1.27310416 | 0.66309353
p11 | 15.3839817 | 0.08091437 | 1.09414085 | 0.66309353
p12 | 41.2374502 | 4.53E−06 | 1.06905546 | 0.66309353
p61 | 8.87659229 | 0.44874303 | 1.03166148 | 0.66309353
p45 | 28.0080801 | 0.00095091 | 0.94530265 | 0.66309353
p16 | 12.4903002 | 0.18705778 | 0.87234123 | 0.66309353

Difficulty level – Average
p36 | 28.799798 | 0.00070039 | 0.82399464 | 0.66309353
p47 | 130.977891 | 7.46E−24 | 0.77612628 | 0.66309353
p63 | 10.2313785 | 0.33208073 | 0.71625194 | 0.66309353
p50 | 18.2824614 | 0.03203436 | 0.53964708 | 0.66309353
p32 | 23.7751892 | 0.00467125 | 0.53957752 | 0.66309353
p60 | 13.3582926 | 0.14705271 | 0.49316383 | 0.66309353
p46 | 12.8733325 | 0.1684221 | 0.42359838 | 0.66309353
p10 | 30.4704158 | 0.00036475 | 0.37742712 | 0.66309353
p19 | 7.9789574 | 0.5362682 | 0.35428575 | 0.66309353
p5 | 17.2718031 | 0.04462585 | 0.35426406 | 0.66309353
p51 | 20.5800549 | 0.01465136 | 0.30819476 | 0.66309353
p66 | 28.4697786 | 0.00079582 | 0.273889 | 0.66309353
p15 | 9.71420507 | 0.37411354 | 0.26237835 | 0.66309353
p69 | 2.78050796 | 0.9723651 | 0.1935691 | 0.66309353
p52 | 8.78389718 | 0.45745695 | 0.14780852 | 0.66309353
p1 | 23.2685789 | 0.00562049 | 0.0792759 | 0.66309353
p18 | 20.8387436 | 0.01338599 | 0.06791593 | 0.66309353
p23 | 1.44220113 | 0.99755327 | −0.04631393 | 0.66309353
p13 | 5.60259608 | 0.7789384 | −0.0577801 | 0.66309353
p17 | 38.4499189 | 1.45E−05 | −0.10338115 | 0.66309353
p6 | 18.0934807 | 0.03410436 | −0.21795878 | 0.66309353
p7 | 13.5413722 | 0.13960026 | −0.34439314 | 0.66309353
p40 | 17.0699473 | 0.04763181 | −0.37905001 | 0.66309353
p57 | 27.4711287 | 0.00116849 | −0.41378787 | 0.66309353
p2 | 24.9567176 | 0.00301943 | −0.44862969 | 0.66309353
p29 | 14.3714073 | 0.10971336 | −0.51862849 | 0.66309353
p59 | 16.8799751 | 0.05062907 | −0.5303341 | 0.66309353
p58 | 7.28602502 | 0.60736695 | −0.53033438 | 0.66309353
p35 | 6.99924942 | 0.63719756 | −0.6244284 | 0.66309353
p62 | 39.8629352 | 8.05E−06 | −0.67176406 | 0.66309353

Difficulty level – Moderately difficult
p26 | 51.5597314 | 5.47E−08 | −0.96223669 | 0.66309353
p27 | 35.2415419 | 5.40E−05 | −1.11224368 | 0.66309353
p68 | 34.5713557 | 7.09E−05 | −1.13766267 | 0.66309353
p4 | 12.1385224 | 0.20561227 | −1.13771795 | 0.66309353
p31 | 37.0309721 | 2.60E−05 | −1.29277878 | 0.66309353
p39 | 77.7602402 | 4.50E−13 | −1.31935802 | 0.66309353
p55 | 39.8900773 | 7.95E−06 | −1.33245427 | 0.66309353
p34 | 10.352203 | 0.32273187 | −1.42600185 | 0.66309353
p48 | 67.5540655 | 4.58E−11 | −1.50808837 | 0.66309353
p28 | 20.7845232 | 0.0136423 | −1.52195814 | 0.66309353

Difficulty level – Difficult
p37 | 25.1173378 | 0.00284409 | −2.13831378 | 0.66309353
p49 | 59.2432006 | 1.88E−09 | −2.17153991 | 0.66309353
p24 | 19.2697861 | 0.02299422 | −2.18837311 | 0.66309353
p41 | 57.4373746 | 4.17E−09 | −2.20539173 | 0.66309353

Next, the difficulty levels established for the instrument were used to assign a different score to each item: items in the easy, moderately easy, average, moderately difficult and difficult categories scored 1, 2, 3, 4 and 5 points, respectively. Having obtained the 70-question instrument calibrated according to difficulty levels, student performance was weighted considering each achievement reached for the different learning performance indicators. It should be noted that this difficulty-weighted score constitutes a quantitative measure closer to the students' real performance, since the metric values the effort required to answer higher-difficulty items, and counter-productive questions had already been removed from the sample. The score obtained by each student was then converted to a 100-point scale in order to contrast student economic status against the performance indicators. Descriptive statistics of the average scores obtained for the calibrated instrument are shown in Table 5.

Table 5. Descriptive statistics of each student's weighted average for the calibrated instrument

Minimum | First quartile | Median | Mean | Third quartile | Maximum | Standard deviation
15.96 | 37.34 | 48.84 | 48.02 | 58.04 | 79.99 | 13.71
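As an illustration of this weighting step, the sketch below assigns 1–5 points per difficulty category and rescales each student's total to a 100-point scale; the category map and the simulated response matrix are assumptions of the sketch, not the authors' data.

```python
# Illustrative sketch of the difficulty-weighted scoring described above.
import numpy as np

POINTS = {"easy": 1, "moderately easy": 2, "average": 3,
          "moderately difficult": 4, "difficult": 5}

def weighted_scores(answers, item_categories):
    """answers: (n_students, n_items) 0/1 matrix; item_categories: difficulty category per item."""
    weights = np.array([POINTS[c] for c in item_categories], dtype=float)
    raw = answers @ weights                      # points actually earned
    return 100.0 * raw / weights.sum()           # rescaled to a 100-point scale

rng = np.random.default_rng(1)
demo_answers = rng.integers(0, 2, size=(583, 4))
demo_categories = ["easy", "average", "average", "difficult"]
print(weighted_scores(demo_answers, demo_categories)[:5])
```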

3.2 Data Gathering Analysis

In order to determine the type of difference tests to apply, parametric-assumption verification tests were run on the numerical variables obtained through the instrument, which represent the students' scores. First, the normality, linearity, homogeneity and homoscedasticity assumptions were verified. Because the database comprises a 70-question multivariate sample, a false-regression analysis was used as the parametric-assumption verification strategy: a random quantile set based on the χ² distribution was generated for each of the 583 multivariate observations in order to compare its behaviour with the sample quantiles.


To verify the normality assumption, a histogram of the standardized residuals was obtained from the linear regression of the random χ² quantile set against the sample data. The linearity assumption was checked with a Q-Q plot of the standardized residuals against the theoretical χ² quantiles. Finally, homogeneity and homoscedasticity were analyzed through a scale-location scatterplot of the standardized residuals from the linear regression model against the sample quantiles. The assumption-verification results are shown in Fig. 2.

Fig. 2. Parametric assumption results obtained through the false-regression analysis
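The exact false-regression procedure is not fully specified in the text, so the sketch below shows one possible reading of it: random χ² draws are paired with the sorted sample distances, a line is fitted, and the standardized residuals would feed the histogram, Q-Q plot and scale-location plot of Fig. 2. The degrees of freedom, seed and variable names are assumptions of this sketch.

```python
# One possible reading of the "false regression" assumption check described above.
import numpy as np
from scipy import stats

def false_regression_residuals(d2, df=70, seed=0):
    """d2: squared Mahalanobis distances of the 583 observations."""
    rng = np.random.default_rng(seed)
    sample_q = np.sort(d2)                                  # sample quantiles
    theo_q = np.sort(rng.chisquare(df, size=len(d2)))       # random chi-square quantile set
    slope, intercept = np.polyfit(theo_q, sample_q, deg=1)  # simple linear regression
    resid = sample_q - (intercept + slope * theo_q)
    std_resid = (resid - resid.mean()) / resid.std(ddof=1)
    return theo_q, std_resid

# std_resid would then be inspected with a histogram (normality), a Q-Q plot (linearity)
# and a scale-location scatterplot (homogeneity / homoscedasticity), as in Fig. 2.
demo_d2 = np.random.default_rng(2).chisquare(70, size=583)
theo_q, std_resid = false_regression_residuals(demo_d2)
print(stats.shapiro(std_resid).pvalue)   # quick numeric normality check on the residuals
```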

Figure 2 shows the results obtained from the four-assumption false-regression analysis: a) the standardized residuals have a close-to-normal distribution, so the normality assumption was accepted; b) the standardized residuals in the Q-Q plot lie close to a linear tendency in the −2 to 2 range, so the linearity assumption was accepted; c) when plotting the standardized residuals against the adjusted values, the observations are distributed similarly among the four quadrants of the scatterplot, so the homogeneity assumption is accepted; and d) the represented values show no patterns and are distributed in a random, scattered way, supporting the homoscedasticity assumption. Next, difference tests were run on the scores obtained by the 583 students for each indicator. As shown above, the data satisfy the parametric assumptions, so T-tests and ANOVA were used to determine whether there is a significant difference for each indicator, using Internet connectivity, quintile and student residence location as categorical variables. The pairwise T-test was used as the matching (post-hoc) test to determine which groups present such differences. These tests were carried out over all 10 indicators; here we present the results obtained for the 6th and 8th indicators as examples of the applied methodology. Indicator 6, I.CN.4.5.1: "Analyses processes and evolutive changes in human beings as a result of natural selection and geological events, through descriptive evidence: records, fossils, continental drift and species massive extinction. (J.3.)". Results are presented in Fig. 3 and Table 6. As can be seen in Table 6, when comparing the values obtained for the I.CN.4.5.1 performance indicator, students who had Internet connectivity during the pandemic performed better (p-value = 0.02667). When comparing performance by quintile, the significance level was reached in the ANOVA test (p-value = 0.0497). A pairwise T-test was then run as a post-hoc test, which found that the performance of quintile 5 students was significantly higher than that of quintile 1 students (p-value = 0.042). Moreover, there was no significant difference in performance between urban and rural students.
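A minimal sketch of these difference tests for a single indicator is given below, assuming a pandas DataFrame with illustrative column names ("score", "internet", "quintile"); note that, unlike a typical post-hoc procedure, no p-value adjustment is applied in this sketch.

```python
# Sketch of the T-test, one-way ANOVA and pairwise T-tests for one performance indicator.
from itertools import combinations
from scipy import stats

def indicator_tests(df):
    # H1: Internet connectivity (two groups) -> independent-samples T-test
    with_net = df.loc[df["internet"] == "yes", "score"]
    without_net = df.loc[df["internet"] == "no", "score"]
    t_res = stats.ttest_ind(with_net, without_net)

    # H2: quintile (five groups) -> one-way ANOVA
    groups = [g["score"].to_numpy() for _, g in df.groupby("quintile")]
    anova_res = stats.f_oneway(*groups)

    # Post-hoc: pairwise T-tests between quintiles (no p-value adjustment in this sketch)
    pairwise = {
        (a, b): stats.ttest_ind(df.loc[df["quintile"] == a, "score"],
                                df.loc[df["quintile"] == b, "score"]).pvalue
        for a, b in combinations(sorted(df["quintile"].unique()), 2)
    }
    return t_res.pvalue, anova_res.pvalue, pairwise
```

The function returns the two global p-values plus a dictionary of pairwise p-values, mirroring the quantities reported in Tables 6 and 7.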


Fig. 3. Box diagram and result intensity for ICN.4.5.1 indicator

Table 6. Statistic test results performed for ICN.4.5.1 indicator

Indicator 8: "Identifies, from the observation of diverse sources, Ecuador's ecosystems according to relevance, geographical location, climate and biodiversity. (J.3., J.1.) (Ref. I.CN.4.4.1.)". Results are presented in Fig. 4 and Table 7. Table 7 compares student values for the I.CN.4.4.1 indicator. Students who had Internet access during the pandemic showed better performance (p-value = 0.02934). When contrasting student performance by quintile, the significance level was reached in the ANOVA test (p-value = 0.0497), so a pairwise T-test was run as a post-hoc test, in which the performance of quintile 5 students was substantially higher than that of quintile 1 students (p-value = 0.043). In addition, the results for student residence location reached the significance level, since urban students performed better than rural students (p-value = 0.04153).


Fig. 4. Box diagram and result intensity for ICN.4.4.1 indicator

Table 7. Statistic test results performed for ICN.4.4.1 indicator.

4 Discussion

The process described for the 6th and 8th indicators was replicated over the 10 indicators previously described. Indicators 2, 3, 4, 5, 6 and 10 did not reach the significance level in the Kruskal-Wallis test, so no significant differences were found for those indicators. Similarly, indicators 1 and 9 reached the significance level in the Kruskal-Wallis test, but in the post-hoc pairwise test none of the comparisons reached the significance level. Among the findings, it was evident that students who had Internet access during the pandemic had a better learning performance. The results expose the learning conditions during the COVID-19 emergency, as well as factors that benefited urban sectors, such as access to connectivity, the use of IT devices, family structure and the learning environment. The pandemic and the shift to online learning clearly reshaped the interaction among members of the academic community [22]. However, in the rural sector, and particularly in vulnerable areas, a new learning crisis took place that hindered the continuity of the learning process.


Likewise, a family's financial condition affects a student's motivation to continue learning: a notable difference between urban and rural areas was found, showing that in urban areas, where students have better living conditions, academic performance is superior, as evidenced by the analyzed performance indicators.

5 Conclusions

The designed instrument had to consider aspects characterizing the student's environment, namely Internet accessibility, quintile and residence location. The digital gap increases the risk to learning continuity in rural areas, and quintile and residence location clearly limit the development of online and even blended learning for students. Seventy-three multiple-choice questions quantitatively evaluated the learning acquired by first-year middle-school students, focusing on the performance indicators of the curriculum prioritized during the COVID-19 pandemic in Ecuador. The data gathering analysis detected atypical observations using the Mahalanobis distance. Item Response Theory was then used to weigh each question adequately according to its own difficulty level, and T-tests and ANOVA were run to determine significant differences per indicator; the pairwise T-test was used as the matching test to define which groups present such differences. Applying T-tests and ANOVA to the treatments, significant differences are noted for the performance indicators regarding Internet access and residence location, whereas for quintiles there are few significant values. Also, when determining which groups present such differences by means of the pairwise T-test, not all pairs of groups show significant differences. The values obtained for the different performance indicators show that students with Internet access during the pandemic performed better. Regarding performance by quintile, there is a significant difference only in some treatments. To the best of our knowledge, there are no similar studies providing a statistical analysis based on this kind of testing. As a contribution of this study, the raised hypotheses were evaluated: H1 is accepted, since there is a significant difference in the academic results of the performance indicators for Natural Sciences in terms of Internet access, and H3 is also accepted, since a significant difference was found regarding the student's residence location. Regarding H2, most of the indicators did not present a significant difference in terms of quintile or economic condition, so this hypothesis is rejected.

References 1. Ministerio de Educación, Instructivo para la Evaluación Estudiantil. Plan Educativo Aprendemos juntos en casa 2020–2021. Ministerio de Educación, Quito (2020) 2. Tang, Y.M., et al.: Comparative analysis of Student's live online learning readiness during the coronavirus (COVID-19) pandemic in the higher education sector. Comput. Educ. 168, 104211 (2021) 3. Párraga-Párraga, K.L., Escobar-Delgado, G.R.: Estrés laboral en docentes de educación básica por el cambio de modalidad de estudio presencial a virtual. Rev. Científica Multidisc. Arbitrada YACHASUN 4(7), 142–155 (2020). https://doi.org/10.46296/yc.v4i7edesp.0067


4. Pantoja Burbano, M.J., et al.: Educación y pandemia: desafío para los docentes de educación básica superior y bachillerato de la ciudad de Ibarra, Ecuador. Conrado 17(81), 307–313 (2021). http://scielo.sld.cu/scielo.php?script=sci_arttext&pid=S1990-864420210 00400307&lng=es&nrm=iso&tlng=. Accessed 21 Aug 2022 5. del R. Aguilar Gordón, F.: Del aprendizaje en escenarios presenciales al aprendizaje virtual en tiempos de pandemia. Estud. Pedagóg. (Valdivia) 46(3), 213–223 (2020). https://doi.org/ 10.4067/S0718-07052020000300213 6. Yates, S.J., Carmi, E., Lockley, E., Pawluczuk, A., French, T., Vincent, S.: Who are the limited users of digital systems and media? An examination of UK evidence. First Monday 25(7), 10847 (2020). https://doi.org/10.5210/fm.v25i7 7. CEPAL – UNESCO: La educación en tiempos de la pandemia de Covid 19 (2020). https:// repositorio.cepal.org/bitstream/handle/11362/45905/1/S2000509_en.pdf 8. Trucco, D., Palma, A.: Infancia y adolescencia en la era digital. Un informe comparativo de los estudios de Kids Online delBrasil, Chile, Costa Rica y el Uruguay, Santiago (2020) 9. Chou, H.L., Chou, C.: A quantitative analysis of factors related to Taiwan teenagers’ smartphone addiction tendency using a random sample of parent-child dyads. Comput. Hum. Behav. 99, 335–344 (2019) 10. García, E., Weiss, E.: COVID-19 and student performance, equity, and US education policy: lessons from pre-pandemic research to inform relief, recovery, and rebuilding. Washington D.C. (2020) 11. Gikandi, J.W., Morrow, D., Davis, N.E.: Online formative assessment in higher education: a review of the literature. Comput. Educ. 57(4), 233–2351 (2011) 12. Instituto Nacional de Evaluación Educativa: La Educación en Ecuador: logros alcanzados y nuevos desafíos (2017) 13. Gibson, S., Dembo, M.: Teacher efficacy: a construct validation. J. Educ. Psychol. 76(4), 569–582 (1984) 14. Bandura, A.: Self-efficacy: toward a unifying theory of behavioral change. Psychol. Rev. 84(2), 191–215 (1977) 15. Delgado-Abella, L.E., Mañas, M.Á.: Propiedades psicométricas del Instrumento para evaluar capital psicológico en las Organizaciones Ipsicap-24. Universitas Psychol. 18(5), 1–15 (2019) 16. Irik, Í., Çolak, E., Kaya, D.: Constructivist learning environments: the teachers and students’ perspectives. Int. J. New Trends Educ. Their Implicat. 6(3), 30–44 (2015) 17. Hernández-Sampieri, R., Fernández Collado, C., Baptista Lucio, P.: Metodología de la Investigación. McGraw-Hill, México (2018) 18. Jácome Ortega, A.E., Caraguay Procel, J.A., Herrera-Granda, E.P., Herrera Granda, I.D.: Confirmatory factorial analysis applied on teacher evaluation processes in higher education institutions of Ecuador. In: Basantes-Andrade, A., Naranjo-Toro, M., Zambrano Vizuete, M., Botto-Tobar, M. (eds.) TSIE 2019. AISC, vol. 1110, pp. 157–170. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-37221-7_14 19. Attorresi, H.F., Lozzia, G.S., Abal, F.J.P., Galibert, M.S., Aguerri, M.E.: Teoría de respuesta al ítem. Conceptos básicos y aplicaciones para la medición de constructos psicológicos. Rev. Argentina Clínica Psicol. 18(2), 179–188 (2009) 20. Chávez Álvarez, C., Saade Hazin, A.: Procedimientos básicos para el análisis de reactivos. CENEVAL, México (2010) 21. Romero, G.M.O., Rojas, P.A.D., Domínguez, O.R.L., Pérez, S.M.P., Sapsin, K.G.: Dificultad y discriminación de los ítems del examen de Metodología de la Investigación y Estadística. EDUMECENTRO 7(2), 19–35 (2015) 22. 
Aguirre Baique, N., Cieza-Delgado, A.H., Reátegui-Del Águila, K.: Rendimiento académico en pandemia: estudiantes con lengua originaria en la Universidad Intercultural de la Amazonía. Maestro Sociedad 8(4), 1379–1385 (2021)

Virtual Physics Learning for Basic Education

Carmen Cecilia Ausay1, Santiago Alejandro Acurio Maldonado2(B), Daniel Marcelo Acurio Maldonado2, Pablo Israel Amancha Proaño2, and Francisco Javier Echeverría Tamayo2

1 Departamento de Posgrados, Pontificia Universidad Católica del Ecuador Sede Ambato, Ambato, Ecuador
2 Escuela de Ingenierías, Pontificia Universidad Católica del Ecuador Sede Ambato, Ambato, Ecuador
[email protected]

Abstract. This project presents an interactive virtual laboratory for teaching Physics in the first year of BGU (Bachillerato General Unificado), adapted to the new Ecuadorian National Curriculum. The virtual laboratory was developed using the JAVA language and was implemented on an open software system. Additionally, a pedagogical work proposal was used to integrate the virtual laboratory as a complementary activity for teaching Physics. Teachers and students are able to work online or offline, so the laboratory can be used anywhere and at any time. The main result after implementing the interactive virtual laboratory was an increase in student performance: the students who mastered the learning rose from 2% to 18%, and the students who achieved the learning increased from 68% to 80%. A pedagogical and didactic reinforcement for first-year Physics students under the new BGU curriculum was also achieved, and a technical procedure to implement complete Physics courses at all BGU levels was established. The interactive virtual laboratory for teaching Physics has been a pedagogical and didactic tool for BGU students, adapted to the new curriculum issued by MINEDUC. Keywords: Virtual laboratory · Physics · Simulator · Teaching · Learning

1 Introduction

The educational paradigm is undergoing a profound process of change, mainly due to the emergence of new information technologies. In this context, the new Ecuadorian high-school curriculum has benefited from technological progress, with tools such as virtual laboratories becoming supplementary activities in subjects like Physics, where object manipulation or visualization is needed for understanding and where, because of the nature of the phenomena, there are often no real or complete experimentation spaces, particularly in government schools. The goal of the present work is the implementation of an interactive virtual laboratory for teaching Physics in the first year of the new BGU (Bachillerato General Unificado) curriculum, applying new software tools that allow interaction with, and visualization of, the physical phenomena being studied. The laboratory simulates phenomena for the main contents of the subject, such as kinematics (distance and displacement, speed and velocity, acceleration), projectile motion and two-dimensional motion: topics whose concepts are difficult to teach in the classroom because they require manipulation or visualization and, due to their abstract, scientific or spatial nature, are not within reach to reproduce as experiments. The investigation method used was empirical research, and the specific method was the measurement of student performance with statistical procedures. The interactive virtual laboratory was developed by applying the waterfall ("Cascada") methodology, using HTML (HyperText Markup Language), CSS (Cascading Style Sheets) and JavaScript, and it was implemented under a methodological proposal for integrating the virtual laboratory as a complementary activity for the Physics subject in an open-source learning management system, Moodle.

2 State of the Art

The virtual laboratory is considered a new educational paradigm that has changed the way science is learned, becoming a relevant instrument for achieving meaningful learning. The use of new technologies as didactic tools provides a way to carry out activities in a virtual laboratory environment, becoming a new way of developing practical work in experimental science without losing sight of its basic objectives. Virtual laboratories were first developed for space-program simulations and military tactics around the 1980s in the USA and the United Kingdom, but they have since become the best option for institutions with small budgets or severe limitations. Experiences with virtual laboratories differ in context because their priority is protecting students from chemical products and mechanical devices; implicitly, the equipment is also protected from the damage it can suffer during the teaching-learning process [1–3]. Currently, the concept of the virtual laboratory has taken another approach, being considered a kind of collaboration that enables the achievement of creative goals and supports sound decision-making; it is applied to every intellectual sphere of human activity [4–7]. In this way, virtual laboratories not only offer a creative, didactic and inexpensive option for providing spaces to carry out experiences or practices in a specific subject or area, but also let each student have a learning environment in which the same phenomenon can be simulated several times without suffering or provoking accidents, thus increasing the development of their skills [8]. Following this approach, there are many applications on the technological market, such as Matlab, LabView and Simulink, among others, that simulate real scenarios for teachers and students in a virtual space; however, these proposals do not focus on specific issues or on the needs of a specific user group. It is important to highlight successful experiences with virtual laboratories at different levels of the education system: a virtual laboratory service offered through a platform on which students can recreate several experiments related to their subject; the use of virtual laboratories in engineering education, which allowed students to experiment with several variables related to physical phenomena and compare them with real elements; and the implementation of several remote academic virtual laboratories using diverse Java applets, whose architecture allowed the administration of different learning processes, modules and laboratory strategies [4, 6, 7].

3 Methodology

The present work is based on empirical research, since it studies performance data of first-year BGU students at the "María Auxiliadora High School" in Riobamba, Ecuador. The data come from the current Institutional Educational Project, gathered during the diagnosis stage. The objective of this method is to obtain numerical information that allows the students' performance in the Physics subject to be compared with statistical procedures before and after the implementation of the proposed software tool.

3.1 Methodology for the Software Development

For the development of the interactive virtual laboratory for Physics teaching, the waterfall (Cascade) methodology is used (see Fig. 1), which ensures a functional, usable and reliable product [9].


Fig. 1. Cascade methodology

3.2 Methodological Proposal for the Integration of the Virtual Laboratory as a Complementary Activity

As a result of the analysis and of practical experience in applying ICTs to the teaching-learning process, a methodological proposal for integrating the interactive virtual laboratory as a complementary activity for teaching Physics in the first year of BGU is presented (see Fig. 2) [10, 11].

Fig. 2. Methodological proposal for the integration of the virtual laboratory as a complementary activity to teach Physics.


Implementing the Physics virtual laboratory in a virtual classroom makes it possible to integrate the resources, activities and simulators for the proposed themes, namely kinematics, two-dimensional movement and projectile movement. Each laboratory was implemented on the basis of the methodological proposal described in Fig. 2, which consists of the 5 stages detailed below. Stage 1. For the problem approach, support material and student guides were designed for each theme and implemented using several of the resources that Moodle provides (see Fig. 3).

Fig. 3. Implemented activities for the virtual Physics laboratory

Stage 2. The simulation activity allows users to perform the same experiment several times, assigning different values to the variables of the studied phenomena. The results achieved can be analyzed from the graphics generated by the virtual laboratory (see Fig. 4).

Fig. 4. Kinematics simulator


Stage 3. The report is written at the end of the simulation activity. It includes the analysis of the data obtained in the virtual laboratory, as well as the strategies used, the logic and scientific foundation, the solution alternatives and the conclusions. Finally, it has a section where students upload their assignments after completing the proposed exercises (see Fig. 5).

Fig. 5. Tasks in the virtual labs

Stage 4. The evaluation activity covers the experiences gained in the virtual laboratory and allows teachers to check the effectiveness of the tool with respect to the students' academic performance. For this purpose, it includes a questionnaire activity with which teachers can evaluate the students' academic development (see Fig. 6).

Fig. 6. Evaluation activity

Stage 5. The feedback section allows teachers to reach a consensus on the experiences gained before and after the application of the virtual laboratory, clarifying and consolidating the concepts of the studied phenomena as well as the adaptations to the needs of the new curriculum for the first BGU level (see Fig. 7).

Fig. 7. Feedback

4 Preliminary Evaluation

For this section, a diagnostic test was applied to 40 first-year BGU students at "María Auxiliadora High School" in Riobamba. The tests were designed around two scenarios: the first was implemented on a personal computer and the second on the Moodle platform.


Fig. 8. Kinematics simulation

Fig. 9. Projectile simulation

Fig. 10. Two-dimensional movement simulation


First scenario: test operation on a personal computer (see Figs. 8, 9 and 10). Second scenario: the high school's Moodle platform (see Figs. 11 and 12).

Fig. 11. Physics course interface

Fig. 12. Moodle physics laboratory

One of the most important aspects of this project is that it was built for two operating systems: Ubuntu 14.04 and Windows 7 Home Premium 32-bit. Additionally, the tests yielded some recommendations, shown in Table 1.

Table 1. Recommendations

- Further explanation of the proposed exercises
- A stop function at a specific time for the acceleration graphics
- A lab report format for the virtual practices

5 Arguments and Results

In this work, the performance of a complete group (40 students) of first-year BGU students at María Auxiliadora School in Riobamba is evaluated over a school year. For the preliminary evaluation, an online survey was applied to assess satisfaction levels with the use of the virtual Physics learning tools. Some of the questions and their results are shown next.

Question 1: Is the use of the virtual laboratory easy and consistent?

(Response options for all questions: Strongly disagree / In disagreement / Neither agree nor disagree / In agreement / Totally agree.)

Question 2: Does the virtual laboratory show enough information to understand the theme?


Question 3: Does the virtual laboratory have clear instructions about its use and functionality?



Question 4: Does the virtual laboratory allow feedback after the practice?


According to the current Student Evaluation Guidelines issued by Ecuador's "Subsecretaría de Apoyo, Seguimiento y Regulación de la Educación" [12], student performance at the different levels and sublevels of the National Education System (SNE) is valued on the qualitative and quantitative scale shown in Table 2.

Table 2. Student performance scale of the SNE

Qualitative scale | Quantitative scale
1. Exceeds the required learning | 10
2. Dominates the required learning | 9
3. Achieves the required learning | 7–8
4. Is close to achieving the required learning | 5–6
5. Does not achieve the required learning | ≤4
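To make the mapping explicit, the sketch below converts a quantitative grade into the SNE qualitative category of Table 2; the function name and the handling of intermediate scores are assumptions of this sketch.

```python
# Small sketch mapping a quantitative grade to the SNE qualitative scale of Table 2.
def sne_category(score: float) -> str:
    if score >= 10:
        return "Exceeds the required learning"
    if score >= 9:
        return "Dominates the required learning"
    if score >= 7:
        return "Achieves the required learning"
    if score >= 5:
        return "Is close to achieving the required learning"
    return "Does not achieve the required learning"

print(sne_category(7.1), "|", sne_category(8.0))   # the two partial averages reported below
```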


For a preliminary evaluation of the virtual Physics laboratories, a quantitative quasi-experimental investigation was carried out. The comparative analysis of the academic results before and after applying the virtual laboratory is shown in Fig. 13. The figure represents the measurements taken across two partial evaluations during a complete school year, covering topics related to linear kinematic movement in one and two dimensions.

[Bar chart: first partial (before), average = 7.1; second partial (after), average = 8.0, with the share of students in each SNE performance category before and after using the virtual laboratory.]

Fig. 13. Comparative chart of the students' performance in the first year of BGU, school year 2015–2016.

These results support the implementation of the interactive virtual laboratory. They also establish it as a pedagogical and didactic reinforcement of the Physics classes for first-year BGU students, adapted to the new BGU curriculum.

6 Conclusions

This work analyzed the theoretical basis for the development of virtual laboratories, as well as Moodle and JavaScript, considering that the use of ICTs is one of the main factors enabling this work. The interactive virtual laboratory for teaching Physics was created as pedagogical and didactic support for first-year BGU students, adapted to the new curriculum issued by MINEDUC. The pedagogical proposal to integrate a virtual laboratory as a complementary activity for the Physics classes has been an important factor in acquiring and developing meaningful knowledge, as well as in the academic performance of first-year BGU students at "María Auxiliadora High School" in Riobamba. It is worth noting that the correct use of ICTs in education allows the rational internalization of abstract concepts and complements


the experimental activities in the classroom or in the laboratory. The appropriate use of the interactive virtual laboratory by the students in the first year of BGU allows students to reinforce studied and learned Physics contents.

References 1. Lynch, T., Lynch, T., Ghergulescu, I.: Innovative pedagogies and personalisation in STEM education with NEWTON atomic structure virtual lab. In: EdMedia + Innovate Learning 2018, pp. 1483–1491 (2018) 2. Darling-Hammond, L., Flook, L., Cook-Harvey, C., et al.: Implications for educational practice of the science of learning and development. Appl. Dev. Sci. 24, 97–140 (2019). https:// doi.org/10.1080/10888691.2018.1537791 3. Williamson, B., Gulson, K., Perrotta, C., Witzenberger, K.: Amazon and the new global connective architectures of education governance. Harv. Educ. Rev. 92, 231–256 (2022) 4. Kapilan, N., Vidhya, P., Gao, X.Z.: Virtual laboratory: a boon to the mechanical engineering education during Covid-19 pandemic. High. Educ. Future 8, 31–46 (2020). https://doi.org/ 10.1177/2347631120970757 5. Radhamani, R., Kumar, D., Nizar, N., et al.: What virtual laboratory usage tells us about laboratory skill education pre- and post-COVID-19: focus on usage, behavior, intention and adoption. Educ. Inf. Technol. 26, 7477–7495 (2021). https://doi.org/10.1007/S10639-02110583-3/TABLES/2 6. Solikhin, F., Sugiyarto, K.H., Ikhsan, J.: The impact of virtual laboratory integrated into hybrid learning use on students’ achievement. J. Ilmiah Peuradeun 7, 81–94 (2019). https:// doi.org/10.26811/PEURADEUN.V7I1.268 7. Sypsas, A., Kalles, D.: Virtual laboratories in biology, biotechnology and chemistry education: a literature review. In: ACM International Conference Proceeding Series, pp. 70–75 (2018). https://doi.org/10.1145/3291533.3291560 8. Achuthan, K., Nedungadi, P., Kolil, V.K., et al.: Innovation adoption and diffusion of virtual laboratories. Int. J. Online Biomed. Eng. 16, 4–25 (2020). https://doi.org/10.3991/IJOE.V16 I09.11685 9. Saeed, S., Jhanjhi, N.Z., Naqvi, M., Humayun, M.: Analysis of software development methodologies. Int. J. Comput. Digit. Syst. 8, 445–460 (2019). https://doi.org/10.12785/IJCDS/ 080502 10. Foutsitzi, S., Caridakis, G.: ICT in education: benefits, challenges and new directions. In: 10th International Conference on Information, Intelligence, Systems and Applications, IISA 2019 (2019). https://doi.org/10.1109/IISA.2019.8900666 11. Fernández-Gutiérrez, M., Gimenez, G., Calero, J.: Is the use of ICT in education leading to higher student outcomes? Analysis from the Spanish autonomous communities. Comput. Educ. 157, 103969 (2020). https://doi.org/10.1016/J.COMPEDU.2020.103969 12. Ministerio de Educación del Ecuador: Lineamientos para el inicio de clases en el régimen Sierra-Amazonía, año lectivo 2020–2021, en las instituciones educativas fiscales, municipales, fiscomisionales y particulares (2020). https://www.fedepal.ec/wp-content/uploads/ 2020/08/Inicio2020.pdf. Accessed 21 Jan 2022

Bibliometric Mapping of Scientific Literature Located in Scopus on Teaching Digital Competence in Higher Education

Andrés Cisneros-Barahona1(B), Luis Marqués Molías2, Gonzalo Samaniego Erazo1, María Isabel Uvidia-Fassler1, and Gabriela de la Cruz-Fernández1

1 Universidad Nacional de Chimborazo, Riobamba 060150, Ecuador
[email protected]
2 Universitat Rovira I Virgili, 43007 Tarragona, Spain

Abstract. Using the PRISMA methodology, a systematic literature review (SLR) supported by meta-analysis (MA) was carried out on teaching digital competence (TDC) in universities, in the Scopus scientific database. The research was delimited through the ERIC thesauri, and research questions related to the meta-analysis of the data of the scientific productions located were established, as follows: 1. In which years is there a greater number of publications related to the topic? 2. To which journals do the works belong? 3. Who are the most outstanding authors? 4. What affiliation do the publications have? 5. From which countries do the investigations come? 6. What kind of documents are they? 7. In which areas is the subject investigated? 8. What is the predominant language? 9. Which are the most relevant works according to the number of citations? It was found that, worldwide, authors and entities from Spain are the most common on this theme, and most of the selected works are in Spanish. Since 2019 the number of publications related to the subject has increased, which may be because many universities adopted virtual education in an emergency manner during the pandemic; the most relevant related areas are the social sciences and computer science. The study constitutes the starting point for future research in other scientific databases with the inclusion of educational data mining and business intelligence. Keywords: Systematic literature review · Digital literacy · Mapping scientific · Bibliometrics

1 Introduction

Competence is understood as the dynamic combination of motivations, attitudes, knowledge, skills, abilities, emotions, values and other social and behavioral elements [14, 20, 26]; the competences are an integration of interrelated practical and cognitive skills, which allow responding to individual and collective demands in an analytical, communicative and intelligent manner [19].


Digital competence is the ability, attitude and awareness to use technological tools in an efficient, appropriate, safe, critical and responsible manner [2, 13, 30]; to identify, access, critically evaluate, manage and synthesize digital resources, build new knowledge, express oneself in multiple media and formats, and communicate in a regular and simple way [2, 22]. These characteristics allow 21st-century citizens to improve in all aspects of daily life [30] and must therefore be fully developed [16, 29, 38]. Teaching digital competence (TDC) is a compendium of skills, attitudes and knowledge that allows an educator to support the learning of their students through technology, by designing and transforming classroom practices [16]; thus, it is undeniable that teachers require these skills to face the complex demands of society effectively [28]; consequently, in a globalized world, the professional profile of higher education teachers is characterized by TDCs [8]. The research variables detailed below make it possible to establish the inclusion criteria for the scientific production found, as follows: 1. Year of publication of the works; 2. Sources of the publications; 3. Highlighted authors; 4. Affiliation of the publications; 5. Countries where the investigations are carried out; 6. Documentary typology; 7. Research areas; 8. Language of the production; 9. Most cited works. The study constitutes the starting point for future research on the subject in other scientific databases, with the inclusion of educational data mining and business intelligence.

2 Method

Based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) publication guide, a systematic literature review (SLR) supported by meta-analysis (MA) was carried out [6, 17, 35], since, at present, these designs are the ones that provide the highest level of evidence [17, 25] and also allow improving comprehensiveness [17]. The objective was to analyze the scientific production in the Scopus database on the digital competence of university teachers. To provide rigor to the systematic review process, research questions related to the meta-analysis of the data of the scientific productions located were established, as follows: 1. In which years is there a greater number of publications related to the topic? 2. To which journals do the works belong? 3. Who are the most outstanding authors? 4. What affiliation do the publications have? 5. From which countries do the investigations come? 6. What kind of documents are they? 7. In which areas is the subject investigated? 8. What is the predominant language? 9. Which are the most relevant works according to the number of citations? A sequential and rigorous revision was developed. First, the research was delimited through key concepts, using ERIC thesauri in order to work with a controlled vocabulary of descriptors [3, 12, 23, 24, 34]. To define the context of the research, the following descriptors were selected: "digital competences" as the fundamental aspect, "higher education", "university teachers" and "teaching"; the search was carried out in English with the help of the operator "and", and the procedure is described in Table 1. In selecting the data, inclusion criteria were established for each variable (Table 2) [7, 11, 17, 21, 25, 30].


Table 1. Search procedure in the Scopus database

Descriptors | Digital competences; Higher education; University teachers; Teaching
Operator | And
Search | "Digital competences" and "higher education" and "university teachers" and teaching

Table 2. Inclusion criteria of the scientific production for the meta-analysis.

No. | Variable | Inclusion criteria
1 | Year of publication | 2012–August 2022
2 | Origin | Selected when three or more investigations converge in the same resource (journal, book, etc.)
3 | Authors | Authors must have at least three references on the topic
4 | Filiation | Affiliations must have at least three references on the topic
5 | Countries where it originates | Countries must have at least three references on the topic
6 | Documentary typology | All references are taken into account
7 | Investigation area | Areas must have at least three references on the topic
8 | Languages | All references are taken into account
9 | Keywords | Keywords must have at least six references on the topic
10 | Most cited publications | Publications must have at least sixteen citations
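As a rough illustration of how the thresholds in Table 2 can be applied to a Scopus export, the following pandas sketch filters sources, authors and citations; the column names and the ";" author separator follow a typical Scopus CSV export and are assumptions of this sketch, not something stated in the paper.

```python
# Illustrative sketch of the inclusion thresholds of Table 2 applied to a Scopus export.
import pandas as pd

def apply_inclusion_criteria(df: pd.DataFrame) -> dict:
    df = df[(df["Year"] >= 2012) & (df["Year"] <= 2022)]           # variable 1: publication window
    source_counts = df["Source title"].value_counts()
    kept_sources = source_counts[source_counts >= 3]               # variable 2: >= 3 works per source
    author_counts = df["Authors"].str.split(";").explode().str.strip().value_counts()
    kept_authors = author_counts[author_counts >= 3]               # variable 3: >= 3 works per author
    most_cited = df[df["Cited by"] >= 16]                          # variable 10: >= 16 citations
    return {"sources": kept_sources, "authors": kept_authors, "most_cited": most_cited}
```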

3 Results

The extraction, based on the defined search criteria, yielded 109 references in the Scopus database for the meta-analysis process. The results are presented as follows.

3.1 Analysis of the Scientific Productions Extracted According to the Year of Publication

Based on the previously defined inclusion criteria, the search was carried out from 2012 through October 2021, that is, the ten years prior to the investigation. The results obtained are presented in Table 3.

Table 3. Contribution of scientific productions per year.

Year of publication | Number of scientific productions | % of 109
2021 | 34 | 31.20%
2020 | 35 | 32.11%
2019 | 16 | 14.68%
2018 | 4 | 3.67%
2017 | 7 | 6.42%
2016 | 1 | 0.92%
2015 | 3 | 2.75%
2014 | 3 | 2.75%
2013 | 3 | 2.75%
2012 | 3 | 2.75%
Total | 109 | 100.00%
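The per-variable tabulations behind Tables 3–10 (counts and share of the 109 references) can be reproduced with a simple value count, as sketched below; the Scopus-export column names used in the example calls are assumptions of this sketch.

```python
# Sketch of the tabulation used for the meta-analysis tables (counts and % of total).
import pandas as pd

def tabulate(df: pd.DataFrame, column: str) -> pd.DataFrame:
    counts = df[column].value_counts()
    return pd.DataFrame({"Number of scientific productions": counts,
                         "% of total": (100 * counts / len(df)).round(2)})

# Example calls mirroring the tables in this section (assumed column names):
# tabulate(scopus_df, "Year"), tabulate(scopus_df, "Source title"),
# tabulate(scopus_df, "Document Type"), tabulate(scopus_df, "Language of Original Document")
```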

3.2 Analysis of the Scientific Productions Extracted According to Their Origin

Table 4 shows the titles of the sources that host the scientific publications on the subject, based on the inclusion criteria described above.

Table 4. Contribution of the scientific productions per publication title

Source | Number of scientific productions | % of 109
ACM International Conference Proceeding Series | 6 | 5.50%
Sustainability Switzerland | 4 | 3.67%
Advances in Intelligent Systems and Computing | 3 | 2.75%
Campus Virtuales | 3 | 2.75%
Ceur Workshop Proceedings | 3 | 2.75%
Communications in Computer and Information Science | 3 | 2.75%
International Journal of Learning Teaching and Educational Research | 3 | 2.75%
Other | 84 | 77.06%


3.3 Analysis of the Scientific Productions Extracted According to Their Authors

In order to identify the most relevant authors investigating TDC in higher education (Table 5), a minimum of three scientific productions was established as the inclusion criterion.

Table 5. Contribution of scientific production per author.

Authors | Number of scientific productions | % of 109
Cabero-Almenara, J. | 6 | 5.50%
Palacios-Rodríguez, A. | 6 | 5.50%
Barroso-Osuna, J. | 5 | 4.59%
Gutiérrez-Castillo, J.J. | 4 | 3.67%
Guillén-Gámez, F.D. | 3 | 2.75%
Gisbert-Cervera, Mercè | 3 | 2.75%
Zhao, Y. | 3 | 2.75%
Others | 79 | 72.48%
Total | 109 | 100.00%

3.4 Analysis of the Scientific Productions Extracted Based on Their Filiation

The quantitative relationship between the authors and the filiation they grant to the entities to which they belong worldwide is shown in Table 6.

Table 6. Contribution of the scientific production per filiation

Filiation | Number of scientific productions | % of 109
University of Seville | 10 | 9.17%
University of Salamanca | 7 | 6.42%
University of Cordoba | 4 | 3.67%
Antonio de Nebrija University | 4 | 3.67%
University of Jaen | 3 | 2.75%
Rovira i Virgili University | 3 | 2.75%
Complutense University of Madrid | 3 | 2.75%
National University of Distance Education | 3 | 2.75%
International University of La Rioja | 3 | 2.75%
National University of Chimborazo | 3 | 2.75%
Other | 66 | 60.55%
Total | 109 | 100.00%

3.5 Analysis of the Scientific Productions Extracted According to the Country Where It Originates

In relation to the inclusion criteria, the concentration of the number of scientific productions by country, according to the topic of this research, is presented in Table 7.

Table 7. Contribution of the scientific productions according to the origin.

Country | Number of scientific productions | % of 109
Spain | 49 | 44.95%
Russia | 9 | 8.26%
Ecuador | 6 | 5.50%
Colombia | 4 | 3.67%
Germany | 4 | 3.67%
Peru | 4 | 3.67%
Portugal | 4 | 3.67%
Sweden | 4 | 3.67%
China | 3 | 2.75%
Mexico | 3 | 2.75%
Norway | 3 | 2.75%
Ukraine | 3 | 2.75%
Other | 32 | 29.36%

3.6 Analysis of the Scientific Productions Extracted According to the Type of Document

All types of existing documents have been considered in the search (Table 8).


Table 8. Contribution of the scientific productions according to the type of document

Type of document | Number of scientific productions | % of 109
Article | 70 | 64.22%
Conference paper | 29 | 26.61%
Conference review | 6 | 5.50%
Review | 4 | 3.67%
Book chapter | 3 | 2.75%

3.7 Analysis of the Scientific Productions Extracted According to the Research Area

Table 9 shows the research areas obtained according to the inclusion criteria.

Table 9. Contribution of scientific productions according to the investigation area.

Investigation area | Number of scientific productions | % of 109
Social Science | 75 | 68.81%
Computer Science | 52 | 47.71%
Engineering | 17 | 15.60%
Medicine and Psychology | 10 | 9.98%
Arts and Humanities | 7 | 6.42%
Environment | 7 | 6.42%
Mathematics | 6 | 5.50%
Business, management and accounting | 5 | 4.59%
Energy | 5 | 4.59%
Decision Science | 4 | 3.67%

3.8 Analysis of the Scientific Productions Extracted According to the Language of the Publication

All scientific production has been considered according to its language of publication for the analysis (Table 10).


Table 10. Contribution of the scientific production according to the language of publication.

Language | Number of scientific productions | % of 109
Spanish | 88 | 80.73%
English | 23 | 21.10%
German | 1 | 0.92%

3.9 Analysis of the Scientific Productions Extracted Based on the Keywords of the Publication

The keywords related to teaching digital competence at the university level, according to the established search and inclusion criteria, are shown in Fig. 1, based on the semantic analysis performed with the Bibliometrix package in RStudio.

Fig. 1. Keywords most used by the authors in the scientific production located through the word cloud scheme.
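The authors produced Fig. 1 with the Bibliometrix package in RStudio; purely as an illustration, the sketch below shows how a similar keyword cloud could be generated in Python with the third-party wordcloud package, assuming a Scopus-style column named "Author Keywords".

```python
# Illustrative keyword-cloud sketch (not the authors' Bibliometrix workflow).
import pandas as pd
from wordcloud import WordCloud
import matplotlib.pyplot as plt

def keyword_cloud(df: pd.DataFrame, column: str = "Author Keywords") -> None:
    keywords = df[column].dropna().str.cat(sep="; ")          # join all keyword strings
    cloud = WordCloud(width=800, height=400, background_color="white").generate(keywords)
    plt.imshow(cloud, interpolation="bilinear")
    plt.axis("off")
    plt.show()
```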

3.10 Analysis of the Scientific Productions Extracted Based on the Citations

The most relevant scientific productions, in reference to the digital competences of university teachers, are specified in Table 11.

Table 11. Contribution of the scientific productions according to the citations.

Title | Year | Source | Number of citations
Digital transformation in German higher education: student and teacher perceptions and usage of digital media | [5] | International Journal of Educational Technology in Higher Education | 79
Assessing Teacher Digital Competence: The Construction of an Instrument for Measuring the Knowledge of Pre-Service Teachers | [18] | Journal of New Approaches in Educational Research | 41
Teacher Educators' Use of Digital Tools and Needs for Digital Competence in Higher Education | [1] | Journal of Digital Learning in Teacher Education | 31
Collaborative construction of a project as a methodology for acquiring digital competences | [27] | Comunicar | 27
Fostering teacher's digital competence at university: The perception of students and teachers | [10] | Revista de Investigación Educativa | 24
Digital literacy and higher education during COVID-19 lockdown: Spain, Italy, and Ecuador | [33] | Publications | 21
Surveying digital competencies of university students and professors in Ukraine for fully online collaborative learning | [4] | Technology, Pedagogy and Education | 20
Informational literacy and digital competence in teacher education students | [31] | Profesorado | 20
Design and validation of an instrument for evaluation of digital competence of University student | [15] | Espacios | 18
University teachers' training: The Digital Competence, Formación del profesorado Universitario en la Competencia Digital | [32] | Pixel-Bit, Revista de Medios y Educación | 16


4 Discussion

According to the metadata on year of publication, the average is almost 11 productions per year. It can be seen that 77.98% of the production obtained from the search is concentrated in the years 2019, 2020 and 2021, with a total of 85 scientific productions; 2020 is the most representative year, with 32.11% of the total (35 works), while the least relevant year is 2016, with 0.92% (1 scientific production). Regarding the variable related to the origin of the scientific production, 22.94% of the studies (25 productions) are concentrated in 7 publication titles: ACM International Conference Proceeding Series, Sustainability Switzerland, Advances in Intelligent Systems and Computing, Campus Virtuales, CEUR Workshop Proceedings, Communications in Computer and Information Science and International Journal of Learning Teaching and Educational Research. Indexed journals present the highest concentration, equivalent to 14.68% or 16 scientific productions. Productions from important conferences and congresses account for 8.26%, that is, 9 scientific products; the ACM International Conference Proceeding Series is the conference that provides the most scientific products in the search, with 5.50% of the total (6 relevant works), even more than any single indexed journal. It can also be estimated that 27.52% of the articles are concentrated in 7 authors: Cabero-Almenara, J.; Palacios-Rodríguez, A.; Barroso-Osuna, J.; Gutiérrez-Castillo, J.J.; Guillén-Gámez, F.D.; Gisbert-Cervera, Mercè; and Zhao, Y. The first two authors represent 11% of the total with 6 works each, and the first five carry out their research activities in Spanish universities. In the Ibero-American context, 9 Spanish universities and one Ecuadorian university stand out as meeting the inclusion criteria specified for the filiation variable: the University of Seville, the University of Salamanca, the University of Córdoba, the Antonio de Nebrija University, the University of Jaén, the Rovira i Virgili University, the Complutense University of Madrid, the National University of Distance Education, the International University of La Rioja and the National University of Chimborazo; these correspond to 36.70% of the sponsoring entities, or 43 productions of the total. Spanish universities show a marked interest in the object of study, a fact consistent with the origin of the authors on whom the production of scientific articles described above is concentrated (Spain); notably, outside the European context, the National University of Chimborazo has generated research related to the subject in question. Of all the localized products, 80.73% of the works (88) are in Spanish; as expected, the publication titles are related to the nationality of the authors and to the country of origin of the publications. The types of documents found in the search comprise articles, conference papers, conference reviews, reviews and book chapters; 64.22% of the production found corresponds to the first group (70 articles), while the smallest group corresponds to book chapters, with 3 works equivalent to 2.75% of the total. A considerable portion of the works are presented as conference documents, which denotes the currency of the proposed research topic.



The scientific productions resulting from this search are concentrated in 17 research areas, fundamentally the area of social sciences with 75 publications (68.81% of the total), computer science with 52 publications (47.71% of the search) and engineering with 17 works (15.60% of the production), among others. The scientific production with the greatest number of bibliographic references is affiliated with the University of the Andes in Colombia and the Universitat Oberta de Catalunya in Spain, was written by Bond et al. (2018), and is entitled "Digital transformation in German higher education: student and teacher perceptions and usage of digital media"; it is published in English in the International Journal of Educational Technology in Higher Education, has 79 citations, and addresses educational technological innovations at the University of Oldenburg in Germany, where two data sets on the use and perceptions of students and teachers regarding digital tools were examined. The predominant language of the most cited works is English. With respect to the variable related to country of origin, 96 publications (88.07% of the total) are concentrated in twelve countries: Spain, Russia, Ecuador, Colombia, Germany, Peru, Portugal, Sweden, China, Mexico, Norway and Ukraine; Spain dominates the study of the subject with 49 publications, which represent 44.95% of all the works analyzed. This study constitutes the starting point for future research that undertakes similar efforts in other scientific databases, with the inclusion of educational data mining and business intelligence [9, 36, 37].

5 Conclusions

The study of TDC has become an emerging topic that arouses great interest in the scientific community, a fact related to the need for online education at all levels in educational institutions throughout the world [9, 10]. An atypical deviation can be seen in the growth of publications from 2019 onwards; when the central tendency is examined, the calculated median is close to 4 productions per year, which shows that the variation from that year on is due to the appearance of an important phenomenon (the pandemic). The keywords used by the authors in each scientific production corroborate the correct choice of the descriptors used in the present investigation: the keyword with the greatest number of appearances is "higher education", followed by "digital competence" and "teaching"; keywords such as "e-learning", "education", "student", "teacher training" and "ICT" also appear, together with the descriptor "university teacher" and the keyword "Spain", data that again confirm that researchers and entities from this country have concentrated great efforts on studying the TDC of higher education teachers. The study of this subject stands out in the European and Latin American context, with scientific productions affiliated to Spanish, Russian, German, Portuguese, Swedish, Ecuadorian, Colombian, Peruvian and Mexican universities, among others. It is also evident that there is a correlation between the authors'



nationalities, the countries of origin and the affiliations of the works, and the language of the publications.

References
1. Amhag, L., et al.: Teacher educators' use of digital tools and needs for digital competence in higher education. J. Digit. Learn. Teach. Educ. 35(4), 203–220 (2019). https://doi.org/10.1080/21532974.2019.1646169
2. Bawden, D.: Digital literacies. ELT J. 66(1), 108–112 (2012). https://doi.org/10.1093/elt/ccr077
3. Blanco, S., Martín Álvarez, R.: Tesauros: ¡menuda palabrota! No todo es clínica. Actual. Med. Fam. 15(3), 509–515 (2019)
4. Blayone, T.J.B., et al.: Surveying digital competencies of university students and professors in Ukraine for fully online collaborative learning. Technol. Pedagog. Educ. 27(3), 279–296 (2018). https://doi.org/10.1080/1475939X.2017.1391871
5. Bond, M., Marín, V.I., Dolch, C., Bedenlier, S., Zawacki-Richter, O.: Digital transformation in German higher education: student and teacher perceptions and usage of digital media. Int. J. Educ. Technol. High. Educ. 15(1), 1–20 (2018). https://doi.org/10.1186/s41239-018-0130-1
6. Cardona Arias, J.A., Higuita Gutiérrez, L.F., Ríos Osorio, L.A.: Ejecución de revisiones sistemáticas y metaanálisis / Performance of systematic reviews and meta-analysis. Ediciones Univ. Coop. Colomb. 2016, 25–40 (2016)
7. Cardona Arias, J.A., et al.: Revisiones sistemáticas de la literatura científica: la investigación teórica como principio para el desarrollo de la ciencia básica y aplicada (2016). https://doi.org/10.16925/9789587600377
8. Cateriano-Chavez, T.J., et al.: Digital skills, methodology and evaluation in teacher trainers. Campus Virtuales 10(1), 153–162 (2021)
9. Cisneros-Barahona, A., et al.: Digital competence of university teachers. An overview of the state of the art. Hum. Rev. Int. Humanit. Rev. 11 (2022). https://doi.org/10.37467/revhuman.v11.4355
10. Cisneros-Barahona, A., Marqués Molías, L., Samaniego Erazo, G., Uvidia-Fassler, M., de la Cruz-Fernández, G., Castro-Ortiz, W.: Teaching digital competence in higher education. A comprehensive scientific mapping analysis with RStudio. In: Abad, K., Berrezueta, S. (eds.) DSICT 2022. CCIS, vol. 1647, pp. 14–31. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-18347-8_2
11. Esteve-Mon, F.M., et al.: Digital teaching competence of university teachers: a systematic review of the literature. IEEE Rev. Iberoam. Tecnol. Aprendiz. 15(4), 399–406 (2020). https://doi.org/10.1109/RITA.2020.3033225
12. Fernández Quijada, D.: El uso de tesauros para el análisis temático de la producción científica: apuntes metodológicos desde una experiencia práctica. BiD Textos Univ. Bibliotecon. Doc. 29(29), 9 (2012). https://doi.org/10.1344/BiD2012.29.15
13. Ferrari, A.: Digital competence in practice: an analysis of frameworks. Jt. Res. Cent. Eur. Comm. 91 (2013). https://doi.org/10.2791/82116
14. González, J., Wagenaar, R.: Tuning educational structures in Europe (2003)
15. Gutiérrez-Castillo, J.J., et al.: Design and validation of an instrument for evaluation of digital competence of university students. Espacios 38(10), 16 (2017)
16. Hall, R., et al.: Defining a self-evaluation digital literacy framework for secondary educators: the DigiLit Leicester project. Res. Learn. Technol. 22(1063519), 1–17 (2014)
17. Hutton, B., et al.: La extensión de la declaración PRISMA para revisiones sistemáticas que incorporan metaanálisis en red: PRISMA-NMA. Med. Clin. (Barc.) 147(6), 262–266 (2016). https://doi.org/10.1016/j.medcli.2016.02.025



18. Lázaro-Cantabrana, J.L., et al.: Assessing teacher digital competence: the construction of an instrument for measuring the knowledge of pre-service teachers. J. New Approaches Educ. Res. 8(1), 73–78 (2019). https://doi.org/10.7821/naer.2019.1.370
19. Arrufat, M.J.G., Sánchez, V.G., Santiuste, E.G.: El futuro docente ante las tecnologías de la información y comunicación para enseñar. Edutec. Rev. Electrónica Tecnol. Educ. (2010)
20. de Miguel Díaz, M.: Modalidades de enseñanza centradas en el desarrollo de competencias (2005)
21. Mark, C., et al.: Systematic review and meta-analysis methodology. Blood 116, 3140–3146 (2010)
22. Martin, A.: DigEuLit – a European framework for digital literacy: a progress report. J. eLit. 2, 130–136 (2005). https://doi.org/10.18261/ISSN1891-943X-2006-02-06
23. Martínez Tamayo, A.M., Mendes, P.V.: Diseño y desarrollo de tesauros (2015)
24. Montoya, C.A.: Descripción del tesauro del Sistema de información de la literatura colombiana. Redalyc 32, 123–146 (2010)
25. Moraga C., J., Cartes-Velásquez, R.: Pautas de chequeo, parte II: QUOROM y PRISMA. Rev. Chil. Cirugía 67(3), 325–330 (2015). https://doi.org/10.4067/s0718-40262015000300015
26. OCDE: La definición y selección de competencias clave (2005)
27. Pérez-Mateo-Subirà, M., et al.: Collaborative construction of a project as a methodology for acquiring digital competences. Comunicar 21(42), 15–24 (2014). https://doi.org/10.3916/C42-2014-01
28. Pérez, A.: Alfabetización y competencias digitales en el marco de la evaluación educativa: estudio en docentes y alumnos de Educación Primaria en Castilla y León. Universidad de Salamanca (2015)
29. Rangel, A.: Competencias docentes digitales: propuesta de un perfil / Digital teaching skills: a profile. Píxel-Bit Rev. Medios Educ. 46 (2015). ISSN 1133-8482, e-ISSN 2171-7966
30. Rodríguez-García, A.-M., et al.: Competencia digital, educación superior y formación del profesorado: un estudio de meta-análisis en la Web of Science. Pixel-Bit Rev. Medios Educ. 54, 65–82 (2019). https://doi.org/10.12795/pixelbit.2019.i54.04
31. Rodríguez, M.D.M., et al.: Informational literacy and digital competence in teacher education students. Profesorado 22(3), 253–270 (2018). https://doi.org/10.30827/profesorado.v22i3.8001
32. Ruiz Cabezas, A., et al.: University teachers' training: the digital competence. Pixel-Bit Rev. Medios Educ. 58, 181–215 (2020). https://doi.org/10.12795/pixelbit.74676
33. Tejedor, S., et al.: Digital literacy and higher education during COVID-19 lockdown: Spain, Italy, and Ecuador. Publications 8(4), 1–17 (2020). https://doi.org/10.3390/publications8040048
34. Torres, Á.: Thesaurus: palabra clave. https://www.revistacomunicar.com/wp/escuela-de-autores/thesaurus-palabra-clave/
35. Urrutia, G., Bonfill, X.: PRISMA declaration: a proposal to improve the publication of systematic reviews and meta-analyses (2010). http://es.cochrane.org/sites/es.cochrane.org/files/public/uploads/PRISMA_Spanish.pdf
36. Uvidia Fassler, M.I., Cisneros Barahona, A.S., Dumancela Nina, G.J., Samaniego Erazo, G.N., Villacrés Cevallos, E.P.: Application of knowledge discovery in databases analysis to predict the academic performance of university students based on their admissions test. In: Botto-Tobar, M., León-Acurio, J., Díaz Cadena, A., Montiel Díaz, P. (eds.) ICAETT 2019. AISC, vol. 1066, pp. 485–497. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-32022-5_45



37. Uvidia Fassler, M.I., Cisneros Barahona, A.S., Ávila-Pesántez, D.F., Rodríguez Flores, I.E.: Moving towards a methodology employing knowledge discovery in databases to assist in decision making regarding academic placement and student admissions for universities. In: Botto-Tobar, M., Esparza-Cruz, N., León-Acurio, J., Crespo-Torres, N., Beltrán-Mora, M. (eds.) CITT 2017. CCIS, vol. 798, pp. 215–229. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-72727-1_16
38. Zepeda, H., et al.: Evaluación de la competencia digital en profesores de educación superior de la costa Norte de Jalisco. Rev. Iberoam. Prod. Acad. Gestión Educ. 6(11) (2019)

Authentic Evaluation for the Improvement of the Argumentative Written Essay in Virtual University Environments

Soratna Verónica Navas(B), Eduardo Jesús Garcés, María Cristina Pecho Rivera, Frank Luis Guanipa, and Felix Colina Ysea

Universidad Autónoma del Perú, Lima, Peru
{snavasgo,egarces,fguanipa,fcolina}@autonoma.edu.pe

Abstract. The present study sought to determine the incidence of authentic evaluation on the improvement of argumentative written composition in the virtual environments of a private university in Lima. It was carried out under a quantitative approach, at an explanatory level of research. The sample consisted of 40 first-year students from a private university in Lima. They worked on the construction of argumentative texts collaboratively and had to apply the writing stages proposed by Flower and Hayes (1996). Booklets were used, and students were given a controversial question from which the construction of their text would start. The results support the assertion that authentic evaluation enables students to become aware of the aspects on which they will be evaluated before carrying out the activities; in this case, the rubric proved a successful instrument for achieving this objective.

Keywords: Written composition · Authentic evaluation · Argumentative text · Virtual university context

1 Introduction

Generally, students entering the university context have only had assessment experiences based on midterm tests that require core knowledge, in which learning tends to be measured through a grade. For this reason, they lack the self-regulated knowledge that would allow them to be aware of their mistakes and of how much they are progressing. In relation to these premises, it has been possible to corroborate that students, especially in the first cycles, construct argumentative texts in which the following indicators predominate: lack of coherence and cohesion between the ideas presented; lack of adequate segmentation of main, secondary and tertiary ideas, which do not follow an adequate thematic progression; among other deficiencies. This may be due to the predominance of a traditional, content-centered teaching method and a biased evaluation based on answering tests that only require the literal reproduction of knowledge. From this point, the following question arises: What is the incidence of authentic evaluation on the improvement of the argumentative written essay in the virtual university environments of the Autonomous University of Peru?



1.1 Goals

To determine the incidence of authentic evaluation on the improvement of the argumentative written essay in the virtual environments of the academic writing courses of the Autonomous University of Peru.

2 Background of the Investigation

2.1 Authentic Assessment

Authentic assessment is conceived as a process or procedure for assessing how students have progressed in their learning. Castillo and Cabrerizo (2010) state that evaluation contributes both to the optimal development of learning and teaching processes and to the development and personal growth of students. The role of teachers is not only instruction or the transmission of knowledge but, above all, intellectual training in content and cognitive strategies, the achievement of skills, and education in the values and attitudes of students as citizens of our society.

It is worth remembering that evaluation previously consisted fundamentally in verifying the achievement of objectives, with a marked Taylorist bias; at present it has become a procedure of advice, regulation, reorientation and organization of learning, through which teaching and learning processes are improved. This new approach gives evaluation a marked pedagogical character that goes beyond the merely instructive and is fully installed in the educational and formative dimension, to the point of being one of the quality elements of an educational system (Castillo and Cabrerizo 2010).

Authentic assessment (AA) proposes new ways of conceiving assessment strategies and procedures that differ markedly from those that have prevailed in our educational systems to date. It is an evaluation focused mainly on processes rather than results, and interested in having the student assume responsibility for their own learning and, therefore, use the evaluation as a means of achieving the knowledge proposed in the different disciplines of formal education (Ahumada 2005). For many, evaluation may be an inescapable part of learning, but authentic evaluation offers greater objectivity, recognizes mistakes and learns from them, since it is based on a constructivist conception of teaching and learning (Condemarin and Medina 2000). This type of evaluation involves the student's participation in it and refers to assessing authentic learning situations that are significant for the student (Sanmartí 2007); it evaluates contextualized learning and relevant issues of real life; in short, it refers to the contextualization of learning.

In this order of ideas, Zabala and Arnau (2007) state that for authentic evaluation what matters is the learning process and its assessment, not the qualification that certifies it. Hence, this is an evaluation that takes advantage of errors as milestones of training and growth, creating from them a learning opportunity. Therefore, AA is considered a competency-based evaluation that favors autonomy in learning and metacognition; it is consistent with current pedagogical currents that empower the student, and it breaks the association that still exists between evaluation and the



grade, since it does not pursue the accumulation of core or repetitive knowledge but is oriented towards being, feeling, doing, creating, building and arguing; everything that allows the student to acquire competencies, skills and abilities that were not achieved in the traditional way (Acaso 2013; Muñoz 2014).

2.2 Authentic Assessment in Virtual Environments

Bogantes (2015) states that evaluative praxis in the university context has great implications for students' lives. If the purpose is to favor the integral formation of students, the evaluation must be conceived and designed so that it responds to this intention, considering conceptual, procedural and attitudinal contents in order to duly achieve its objectives, in this case the development of the argumentative written competence of first-cycle students at a private university in Lima. In relation to this, Chacín (2014) states that the evaluation of learning is conceived as "… the process of searching for evidence on knowledge, skills or attitudes (knowing, being and doing) of the participants, in order to carry out evaluation, when comparing them with a previously defined ideal…" (p. 139). Iturrioz and González (2015) are along this same line, pointing out that "the evaluation of learning is a systematic process of gathering information that allows the teacher to make a value judgment about the acquisitions or learning that their students achieve as a result of their participation in the teaching activities" (p. 133).

From this perspective, and in correspondence with the aforementioned authors, evaluation in university virtual training programs should be oriented by authentic evaluation, making use of more versatile instruments such as rubrics, portfolios, learning diaries and checklists. In this regard, Constantino and Llull (2010) propose, for online programs and courses in higher education, a migration from conventional evaluation systems typical of face-to-face training to online environments, despite the widespread intuition that such systems are not entirely consistent with the nature of teaching and learning in virtual environments.

Based on the above, in this study evaluation is presented as a continuous and permanent process that allows the verification of the progress and development of learning, both for teachers and for the participants themselves. In this way, the evaluative activity must be conceived as one more action of the didactic process and not as a practice of control and sanction of students. It is necessary to point out that, during the process of constructing argumentative written essays, not only is the formative process actively favored, but the quality of the elaborated text also improves; here lies the relevance of authentic evaluation in the different educational modalities: face-to-face, distance, blended or remote (Del Moral and Villalustre 2013, 2009).

2.3 Argument and Essay

Argumentation can be defined as the type of discourse that uses language to justify or refute a point of view on a topic. The aim is to persuade an audience in order to provoke or increase their adherence to the proposed arguments. To achieve this, the



arguments can be chosen strategically, so that they are pleasing or acceptable to the addressee (Lo Cascio 1998; Martínez 2005). According to Calsamiglia and Tusón (2002), this textual base has some fundamental characteristics. First, there is the object, which refers to any controversial, doubtful or problematic issue that admits different ways of dealing with it; second, the speaker, who has to express a way of seeing and interpreting reality, that is, a position taken; third, the character, which is based on the opposition of two or more positions, being controversial and markedly dialogical; finally, the objective, which aims to provoke adhesion and thus persuade an interlocutor, or even the public, regarding an idea.

Texts with an argumentative structure appear in many of the discursive activities characteristic of public and private social life. Likewise, there is a large number of genres in which this textual typology is immersed. In this sense, genres are understood as all those communicative events carried out by the members of a particular discursive community with related communicative purposes, aimed at a specifiable audience and characterized by a typical rhetorical structure, styles and content (Swales 2004). In this way, the opinion column, the article, the editorial and the essay, among others, can be considered argumentative genres.

The argumentative essay is defined by Zambrano, in Cano et al. (2021), as "a discursive genre of the argumentative textual typology, in which the author fulfills the fundamental objective of defending a thesis to achieve the adherence of the audience to it" (p. 24). These authors conceive it as a type of text structured and unified around a thesis that is supported in various ways, such as reasons or illustrations. In this order of ideas, the essay is approached as a type of argumentative text, because its statements are structured around the communicative need to support a thesis with arguments that establish a position and enter into dialogue with other positions. A text such as the argumentative essay gains cohesion by articulating its statements with linguistic resources, among which can be recognized, for example, connectors (because, although, therefore…) that show logical relationships between them. Its coherence depends on the relationship that its thesis and arguments establish with the aspects of reality and the situation to which they refer.

3 Method

The present investigation was framed within experimental research, specifically a quasi-experimental design, defined by Hernández, Fernández and Baptista (2006, p. 186) as one "where variables are deliberately manipulated, but the subjects are not randomly assigned to groups or paired; rather, these groups are already formed before the experiment: they are intact groups, since the way they were formed is independent of or apart from the experiment". This study was framed in this research design, since it seeks to determine the incidence of authentic evaluation on the improvement of argumentative written composition in the virtual environments of the Academic Writing course of the Autonomous University of Peru.

A pre-test, post-test and control group design was applied which, according to Hernández, Fernández and Baptista (2010), incorporates the administration of pre-tests



to the groups that make up the experiment. The subjects are randomly assigned to the groups, after which the pretest is applied simultaneously; one group receives the experimental treatment (the paraphrase) and the other does not (the control group); lastly, a posttest is administered to both groups. That is, before applying the treatment based on the use of the rubric, a test was applied to diagnose the quality of the argumentative essays written by the students of the Academic Writing course of the Autonomous University of Peru, and after the treatment another test was applied to both groups, with the purpose of determining the incidence of the treatment once the action had been applied. One of the most relevant reasons why this design was selected is that the aforementioned procedure (posttest) controls all sources of internal invalidation (Ysea 2009). This procedure was guided through the analysis of, and reflection on, the evaluative rubrics.

3.1 Population and Sample

The subjects under study were students belonging to the second cycle of the Academic Writing course of the Autonomous University of Peru, who make up a group of 1,192 students. This university is located in the district of Villa El Salvador. The procedure for selecting the sample was non-probabilistic (Hernández, Fernández and Baptista 2006), since the students all had similar characteristics (age, sex, academic and socioeconomic level). For the purposes of this study, the sample consisted of forty-five (45) students of the second cycle of the Autonomous University of Peru. These students range between 17 and 21 years of age, are in the stage of formal operations proposed by Piaget (1976), and have a medium-low socioeconomic status.

3.2 Techniques and Instrument

The technique used was production analysis, which involves assigning a task of any kind that ultimately leads to the procedural demonstration of knowledge, either through writing or orality, and in any area of knowledge (Cassany 2005). As a collection instrument, we worked with the textual record booklet, made up of a controversial question and an attached rubric, from which the free and spontaneous construction of the essay would start. To do this, students must apply the writing stages and consider the criteria and indicators expressed in the rubric. The rubric helps to outline consistent evaluation criteria; in turn, it allowed teachers and students alike to evaluate complex and objective criteria, as well as providing a framework for self-assessment, reflection and cooperative work. For Torres and Perera (2010), the rubric has a double value in educational practice. On the one hand, it is considered an evaluation tool in a context different from conventional evaluation; this tool not only evaluates the students' knowledge but also serves as a reflection tool that allows them to become aware of what they learn. On the other hand, it guides the student to complete the parts in which an activity is structured; this last function is precisely the one that supports the teacher's tutorial action (p. 148).



3.3 Data Processing

The booklets used for the development of the composition are accompanied by the evaluation rubric, which facilitates learning and guides students regarding what is expected of them, how to do a good job, what is most relevant and, therefore, where they should place their emphasis (Blanco 2011, 2008). It is also favorable for the teacher, as it facilitates the guiding task. It should be noted that analytical rubrics provide, in detail, the progress of the development of the students' written competence.

4 Results

4.1 Results of the Questionnaire Applied to Teachers

Fig. 1. Evaluation strategies

Figure 1 shows the percentages describing the strategies used by teachers to evaluate the writing process in virtual environments (strategies dimension). In this dimension, 51% of teachers state that they would use the rubric as an evaluation strategy, 20% of the sample opt for oral participation in the classroom, 14.5% for the self-assessment of the products, and 14.5% consider other options. Reflecting on these results, it can be said that most teachers consider the rubric one of the main evaluative strategies for the argumentative essay; however, the rubric is more than a strategy, it is the evaluation instrument, which reflects a confusion in the handling of terms on the part of the teachers, since speaking of evaluative strategies refers to the way in which the activity will be developed, as Ahumada (2005) puts it.



Fig. 2. Evaluation instruments

Figure 2 presents the results for the instruments indicator: 52% of the sample use the rubric for the evaluation of the written argumentative essay, 28% choose the checklist, 12% use the portfolio and the remaining 8% consider other options. These results reveal that most teachers consider the rubric the main instrument for evaluating the argumentative essay, which can become a strength for the proper development of evaluation in virtual contexts, since rubrics make students aware of what they must produce; students therefore become self-demanding, and constant progress and continuous improvement of the product to be generated is achieved (Anijovich and Cappelletti 2017).

Fig. 3. Evaluation criteria

Figure 3 presents the results for the criteria indicator: 50% consider that the development of arguments should be considered in the evaluation, 25% the elements that make up each paragraph, 15% the grammatical aspects



and the remaining 10% consider other options. Based on the results obtained in this dimension, it can be said that half of the teachers interviewed place greater emphasis on the exposition of arguments in the essay, thus overlooking elements of great relevance such as the contextualization of the topic and the approach to the thesis and the problem, among other aspects that affect the foundation of the essay. It is necessary to point out that teachers consider the basic elements that make up the structure of the argumentative essay, as illustrated by Ordoñez (2001).

4.2 Results of the Rubric Used with Students

Figure 4 shows the percentages relevant to the diagnosis of the quality of the argumentative written essays produced by the students of the Academic Writing course of the Autonomous University of Peru. During the diagnostic phase, students were asked to construct an argumentative essay considering all the parts that make it up. In this phase they were not presented with the rubric that would be used for its evaluation, since the aim was to appreciate their spontaneous performance during the composition of argumentative texts; it should be clarified, however, that by the beginning of this phase the students had already received a lecture-style explanation and practical exercise regarding what the argumentative essay is, its parts (introduction, body and conclusions), and argumentative strategies.

Fig. 4. Quality of argumentative texts before treatment

In the introduction of the essay, it can be seen that only 40% of the students managed to contextualize the topic addressed. Likewise, 60% raised a clear research objective, while 50% assumed a position or thesis. Regarding the argumentative body, 40% of the sample justified their position with valid arguments and only 20% of the students used some argumentative strategy to explain or extend their arguments. This is reflected in the fact that only 30% of the texts show a logical hierarchy of ideas. In general, the essays did not comply with the number of paragraphs requested; they were limited to presenting points of view without establishing a hierarchical order and without giving them solidity, and they did not present inter- and intra-paragraph coherence, nor a good use of logical connectors and referents. Finally, in the conclusion, 45% of the sample reaffirmed their position on the subject and 35% appropriately synthesized the arguments presented in the body of the essay.



On the other hand, 70% of the texts analyzed presented flaws related to formal aspects of the language, including punctuation marks, spelling errors and a lack of relationship between cohesive elements, which negatively influences the construction of sentences. Taking these results into account, the position assumed by Stobart (2010) and Cano (2015) is ratified when they say that the elements and instruments linked to evaluation have changed, since they have gone from being something reserved to something that should be made known, transparent and disseminated. Therefore, the deficiencies presented are probably linked to written competence, but students could become aware of them and overcome them with the proper orientation and reflection provided by a rubric that guides them through the process. This is in line with Brown (2015), who states that evaluation must be clearly articulated with the expected learning outcomes, since this helps students to advance in the development of their skills and knowledge of the subject. In the development phase, the rubric with which the essays would be evaluated was shared with the students and, after the review, they were given group feedback on the results to make them aware of the positive and negative aspects. Taking this into consideration, the results after the application of the treatment are presented below.

Fig. 5. Quality of argumentative texts after treatment

After the treatment phase with the rubric, a significant improvement was seen in the construction of the essays. Figure 5 shows that 70% of the students managed to contextualize the theme, posing a problematic core by writing possible causes and consequences; 90% of the sample presented a research objective and 85% wrote their thesis or position. Continuing with the body of the essay, the presence of arguments was evidenced in 85% of the texts and the use of some argumentative strategy was observed in 75%; in this sense, 70% of the analyzed texts show hierarchy in the ideas raised. As for the conclusions, 90% of the writers reaffirmed their thesis and 85% synthesized the arguments described above. It is important to highlight, however, that marked failures or errors persisted in 25% of the texts. In summary, the students applied some of the requested argumentative strategies and provided a better foundation for their ideas, referring to the sources consulted to give them greater validity, which reveals that they considered the indicators required by the rubric (macrostructure, thematic progression, syntax, vocabulary and norms in general). However, problems of coherence and cohesion between their ideas are still present, as well as certain shortcomings of a syntactic and normative order.



These results reflect a significant improvement in the composition of the texts: although there are still deficiencies in the quality of the argumentative essays, an improvement is observed in the dimensions of introduction, argumentative body and conclusions. These results reaffirm Valverde (2014), who states that authentic evaluation is evaluation for learning and is oriented towards the assessment of competencies, in which the student plays an active role; likewise, he states that, in a virtual context, the rubric is an advantageous instrument for carrying out an evaluation process appropriate to current educational needs and demands. Regarding the 45% of the sample that still does not reflect progress, this may be due to a lack of organization in the team: they did not use sources, they did not check the rubric and they did not follow the planning phase of their ideas during the textualization of the essay. In the pair-work phase, it was observed that 75% of the students significantly improved the quality of the written argumentative essay. This may be due to many reasons: the constant and continuous practice of written argumentation, following the criteria required in the rubric, reflection on the observations provided, and the constant socialization of the dimensions and indicators of the evaluation instrument. In this regard, Ruiz (2017) states that "rubrics are especially beneficial for promoting objectivity, self-assessment and unity of action" (p. 107). This shows that one of the advantages of authentic assessment is that it expresses to students the mastery they are expected to gain from the learning being assessed, and this can be done through the use of rubrics. In addition, this type of evaluation allows students to recognize error and stimulates them to overcome it, because it requires them to build knowledge, making use of higher-order cognitive skills, and allows them to generate problem-solving, knowledge-application and decision-making processes that correspond to the development of cognitive and metacognitive skills, and not only to the reproduction of declarative or conceptual knowledge (Villarroel and Bruna 2019).

5 Results

During authentic assessment, students become aware of their mistakes and can regulate their learning processes to improve weak aspects and more effectively strengthen those they master (Valverde 2014). Although traditional evaluation is faster, and can even be automated, the benefit of saving time in the construction of a rapid evaluation instrument, such as exams, and in their correction vanishes when the learning obtained by the students is quickly forgotten, which results in having to reinforce the same contents in higher courses, spending time not set aside for this, as explained by Cano (2015). These statements are reflected in the results, which indicate that most teachers prefer to apply evaluation rubrics, understanding that the evaluation process goes beyond simply grading and must guide students along the path they are following in their learning process (Villarroel and Bruna 2019). Thus, it has been seen that the traditional type of evaluation turns students into passive learners, promoting surface learning over the deep and active learning that authentic assessment promotes. A process of deep and active learning is related to the number of connections of concepts and ideas, the level of reasoning, and the use of self-monitoring processes that students need to deploy to give an answer (Anijovich and Cappelletti 2017).



That is, students can see how their writing process changes when they receive effective feedback through authentic assessment, as reflected by Valverde (2014) and as can be seen in the results after treatment. Parentelli (2020) reinforces this by saying that introducing changes in the way of evaluating implicitly and indirectly impacts teaching. When teachers implement assessment for learning, motivating the construction of knowledge, they also begin to shift their focus to deep and active learning, using motivating practices in which students take a more active role.

6 Conclusions

In the first place, the results of this study allow us to assert that authentic evaluation enables students to become aware of the aspects on which they are going to be evaluated before carrying out the activities; in this case, the rubric represents a successful instrument for achieving this goal. In addition, this type of tool helps to improve the student's learning process, as long as students are offered learning strategies that allow them to move to the next level and progress in the development of their written skills; for this, the application of the planning, textualization, review and editing stages proposed by Flower and Hayes (1996) is key, because they allow students to self-control, self-direct and self-evaluate their written product.

Second, Anijovich's position, in Parentelli (2020), is reaffirmed when she refers to the fact that evaluation is the most vulnerable aspect of teachers' work, since it is an inseparable part of the teaching and learning processes. For this reason, it should be understood as an opportunity for students to put their knowledge into play, make their achievements visible, and learn to recognize their weaknesses and strengths, in addition to the "classic" function of approving, promoting and certifying (Anijovich and Cappelletti 2017). When students have control over what is evaluated and are aware of what they must produce, they demand more of themselves, and with this, constant progress and continuous improvement of the product to be generated is achieved.

To conclude, it can be stated that authentic assessment offers students opportunities to learn through the assessment process itself, when it is designed in an articulated and clear manner with respect to the expected learning outcomes. In addition, it encourages student participation and helps them progress in the development of their skills and knowledge of the subject. Although the creation and management of authentic assessment tasks can require a great deal of time and resources, this research confirms what was stated by Martín et al. (2015), who argue that the benefits in terms of improved learning far outweigh those costs.

References
Acaso, M.: Reduvolution. Make the revolution in education. Paidós, Barcelona (2013)
Ahumada, A.: Authentic assessment: a system for obtaining evidence and learning experiences. Redalyc 45, 11–24 (2005). https://www.redalyc.org/pdf/3333/333329100002.pdf
Albornoz, J., Chacano, D., Vara, M.: Effects on learning and satisfaction of using the flipped classroom and authentic assessment in college mathematics for first-year engineering students. Education 30(58), 23–24 (2021). https://doi.org/10.18800/education.202101.010



Alarcon, L., Balderrama, J., Navarro, R.: Content validity by expert judgment: proposal of a virtual tool. Apertura 9(2), 2–3 (2017). https://doi.org/10.32870/ap.v9n2.993
De Amaro, C.H., Brioli, R.C., Garcia, I.: Competence of the university teacher for teaching in virtual environments. Chapter 6 in: Theory and Practice of Virtual Learning Communities, pp. 151–184. Ediciones del CDCH-UCV, Caracas, Venezuela (2013). ISBN 987-980-00-2746
Barrientos-Hernán, E.J., López-Pastor, V.M., Pérez-Brunicardi, D.: Authentic assessment and learning-oriented assessment in higher education. A review in international databases. Ibero-Am. J. Educ. Eval. 13(2), 67–83 (2020). https://doi.org/10.15366/riee2020.13.2.00
Bogantes Pessoa, J.: Strategies for evaluation in distance education: an analysis of the options used in the basic general education program. Educ. Innov. Mag. 17(22) (2015). Distance State University (UNED), Costa Rica
Cano, E., Cuba, E., Puma, G., Venegas, R.: Proposal for a rhetorical-discursive model of the argumentative essay genre. New Pac. Mag. 75(1), 23–24 (2021). https://doi.org/10.4067/S0719-51762021000200128
Calsamiglia, H., Tusón, A.: The things of saying. Editorial Ariel, Barcelona (2002)
Cassany, D., et al.: Teach Language. Editorial GRAO, Barcelona, Spain (1998)
Castillo, S., Cabrerizo, J.: Educational Assessment of Learning and Skills. Pearson Educación, Madrid (2010)
Chao Chao, K.-W., Durand, M.-J.: The use of the rubric as an evaluation and feedback tool for written expression in French. Res. News Educ. 19(3) (2019). https://doi.org/10.15517/aie.v19i3.38638
Chacín, R.: How to evaluate in the virtual classroom? In: Amaro, R., Martínez, A. (eds.) Design and Virtual Tutoring, pp. 137–158. Ediciones del Fondo Editorial de la FHE de la UCV, Caracas, Venezuela (2014)
Constantino, G., Llull, L.: Evaluation and quality in online programs and courses in higher education. Philosophical and Cultural Anthropology Research Center (CIAFIC), VIII(1/2). National Council for Scientific and Technical Research (CONICET), Argentina (2010)
Condemarin, M., Medina, S.: Authentic Assessment of Learning. Andrés Bello, Chile (2000)
Del Moral, M., Villalustre, L.: E-assessment in virtual environments: tools and strategies. Paper presented at the IV International Conference on Virtual Campuses, Palma, Spain (2013)
Diaz Barriga Arceo, F.: Evaluation of competencies in higher education: experiences in the Mexican context. Ibero-Am. J. Educ. Eval. 12(2), 49 (2019). https://doi.org/10.15366/riee2019.12.2.003
Iturrioz, G., González, I.: Evaluate in virtuality. Univ. Signs 2(2), 133–144 (2015)
Lo Cascio, V.: Grammar of Argument. Alianza Editorial, Madrid (1998). https://p3.usal.edu.ar/index.php/signos/article/download/3212/3958
Martinez, M.: The construction of the argumentative process in the speech. UNESCO Chair, Cali (2005)
Munoz, S.: Competence teaching in primary school: a study focused on the area of knowledge of the environment. Master's thesis, Camilo José Cela University, Madrid (2014)
Ordonez, C.: Instructions for writing an argumentative essay. Unpublished manuscript, Master of Education, Universidad de los Andes, Colombia (2001)
Ruiz, C.: Instruments and Educational Research. CIDEG, Barquisimeto (2002)
Swales, J.: Research Genres. Explorations and Applications. Cambridge University Press, Cambridge (2004)

Computational Thinking as Instrument to Evaluate Student Difficulties in Higher Education: Before and During Pandemic Analysis

Ana-Lucía Pérez-Suasnavas1(B), Bayardo Salgado-Proaño2, Karina Cela3, and Jorge L. Santamaría1

1 Universidad Central del Ecuador, Quito, Ecuador
[email protected]
2 Agencia de Regulación y Control de Energía y Recursos Naturales No Renovables, Quito, Ecuador
3 Universidad de las Fuerzas Armadas ESPE, Sangolquí, Ecuador

Abstract. Computational thinking is a subject that has motivated different lines of research, especially those seeking a more specific definition. Regarding higher education, existing studies take only the teacher's standpoint, without considering students' perspectives. The main goal of this study is to describe the results obtained from a computational thinking implementation in order to identify and compare students' difficulties in programming subjects. The methodology applied was a non-experimental longitudinal study with the participation of 1,112 second-semester students of the engineering school of an Ecuadorian public university, during the period between September 2019 and October 2021. Findings for the period before the pandemic showed that the main difficulty was related to the use of the tool and language syntax, while for the periods during the pandemic the main difficulties were associated with institutional and personal factors. It was also verified that the percentage of difficulties in the virtual modality is higher than in the face-to-face modality. With these results, the intention is that teachers have sufficient means to identify students' most representative difficulties during a class and can offer timely help.

Keywords: Thinking · Computer programming · University · Learning disabilities · Pandemic

1 Introduction

Since the first definition of computational thinking (CT) given by Wing [60], several studies have been published on the difficulty of establishing a more specific concept of CT and determining its constituent elements [1, 5, 22], as well as on the possibility of including it in curricula [9, 15, 61]. Other studies focused on providing educational strategies for CT [6, 45] by conducting experimental research on middle and high school students. Nevertheless, according to Angeli and Giannakos [5],



Czerkawski & Lyman [17] and Román-Gonzalez et al. [48], only a few studies have included evaluations of CT skills carried out in higher education. Rojas-López & García-Peñalvo [47] and Sánchez Román et al. [52], for instance, measured the level of CT in a computing school through a competence test. It is important to point out that Rojas-López & García-Peñalvo [47] used international questionnaire items to measure the suitability of CT, while Sánchez Román et al. [52] adapted Bloom's taxonomy to a CT test to measure algorithmic thinking. Although these studies are significant, they concentrated on teachers' rather than students' perspectives, which left out students' opinions and reduced the diversity of difficulties identified in the classroom [25, 37].

CT appeared in the fifties with the first commercial computers, but it was Wing [60] who conceptualized it and developed the supporting theory. Although CT lacked clear and consistent empirical support [40], it consists of a series of cognitive steps or skills for solving problems [5, 55], such as decomposition, abstraction, pattern generalization, and algorithm design [35, 45]. Rojas-López & García-Peñalvo [47] mentioned that CT does not mean computer programming, but teaching computer programming is related to CT [5, 61], and CT is not limited to schools or computer science courses, since it is a transversal skill. According to Rojas-López & García-Peñalvo [47] and Ribeiro et al. [46], it should be conceived as a problem-solving skill that can be applied in other fields such as mathematics, physics, chemistry, sociology and philosophy, among others.

In this context, there is some research on problem identification in the teaching and learning of programming courses that has revealed students' difficulties in the computational problem-solving process [8, 20, 56]. However, this research is isolated and does not use CT as the basis for finding difficulties in this field of study. For identifying difficulties in the teaching and learning of programming courses, the foundation of the life cycle of information systems (IS) was considered, taking into account four phases, namely problem analysis, algorithm design, algorithm implementation, and evaluation [28, 32, 50], although Insuasti [25] and Molina Izurieta et al. [37] suggested that intermediate phases can be included to detail the process.

Given the limited number of references related to CT application in Latin American countries [45], in order to identify difficulties related to the learning of programming courses at the higher education level, it is imperative to perform a situational analysis that shows the main student difficulties during a class session, considering their perspective, without limiting the study to other related computing careers, and analyzing results from before and during the pandemic.

2 Methodology

The present research used a non-experimental, longitudinal methodology spanning 4 academic periods. A qualitative and quantitative analysis of the difficulties stated by students while learning the Programming 1 course was performed according to CT processes [40, 61]. Some authors particularized the phases of the IS life cycle [25, 37, 38], while others considered the complementary abilities of CT [7, 26, 57]. The analysis of the method revealed similar abilities, which is why the phases of the IS life cycle as well as the CT processes in general were considered. Also, the



results of the different academic periods were compared, since the pandemic caused changes at every academic level.

2.1 Participants

The initial population was made up of 1,112 students taking Programming 1, in the second semester of the School of Civil Engineering of a public Ecuadorian university, whose ages ranged between 19 and 22 years. Regarding gender, 72.49% were male and 27.51% were female. Students of the two teaching modalities (online and face-to-face) were included regardless of gender and academic performance, although some students withdrew from the study, especially those taking the online class during the pandemic.

2.2 Instruments

Two variables were taken into account: (a) student difficulties, and (b) academic periods. The following two instruments were used to obtain the values of these variables: (1) the social network Twitter, which allowed the teacher to send several tweets to students during Sep. 2019 – Feb. 2020 asking them about the difficulties experienced and their learning results in the face-to-face Programming 1 classes; each student had the option of commenting or not on the teacher's tweets, and could respond more than once so that as many tweets as possible were collected; and (2) MS Office Forms, with which three open questions were created to collect students' thoughts regarding the difficulties faced during online classes between June 2021 and October 2021.

3 Results

The results of applying the CT processes are discussed below, together with the description of the interventions and the qualitative and quantitative analysis before and during the pandemic.

3.1 CT Applied to Problem Solving

Based on the studies by Kao [29] and Michaelson [35], CT was applied as a process followed by the teacher, including the following steps: (1) Decomposition of the problem: during the initial phase of the research, Pérez-Suasnavas et al. [44] analyzed student tweets from followers by reading them directly on the social network, which meant a heavy workload for the professor and led to the search for optimization mechanisms by splitting the problem into several stages. (2) Abstraction: although abstraction is considered a phase of its own, it was used at the beginning to find the most effective tool to extract tweets, as well as at the end of the next phase to identify comments denoting some kind of difficulty (CDIF), those with no difficulty (SDIF), and those producing irregularities. (3) Pattern generalization: tweets were classified by difficulty according to the usual phases of the IS life cycle; in addition, the concept of generalization was applied to groups of tweets involving the same idea. (4) Algorithm design: Fig. 1 depicts the resulting step-by-step problem-solving process.
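Read as an algorithm, these four steps form a small pipeline. The following Python sketch is only a schematic restatement of that process under stated assumptions: the sample tweets, keyword lists and function names are illustrative placeholders, not the study's actual implementation, and the real labeling was performed manually (see Sect. 3.2).

```python
from typing import Iterable, List, Tuple

def extract_tweets() -> List[str]:
    # Decomposition: data collection is isolated as one stage of the problem.
    # Hypothetical sample comments standing in for the real corpus.
    return ["no entiendo la sintaxis del while",
            "hoy si pude terminar el ejercicio"]

def abstract_tweet(tweet: str) -> str:
    # Abstraction: reduce each comment to the normalized text of interest.
    return tweet.lower().strip()

def generalize(tweet: str) -> str:
    # Pattern generalization: comments sharing the same idea map to one label.
    # The keyword list is a placeholder, not the categories of Table 1.
    difficulty_markers = ("no entiendo", "error", "sintaxis", "no puedo")
    return "CDIF" if any(k in tweet for k in difficulty_markers) else "SDIF"

def pipeline(tweets: Iterable[str]) -> List[Tuple[str, str]]:
    # Algorithm design: the composed, step-by-step process sketched in Fig. 1.
    return [(t, generalize(abstract_tweet(t))) for t in tweets]

if __name__ == "__main__":
    print(pipeline(extract_tweets()))
```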



3.2 Intervention Description

First Intervention. The first intervention was applied from Sep. 2019 to Feb. 2020. Data collection from 144 student-follower accounts was carried out by applying the 4 steps described in the following subsections.

Design of the Application for Data Extraction. Using the Twitter REST API and the Python programming language, an application named API Issues was developed to extract data from the teacher's student followers. The post_docente and post_estudiante fields were used. The Twitter REST API was chosen because Python could retrieve historical data and had libraries that work directly with Twitter, such as Tweepy. Due to the characteristics of the data, a CSV file was used to store the extracted data. Through API Issues, 1,049 tweets related to the course were extracted from the teacher's account. There were other tools for extracting data from Twitter, such as Radian, Nvivo and Atlas.ti, among others [43], but these applications were paid and this research used only free software. The steps described below are an introduction to the preprocessing phase of the Knowledge Discovery in Databases (KDD) process:

Fig. 1. Algorithm design process

Register Depuration. Once the CSV file was created with the available data, it was cleaned by following these steps: (1) Tweets that made at least one comment on the teacher's tweets were extracted from non-protected student-follower accounts. (2) The incomplete data in the post_estudiante field were completed manually with the text of post_docente, so that the tweet kept semantic coherence. (3) A superficial manual analysis was performed on the 1,049 tweets collected in order to divide the text into semantic units [36, 54], so that each tweet belonged to only one topic, resulting in 1,996 tweets. (4) API Issues made it possible to eliminate line breaks, carriage returns, special characters, emoticons, and special web characters. The generated CSV file became an approximation of the construction of a specialized ad-hoc corpus [2, 21] with social media data [33], satisfying a tangible need and serving as a pedagogical resource for developing didactic material [51].
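As a rough illustration of the extraction and depuration described in the two subsections above, the sketch below uses Tweepy (assuming its 4.x API) to pair each student reply with the teacher's original tweet and writes a cleaned CSV with the post_docente and post_estudiante fields. It is only a minimal sketch of how a tool like API Issues might be organized: the credentials, account handle, cursor limits and helper names are hypothetical, and the actual application is not documented here.

```python
import csv
import re

import tweepy

# Hypothetical placeholders -- real credentials and the teacher's handle are not given in the paper.
CONSUMER_KEY, CONSUMER_SECRET = "xxx", "xxx"
ACCESS_TOKEN, ACCESS_SECRET = "xxx", "xxx"
TEACHER_ACCOUNT = "teacher_handle"

def clean_text(text: str) -> str:
    """Basic depuration: drop line breaks and carriage returns, strip emojis and
    other symbols (accented Spanish letters are kept), and collapse whitespace."""
    text = text.replace("\n", " ").replace("\r", " ")
    text = re.sub(r"[^\w\s.,;:¿?¡!#@-]", " ", text)  # \w keeps accented letters
    return re.sub(r"\s+", " ", text).strip()

def main() -> None:
    auth = tweepy.OAuth1UserHandler(CONSUMER_KEY, CONSUMER_SECRET,
                                    ACCESS_TOKEN, ACCESS_SECRET)
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # Teacher tweets about the course, indexed by id so replies can be matched.
    teacher_posts = {
        status.id: status.full_text
        for status in tweepy.Cursor(api.user_timeline,
                                    screen_name=TEACHER_ACCOUNT,
                                    tweet_mode="extended").items(200)
    }

    with open("tweets_course.csv", "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(["post_docente", "post_estudiante"])
        # Student comments are replies addressed to the teacher's account.
        for reply in tweepy.Cursor(api.search_tweets,
                                   q=f"to:{TEACHER_ACCOUNT}",
                                   tweet_mode="extended").items(2000):
            parent_id = reply.in_reply_to_status_id
            if parent_id in teacher_posts:
                writer.writerow([clean_text(teacher_posts[parent_id]),
                                 clean_text(reply.full_text)])

if __name__ == "__main__":
    main()
```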


Tweets Analysis. The text of each of the 1,996 tweets was analyzed in order to choose the most suitable method for identifying the most representative ideas expressed by students.

Tweets Classification. Once the unique records were established, each tweet was labeled manually and with the help of the AntConc app, free software for corpus concordance and text analysis [31], which allowed extracting a list of the most relevant words according to their frequencies. The classification process was performed in two stages: (1) difficulty identification: 1,103 CDIF and 893 SDIF tweets were cataloged, representing 55.26% and 44.74%, respectively; and (2) subclassification by type: class planning can give rise to difficulties that are not necessarily related to academic matters [10], so the CDIF tweets were subclassified according to their type of difficulty (see Table 1).
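A frequency list in the spirit of the AntConc word list could be produced, for example, with a few lines of Python; this is an illustrative sketch, not the tool actually used, and the stop-word list, file name, and column layout are assumptions:

```python
# Sketch of a word-frequency list over the cleaned tweet texts, similar in
# spirit to the AntConc word list used to support labeling (partial Spanish
# stop-word list and two-column CSV layout are assumed).
from collections import Counter
import csv
import re

STOP_WORDS = {"de", "la", "el", "en", "y", "que", "a", "los", "las", "no", "se"}

counts = Counter()
with open("tweets_clean.csv", encoding="utf-8") as f:
    reader = csv.reader(f)
    next(reader)                                   # skip the header row
    for _tweet_id, text in reader:
        words = re.findall(r"[a-záéíóúñ]+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS)

for word, freq in counts.most_common(20):          # 20 most frequent candidate terms
    print(f"{word}\t{freq}")
```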

Table 1. Categorization of difficulties.

Cod    Category                       Description
D001   Learning                       Student learning difficulties (not academic topics)
D002   Analysis_Design_Logic          Difficulties understanding the problem and the logic to solve it through algorithms
D003   Tool_Codification              Difficulties dealing with a computer program, syntax, and command use
D004   Execution_Validation_Results   Difficulties visualizing results and validating fields
D005   Institutional_Personal         Difficulties related to institutional planning, teachers, technological equipment, family, and personal aspects

With this subclassification, the CSV file was structured with the following fields: sequence number, the tweet, class (CDIF/SDIF), and type of difficulty. Due to its structure, it was called the DataSet. The classification by type of difficulty was done by three experts (two teachers and one computer specialist) in order to avoid researcher bias. The DataSet was split into equal parts so that no expert had to catalogue all the tweets; they classified them according to their experience and the categories shown in Table 1. Tweets whose classification was in conflict were discussed as a group to determine their final category (see Table 2).

Second, Third and Fourth Intervention. Since June 2020, the learning modality changed from face-to-face to online classes for three academic periods, which required modifying the data collection instrument due to specific institutional policies: (a) the use of social media as a means of communication between teachers and students was restricted, and (b) institutional email, the learning platform, Microsoft Teams, and Office Forms were established as communication and evaluation tools. Office Forms was chosen among these options to create a questionnaire and collect data at the end of each academic period. Given the number of responses, the three online periods were compiled in a single XLS file. Then a depuration, analysis,

Table 2. Difficulties detected before the pandemic.

Cod.   Main difficulties
D001   Noise in class; searching on the internet; concept differentiation; managing the load of information; concentrating; grading; lack of reading; understanding new topics; exercise and question statements; memorizing
D002   Programming logic; definition and use of variables; constants and data types; programming concepts; flowcharts; problem analysis; mental maps; algorithm development
D003   Use of the tool; commands and programming syntax; procedures and functions with parameters; mathematical formulas; conditional, repetitive, and animated control structures; vectors; operations with arrays; counting; file backup
D004   Problem solving; time management; computer program execution; program validation; result display; error identification; program optimization
D005   Difficulty using a personal computer; commuting; sleeping; feeding; being on time; lack of preparation; lack of motivation; class schedule; teacher's methodology; homework

and classification process of the students' responses was performed in the same way as in the first intervention. Initially, the XLS file contained 154 responses; after depuration and analysis, 468 unique comments were obtained, of which 367 were cataloged as CDIF and 101 as SDIF. To avoid researcher bias, one of the experts who participated in the first intervention applied the same classification process to the difficulties stated by students (see Table 3), according to the categories in Table 1.

Table 3. Difficulties detected during the pandemic.

Cod.      Main difficulties
D001      Following the class; adaptation to virtuality; equality of knowledge; avoiding interrogations; previous knowledge; English language
D002      Programming logic; definition and use of variables; constants and data types; programming concepts; flowcharts
D003      Use of the tool and programming commands; procedures; functions; mathematical formulas; conditional, repetitive, and animated control structures; macros; arrays
D004      Time management
D005(*)   Difficulty using the computer; teamwork; power outages; connectivity; class schedule; submitting homework; time to study; personal organization; online teaching methodology; relationship with the teacher; health, personal and/or family difficulties; combining study and work; training

(*) This category also includes the place of study for online classes.


3.3 Quantitative Analysis Between Interventions

Given the research objective, the analysis focused on student comments expressing some difficulty, so that the teacher could act on time in adverse situations; SDIF comments were not considered. Table 4 shows the comments per category.

Table 4. Quantitative analysis by period.

No   Period                   Modality       D001     D002     D003     D004     D005
1    Sep./2019 – Feb./2020    Face-to-face   22.21%   20.13%   30.10%   13.33%   14.23%
2    June/2020 – Oct./2020    Online         15.69%    0.00%   16.67%    6.86%   60.78%
3    Nov./2020 – April/2021   Online          6.96%    2.61%   19.13%    3.48%   67.83%
4    July/2021 – Oct./2021    Online          8.00%    5.33%   18.00%    6.00%   62.67%
     Average                                 13.21%    7.02%   20.97%    7.42%   51.38%
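Percentages like those in Table 4 could be reproduced from the labeled comments with a short pandas script; this is only an illustrative sketch (not part of the original study), and the file name and column names are assumptions:

```python
# Hypothetical sketch: share of each difficulty type (D001..D005) per academic
# period, computed from a labeled dataset with one row per comment.
import pandas as pd

df = pd.read_csv("dataset.csv")                       # assumed columns: "period", "difficulty"
cdif = df[df["difficulty"].isin(["D001", "D002", "D003", "D004", "D005"])]

# Cross-tabulate period vs. difficulty type and normalize each row to percentages
table4 = pd.crosstab(cdif["period"], cdif["difficulty"], normalize="index") * 100
print(table4.round(2))
print(table4.mean().round(2))                         # per-category averages, as in the last row of Table 4
```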

From Table 4 it can be inferred that, in the first intervention, type D003 had the highest and type D004 the lowest percentage among all types of difficulties. In the second, third, and fourth interventions, the difficulties with the highest percentage were of type D005. In the face-to-face modality, types D001, D002, D003, and D004 showed higher percentages than in the online modality. On average, type D002 had the lowest and type D005 the highest value. Overall, the difficulties with the highest percentages were related to categories D003 and D005.

3.4 Qualitative Analysis for Learning Modality

The difficulties with the highest percentage in each learning modality were considered.

Face-to-face Learning. The AntConc tool [31] was used to identify the most frequent words in difficulty type D003. Five difficulties reported by students stand out: (1) arrays, (2) mathematical operations, (3) computer commands, (4) repetitive structures, and (5) functions with parameters.

Online Learning. As in the face-to-face modality, the main difficulties were identified; they are related to type D005. The difficulties with the highest scores are: (1) the Internet, (2) equipment, (3) teamwork, (4) online learning, and (5) personal/family issues.

3.5 Qualitative Analysis Before and After the Pandemic

This part of the analysis contrasts similarities and differences of predominant difficulties in both learning modalities (see Table 5).

Table 5. Qualitative analysis before and after the pandemic.

Teaching modalities: Face-to-face and Online
  D003: Use of the tool; use of computer commands; procedures and functions with parameters; conditional and repetitive structures; macros; arrays; mathematical operations
  D005: Problems with the computer; lack of preparation; class schedule; teacher's methodology; submitting homework

Teaching modality: Face-to-face
  D003: Programming syntax; vectors; counting; validations; file backup; modules
  D005: Commuting; sleeping; feeding; being on time; demotivation

Teaching modality: Online
  D005: Place of study; teamwork; power outages; connectivity; relationship with the teacher; health; personal organization; personal and/or family problems; combining study and work

According to Table 5, for difficulty types D003 and D005, students who took the Programming 1 class in both teaching modalities expressed similar difficulties. In addition, some difficulties were identified only in the face-to-face modality and others only in the online modality.

4 Discussion

The literature review showed a continuous interest in including computational thinking in curricula through different pedagogical strategies for problem solving, such as developing critical digital competence and cognitive abilities [1, 26] and transferring knowledge in the teaching of programming and robotics [40, 45]; however, in higher education, and especially in Latin American countries, the number of studies is limited [9, 45]. This research included, as part of its methodology, the most general processes of computational thinking [27, 35, 47] in order to identify the difficulties of teaching programming at the higher education level in a Latin American university, developing the computational thinking process on the basis of a new computational logic [6]. Findings related to methods for evaluating computational thinking capabilities showed the use of tests, questionnaire items, and topics defined from the teachers' perspective, even when internationally recognized tools and taxonomies were included [47, 52], whereas this study considered students' opinions and perceptions when applying CT. The application of the different CT processes first allowed decomposing the general problem into subproblems, such as obtaining data from Twitter for analysis and classification, which can be contrasted with Rojas-López & García-Peñalvo [47], who applied different exercises at each stage to evaluate CT. Even though the CT processes have no set boundaries and lack a specific order [61], according to Selby [53] decomposition was observed to be the most important phase for problem


solving, as well as the most difficult one [35]. For this reason, decomposition was carried out as the first phase of the process, contradicting what was stated by Ortega-Ruipérez & Asensio Brouard [40]. The representation of the problem was prioritized for the later application of a computational strategy and the implementation of the algorithm, which is supported by Salgado Castillo et al. [50]. The Twitter REST API was determined to be the appropriate tool for extracting tweets, since the extraction process was applied in the first phase by identifying the relevant elements of the study. This is in concordance with Diaz-Mendivelso & Suarez-Baron [19], Patil & Kulkarni [42], and Veletsianos [59], who used the same tool during their experimentation phases. In the second part of the abstraction process, the first tweet classification was performed to identify tweets that did or did not express some type of difficulty, which can be compared with Chen et al. [12] and Kimmons et al. [30], who even identified irrelevant tweets. The second classification of tweets allowed establishing the main difficulties students went through when learning Programming 1 and generalizing the type of difficulty according to the IS life cycle, as mentioned by Fuentes-Rosado & Moo-Medina [20], in order to determine and classify the difficulties of second-semester engineering students when creating software routines for problem solving. However, difficulty identification was confined to the academic field, without considering that a student is a person with his/her own limitations who interacts depending on the surrounding environment and the available resources [3, 34]. Algorithm design was the last phase of the CT application and allowed describing graphically, step by step, the problem-solving process, in the same way as Rojas-López & García-Peñalvo [47], who placed this phase in the last topic of the Programming Methodology (PM) course. During the first intervention, the extraction of data from Twitter through API Issues provided relevant data for later analysis, as in Diaz-Mendivelso & Suarez-Baron [19] and Veletsianos [59]. The CSV file containing the Twitter data helped the teacher generate a first approximation of a specialized corpus on the topic, serving as complementary didactic material during phase 3 of the JiTTwT methodology proposed by Pérez-Suasnavas et al. [44]. In the second, third, and fourth interventions, the questionnaire items did not limit student opinions, in contrast to processes described in other studies, which focused on specific questions and topics [32, 47, 52]. Regarding the quantitative analysis of the data, 1,996 valid records were extracted from Twitter in the face-to-face modality; 55.26% of them expressed difficulty while 44.74% did not. Of the 468 student responses collected during the online periods, 21.58% did not imply difficulty and 78.42% did. These results suggest that the online modality, a consequence of the confinement caused by COVID-19, generated more academic difficulties for students, in concordance with [14, 23, 24, 39, 49]. Type D003 was the most representative difficulty during the first intervention; it relates to the use of the tool and the language syntax, with 30.10%, and is comparable to the studies by Fuentes-Rosado & Moo-Medina [20], Molina Izurieta et al. [37], and Salgado Castillo et al. [50], who pointed out that students had problems dealing with programming languages, syntax, managing conditional and/or repetitive control structures, validation statements, and aspects related to tool management and program


coding. According to Rojas-López & García-Peñalvo [47], there was a weakness in student self-evaluation owing to the gap between perceived and acquired learning. Those authors mentioned that topics related to selection structures revealed more difficulty, as well as those related to the evaluation of arithmetic, logical, and relational expressions, difficulties that were also identified in this study but with less prominence. Fuentes-Rosado & Moo-Medina [20], Insuasti [25], Oviedo Galdeano & Ortiz Uribe [41], and Téllez Ramírez [56] pointed out other difficulties when learning programming, such as lack of knowledge of integral calculus, unwillingness to investigate, fear of dealing with complex problems, and demotivation. Some of these aspects were also part of the findings of this study, although they were not as prominent as other difficulties. During the online interventions, the predominant difficulty was related to institutional and personal factors, in concordance with Condori Melendez et al. [16], Ordorika [39], Rondón Morales [49], and Valverde Kikut [58], who highlighted problems of virtual education during the pandemic linked to the use of obsolete or insufficient computers, poor internet connection, and an inappropriate environment for studying [24]. Cáceres Piñaloza [11] stated, however, that the help a teacher can provide is limited, which is why teaching should be empathetic and motivating in order to ensure the continuity of the learning process. Likewise, Denning [18] mentioned that teachers should focus on helping students learn and on solving their problems on time, as Pérez-Suasnavas et al. [44] reported for the JiTTwT methodology. The study focused mainly on identifying the course topics that needed to be addressed in more detail by teachers, which include: (1) arrays, (2) mathematical operations, (3) use of computer commands, (4) repetitive structures, and (5) functions with parameters. These findings are in concordance with Molina Izurieta et al. [37], who stated that most of the problems identified in an Introduction to Programming Language class were due to the complexity of the concepts of variables, repetition structures, arrays, and functions. The sample consisted of students with the same educational background and similar ages to those reported by Fuentes-Rosado & Moo-Medina [20], but pursuing different engineering degrees. With respect to gender, there were considerably more males than females, matching the study of Sánchez Román et al. [52]. In the earlier studies by Cho & Cho [13] and Altrabsheh et al. [4], 1,252 and 1,522 tweets were used, respectively, while Chen et al. [12] and Patil & Kulkarni [42] used 2,785 in their research. In this study, after applying the computational thinking processes, 1,996 tweets were obtained, which is considered acceptable compared to the previous studies. The social media platform Twitter, used to collect data from engineering students, was considered appropriate for this study, in line with Chen et al. [12], Patil & Kulkarni [42], and Rojas-López & García-Peñalvo [47].

4.1 Limitations and Future Work

Among the drawbacks found when using the Twitter REST API were the limit on the amount of data retrievable within a given time window and the resulting increase in download time. During the four interventions, it was evident that the highest percentage of comments denoting some difficulty were related to reaching the learning goals. Although this facilitated the classification of records, the causes associated with these findings could be investigated in future research.


During the online interventions, noticeably fewer students responded to the questionnaire, which may be associated with technological issues caused by the drastic change of learning modality. These factors could hide other difficulties that students face when taking classes, preventing teachers from providing satisfactory help. Due to the institutional policies on online learning and the impossibility for students of accessing the social media platform Twitter as part of the learning process, specific aspects relevant to this research may have been missed.

5 Conclusions

This study highlighted the computational thinking processes of decomposition, abstraction, generalization, and algorithm design for problem solving. Although the course chosen for this research belongs to computer science, the process implemented is replicable in educational institutions where programming or related courses are taught, so that the scientific community can validate the usefulness and efficiency of the applied process. Teaching programming courses is considered complex, and the subject is even more difficult to understand for students from unrelated programs. For this reason, the presented process makes it possible to identify the main difficulties a student faces during a face-to-face or online class, which is relevant so that teachers can address critical learning aspects on time. The longitudinal study applied throughout this research could be used as a guide for inferring the changes experienced in specific variables and their possible causes and effects, although the trigger of the observed variation was the confinement due to the pandemic. The development of API Issues for extracting data from Twitter constitutes the first stage of the automation of the analysis and classification of student tweets, becoming a supporting tool for teachers. Even though the API requires an initial configuration with data from a Twitter user account, such as passwords and tokens, the application is reusable for extracting tweets, whose comments then require depuration, analysis, and classification.

References 1. Adell Segura, J., Llopis Nebot, M.Á., Esteve Mon, F., Valdeolivas Novella, M.G.: El debate sobre el pensamiento computacional en educación. RIED. Revista Iberoamericana de Educación a Distancia 22(1), 171–186 (2019). https://doi.org/10.5944/ried.22.1.22303 2. Aguado de Cea, G., Bernardos Galindo, M.D.S.: Método para la elaboración de un corpus para la GLN, vol. 26, pp. 19–26 (2000). https://bit.ly/3uKClK1 3. Almarghani, E.M., Mijatovic, I.: Factors affecting student engagement in HEIs-it is all about good teaching. Teach. High. Educ. 22(8), 940–956 (2017). https://doi.org/10.1080/13562517. 2017.1319808 4. Altrabsheh, N., Cocea, M., Fallahkhair, S.: Predicting learning-related emotions from students’ textual classroom feedback via Twitter. In: Proceedings of the 8th International Conference on Educational Data Mining, pp. 446–440 (2015). https://bit.ly/3OF4rOu 5. Angeli, C., Giannakos, M.N.: Computational thinking education: issues and challenges. Comput. Hum. Behav. 105 (2019).https://doi.org/10.1016/j.chb.2019.106185


6. Balladares Burgos, J.A., Avilés Salvador, M.R., Pérez Narváez, H.O.: Del pensamiento complejo al pensamiento computacional: Retos para la educación contemporánea. Sophia, colección de Filosofía de la Educación 21(1), 143–159 (2016). https://doi.org/10.17163/soph.n21. 2016.06 7. Barr, V., Stephenson, C.: Bringing computational thinking to K-12: What is Involved and what is the role of the computer science education community? In: ACM Inroads, vol. 2, Número 1, pp. 48–54. Association for Computing Machinery (2011). https://doi.org/10.1145/1929887. 1929905 8. Beltrán, J., Sánchez, H., Rico, M.: Análisis cuantitativo y cualitativo del aprendizaje de Programación I en la Universidad Central del Ecuador. Revista Tecnológica-ESPOL 28(5), 194–210 (2015). https://bit.ly/3vkUXxl 9. Brackmann, C., Barone, D., Casali, A., Boucinha, R., Muñoz-Hernandez, S.: Computa-tional thinking: Panorama of the Americas. In: 2016 International Symposium on Computers in Education (SIIE), pp. 1–6 (2016). https://doi.org/10.1109/SIIE.2016.7751839 10. Cáceres Cruz, M.M., Rivera Gavilano, P.: El docente universitario y su rol en la planificación de la sesión de enseñanza—Aprendizaje. En Blanco y Negro 8(1), 15–27 (2017). https://bit. ly/34AyzbA 11. Cáceres Piñaloza, K.F.: Educación virtual: Creando espacios afectivos, de convivencia y aprendizaje en tiempos de COVID-19. CienciAmérica 9(2) (2020). https://doi.org/10.33210/ ca.v9i2.284 12. Chen, X., Vorvoreanu, M., Madhavan, K.: Mining social media data for understanding students’ learning experiences. IEEE Trans. Learn. Technol. 7(3), 246–259 (2014). https://doi. org/10.1109/TLT.2013.2296520 13. Cho, K., Cho, M.-H.: Training of self-regulated learning skills on a social network system. Soc. Psychol. Educ. 16(4), 617–634 (2013). https://doi.org/10.1007/s11218-013-9229-3 14. Coman, C., T, îru, L. G., Meses, an-Schmitz, L., Stanciu, C., Bularca, M.C.: Online teaching and learning in higher education during the coronavirus pandemic: students’ perspective. Sustainability 12(24), 10367 (2020).https://doi.org/10.3390/su122410367 15. Compañ-Rosique, P., Satorre-Cuerda, R., Llorens-Largo, F., Molina-Carmona, R.: Enseñando a programar: Un camino directo para desarrollar el pensamiento computacional. Revista de Educación a Distancia 46 (2015). http://bit.ly/2TaGbHK 16. Condori Melendez, H., Borja Villanueva, C.A., Saravia Alviar, R.A., Barzola Loayza, M.G., Rodríguez Ruiz, J.R.: Efectos de la pandemia por coronavirus en la educación superior universitaria. Revista Conrado 17(82), 286–292 (2021). https://bit.ly/3JuFRMv 17. Czerkawski, B.C., Lyman, E.W.: Exploring issues about computational thinking in higher education. TechTrends 59(2), 57–65 (2015). https://doi.org/10.1007/s11528-015-0840-3 18. Denning, P.J.: Remaining trouble spots with computational thinking. Commun. ACM 60(6), 33–39 (2017). https://doi.org/10.1145/2998438 19. Diaz-Mendivelso, J.D., Suarez-Baron, M.J.: Análisis social aplicando técnicas de lenguaje natural a información extraída de Twitter. Scientia et technica 24(3), 496–503 (2019). https:// doi.org/10.22517/23447214.21731 20. Fuentes-Rosado, J.I., Moo-Medina, M.: Dificultades de aprender a programar. Revista Educación en Ingeniería 12(24), 76–82 (2017). https://doi.org/10.26507/rei.v12n24.728 21. García Ferrer, M.: Diseño y construcción de un corpus de referencia de latín. Methodos 3, 93–105 (2016). https://bit.ly/3GOYvNm 22. García-Peñalvo, F.J.: What computational thinking is. J. Inf. Technol. Res. (JITR) 9(3) (2016). https://bit.ly/3uM7Tiv 23. 
Gazzo, M.F.: La educación en tiempos del COVID-19: Nuevas prácticas docentes, ¿nuevos estudiantes? Red Sociales, Revista del Departamento de Ciencias Sociales 7(2), 58–63 (2020). https://bit.ly/3gMnyGC


24. IISUE. Educación y pandemia. Una visión académica (Primera). IISUE, UNAM (2020). https://bit.ly/34Gbt31 25. Insuasti, J.: Problemas de enseñanza y aprendizaje de los fundamentos de programación. Revista educación y desarrollo social 10(2), 234–246 (2016). https://bit.ly/3ZRfNV5 26. Ioannou, A., Makridou, E.: Exploring the potentials of educational robotics in the development of computational thinking: a summary of current research and practical proposal for future work. Educ. Inf. Technol. 23(6), 2531–2544 (2018). https://doi.org/10.1007/s10639018-9729-z 27. ISTE. (2022). Computational Thinking. Preparing the next generation of problem-solvers. International Society for Technology in Education (ISTE). https://www.iste.org/ 28. Joyanes Aguilar, L.: Fundamentos de Programación: Algoritmos, estructura de datos y objetos (4.a ed.). McGRAW-HILL (2008) 29. Kao, E.: Exploring Computational Thinking [Educativo]. Google AI Blog (2010). https://bit. ly/3HP03Zm 30. Kimmons, R., Veletsianos, G., Woodward, S.: Institutional uses of twitter in U.S. higher education. Innov. High. Educ. 42(2), 97–111 (2016). https://doi.org/10.1007/s10755-0169375-6 31. Laurence, A.: AntConc (Version 4.0.3) [Computer Software]. Waseda University, Tokyo, Japan (2022). https://www.laurenceanthony.net/software/antconc/ 32. Mac Gaul, M., López, M.F., Del Olmo, P.: Resolución de problemas computacionales: Análisis del proceso de aprendizaje. III Congreso de Tecnología en Educación y Educación en Tecnología, Argentina (2008). https://bit.ly/3JJuvET 33. Mancera Rueda, A., Pano Alamán, A.: Las redes sociales como corpus de estudio para el Análisis del discurso mediado por ordenador. Humanidades digitales: desafíos, logros y perspectivas de futuro, pp. 305–315 (2014). http://hdl.handle.net/2183/13559 34. Mandefro, E.: Analysis of the determinants of classroom participation of students’: perceptions of university student. J. Humanit. Soc. Sci. 24(11), 4–12 (2019). https://bit.ly/3XP F7ZA 35. Michaelson, G.: Teaching programming with computational and informational thinking. J. Pedag. Dev. 5(1) (2015). http://hdl.handle.net/10547/346506 36. Molina, A., Pla, F.: Shallow parsing using specialized HMMs. J. Mach. Learn. Res. 2(4), 595–613 (2002). https://bit.ly/3sBm2fX 37. Molina Izurieta, R.E., Padilla Gómez, R.R., Leyva Vázquez, M.Y.: Estudio y propuesta metodológica para la enseñanza-aprendizaje de la programación informática en la educación superior. Revista Dilemas Contemporáneos: Educación, Política y Valores VII(8) (2019). https://doi.org/10.46377/dilemas.v30i1.1294 38. Moore, H.: Acerca de MATLAB. In: Matlab para Ingenieros, p. 5. Prentice Hall (2007). https://bit.ly/3LwhtvX 39. Ordorika, I.: Pandemia y educación superior. Revista de la educación superior 49(194), 1–8 (2020). https://bit.ly/3KjHVqG 40. Ortega-Ruipérez, B., Asensio Brouard, M.: Robótica DIY: pensamiento computacional para mejorar la resolución de problemas. Revista Latinoamericana de Tecnología Educativa RELATEC 17(2) (2018). https://doi.org/10.17398/1695-288X.17.2.129 41. Oviedo Galdeano, M., Ortiz Uribe, F.G.: La enseñanza de la Programación. Silo.Tips (2002). https://bit.ly/367xyYJ 42. Patil, S., Kulkarni, S.: Mining social media data for understanding students’ learning experiences using Memetic algorithm. In: International Conference on Processing of Materials, Minerals and Energy July 29th–30th 2016, Ongole, Andhra Pradesh, India, vol. 5, no. 1, Part 1, pp. 693–699 (2018). https://doi.org/10.1016/j.matpr.2017.11.135


43. Pérez-Suasnavas, A.-L., Cela, K., Hasperué, W.: Beneficios del uso de técnicas de minería de datos para extraer y analizar datos de twitter aplicados en la educación superior: Una revisión sistemática de la literatura. Universidad de Salamanca 32(2), 1–38 (2020a). https://doi.org/ 10.14201/teri.22171 44. Pérez-Suasnavas, A.-L., Cela, K., Hasperué, W.: Propuesta de Estrategia Educativa, para Fomentar la Participación Estudiantil Universitaria. In: Archundia Sierra, E., León Chávez, M.Á., Cerón Garnica, C. (Eds.), Redes de aprendizaje digital en nodos colaborativos, pp. 240– 261. Benemérita Universidad Autónoma de Puebla (2020b). http://bit.ly/3WqpVB7 45. Quiroz-Vallejo, D.A., Carmona-Mesa, J.A., Castrillón-Yepes, A., Villa-Ochoa, J.A.: Integración del Pensamiento Computacional en la educación primaria y secundaria en Latinoamérica: Una revisión sistemática de literatura. Revista de Educación a Distancia (RED) 21(68) (2021). https://doi.org/10.6018/red.485321 46. Ribeiro, L., Nunes, D.J., da Cruz, M.K., de Souza Matos, E.: Computational thinking: Possibilities and challenges. In: 2013 2nd Workshop-School on Theoretical Computer Science, pp. 22–25 (2013). https://doi.org/10.1109/WEIT.2013.32 47. Rojas-López, A., García-Peñalvo, F.J.: Evaluación del pensamiento computacional para el aprendizaje de programación de computadoras en educación superior. Revista de Educación a Distancia (RED) 20(63), 1–39 (2020). https://doi.org/10.6018/red.409991 48. Román-Gonzalez, M., Pérez-González, J.C., Jiménez-Fernández, C.: Test de Pensamiento Computacional: Diseño y psicometría general. III Congreso Internacional sobre Aprendizaje, Innovación y Competitividad (CINAIC 2015), España (2015). https://doi.org/10.13140/RG. 2.1.3056.5521 49. Rondón Morales, R.: Educación universitaria en tiempos de pandemia. Administración Educacional Anuario del Sistema de Educación de Venezuela, Especial (8), 139–147 (2020). https://bit.ly/3GMdIif 50. Salgado Castillo, A., Alonso Berenguer, I., Gorina Sánchez, G., Tardo Fernández, Y.: Lógica algorítmica para la resolución de problemas de programación computacional: Una propuesta didáctica. Didasc@lia: Didáctica y Educación IV(1), 57–76 (2013). https://bit.ly/39n69E1 51. Ramos, S., del Mar M.: Compilación y análisis de un corpus ad hoc como herramienta de documentación electrónica en Traducción e Interpretación en los Servicios Públicos. Estudios de Traducción 7, 177–190 (2017).https://doi.org/10.5209/ESTR.57455 52. Sánchez Román, G., Guerrero García, J., Martínez Mirón, E.A.: Perfil del alumno de Computación para el diseño de un sistema Tutor. Certiuni J. 5, 19–26 (2019). https://bit.ly/3gI xJvw 53. Selby, C.C.: relationships: computational thinking, pedagogy of programming, and bloom’s taxonomy. In: Proceedings of the Workshop in Primary and Secondary Computing Education, pp. 80–87 (2015).https://doi.org/10.1145/2818314.2818315 54. Sha, F., Pereira, F.: Shallow parsing with conditional random fields. 213–220 (2003). https:// doi.org/10.3115/1073445.1073473 55. Sykora, C.: Computational thinking for all. ISTE Blog (2021). https://bit.ly/34RTx5o 56. Téllez Ramírez, M.: Pensamiento computacional: Una competencia del siglo XXI. Revista Científica de Publicación del Centro Psicopedagógico y de Investigación en Educación Superior 6(1) (2019). https://bit.ly/3Xv6VCK 57. Thorson, K.: Early learning strategies for developing computational thinking skills [Educativo]. Getting Smart (2018). https://bit.ly/3oPTmyx 58. 
Valverde Kikut, L.: Análisis de resultados de la evaluación de la virtualización de cursos en la UCR ante la pandemia por COVID-19: perspectiva estudiantil. Centro de Evaluación Académica (CEA) (2020). http://bit.ly/3XQJ1lg 59. Veletsianos, G.: Toward a generalizable understanding of Twitter and social media use across MOOCs: who participates on MOOC hashtags and in what ways? J. Comput. High. Educ. 29(1), 65–80 (2017). https://doi.org/10.1007/s12528-017-9131-7


60. Wing, J.M.: Computational thinking. Commun. ACM 49(3), 33–35 (2006). https://doi.org/ 10.1145/1118178.1118215 61. Zapata-Ros, M.: Pensamiento computacional: Una nueva alfabetización digital. Revista de Educación a Distancia (RED) (46) (2015). https://bit.ly/3oNmJ4I

Systematization of Playful Teaching Using Games Aimed at Teachers and Students

Albornoz Karina, Jurado Merlis, and Maldonado Michelle

Instituto Superior Tecnológico ARGOS, Guayas, GYE, 090101 Guayaquil, Ecuador
[email protected]

Abstract. This research aims to systematize playful teaching for teachers and students of education. Through four criteria that combine traditional education with modern teaching, the delivery of content is restructured to achieve meaningful learning with playful activities and dynamics that prepare, level, and teach students according to the curricular contents. It also proposes competency-based evaluations that allow students to demonstrate their abilities in different contexts, resulting in meaningful learning that motivates them, within a structure of progressive activities that develop the information in phases, from the simple to the complex. The research was applied virtually to teachers and students of education in general (inclusive, preschool, initial, high school, among others) in order to obtain results directly from people who are active in a classroom, with data collected through surveys and analyzed with a quantitative methodology.

Keywords: Systematization · Game · Playful · Teachers

1 Introduction

During child development, the main skills are acquired and/or developed through games, dynamics, and stimulation. As children progress in their development, these games become more structured and organized, with more analytical and specific development processes. However, over the last three years education has gone through different stages. Initially, there was a traditional education consisting of listening to the class, taking notes, or doing daily homework, with sporadic dynamics that break the routine. Later, as a result of the pandemic, teaching migrated to distance education through technological tools, which caused a radical turn in the way of teaching: some classes had more dynamics, others tried to keep the traditional system alive, and in others only information was exchanged. Today we can see how a new normality is resuming, with dynamic classes, some using technology inside the classroom, others using structured material, and others returning to the traditional approach with small changes in their organization, such as replacing the pencil with kinesthetic material or a demonstration experiment instead of pure theory. For Suárez [1], childhood is the stage of life in which children's creative sense is most developed; in the same way, Rodríguez [2] adds that students need to find a


use for what they learn in class that goes beyond school and, of course, connects what they study with what they live. For this reason, Suárez Palacio, Vélez-Múnera, and Londoño-Vásquez [3] add that, nowadays, innovative pedagogical processes, products of the creativity of those involved in the educational process, seek new ways of teaching and learning in a complex and dynamic world. Following the same idea, Gómez-Vahos [4] infers that the main objective of education is to integrate academic content with the formation of the person, so that each one has the option of intervening in the environment and understanding local and global realities from critical and reflective thinking; that is, to give each person the opportunity to use individual skills as a source of support to achieve complete learning, with the teacher serving as a guide for teachings that are not limited to a classroom or a page. Many factors interfere with learning; according to Monroy [5], they are classified into two categories: the intrapersonal category, which refers to the student's internal factors, and the situational category, which concerns variables of the environment, the teachers, and the situations in which learning takes place. Therefore, simply communicating a topic under a traditional model of only listening to the class and copying the information into notebooks, or adding a classroom dynamic just to change the routine, does not produce meaningful learning if these are not directly connected with the students' abilities and the context in which they are presented. Regarding the systematization of teaching processes to generate an evolution toward the current needs of students, Bonilla [6] notes that systematization in science serves to establish relationships between concepts (the elaboration of theories), which in turn gives rise to the formulation of hypotheses and laws related to each other through logical relationships, where meaningful learning is developed that allows appropriating the meanings of the experiences lived, understanding them theoretically, and moving toward the future with a transforming approach [7]. To carry out a systematization of experiences, Jara [7] considers: "Starting point: having participated in the experience and having records of that experience; questions that should be asked: why do you want to systematize the experience? What experiences do you want to systematize? And what central aspects of that experience would you be interested in systematizing? Recovery of the lived process: reconstructing the history, ordering and classifying the information; in-depth reflection: analyzing, synthesizing, and critically interpreting the process; arrival points: drawing the conclusions, announcing and presenting the learning through different means. (p. 135)". Similarly, when teaching itself is systematized, an experience is also modified: there must be initial analysis processes, development of the practice, and conclusions that yield, as a final result, a complete experience that evolves into meaningful and valuable real learning for the students who receive it. Against this background, the general objective of this article is to create a teaching system that mixes the traditional with games, following a series of logical steps, so as to produce a teaching-learning process that covers all types of learning and needs and that can be applied as a basis in the evolution of the current educational system. In this way, an education is achieved in which the student always finds motivation to learn and the teacher undergoes a paradigm shift using what he or she already knows, having only to add a logical order that allows expanding the students' knowledge.


2 Materials and Methods

A quantitative, exploratory, descriptive, and explanatory methodological approach is used, intended to describe each step of the systematization together with the experiences lived by the participants, as well as to analyze to what extent this systematization favors teaching in general and how teachers can apply it according to their area and population. The survey technique was implemented since it allows tabulating and studying the opinions received from the respondents so that the researcher can convert them, inferentially, into useful information for the investigation [8].

2.1 Participants

The systematization was applied to a group of teachers and students, in online mode, taking into account that the participating students had previously consented to their participation, resulting in a sample of 157 participants belonging to Early Childhood Education, Primary Education, Social Education, and Inclusive Education; this sample is non-probabilistic, intentional, and by convenience.


Fig. 1. Descriptive analysis of the sample by gender.

Regarding Fig. 1, 97% of the participants were women and only 3% men. Figure 2 shows that more teachers participated (83%) than students (17%). These data describe the people who took part in the application of the systematization, which was carried out in three stages:

– Stage 1: theoretical explanation of the importance of the game and of how to systematize.
– Stage 2: application of the systematization through examples.
– Stage 3: data collection through surveys of the participants.


Fig. 2. Descriptive analysis of the sample by degree of study

3 Results and Discussion

The systematization of the teaching-research results focuses on the ordering, reconstruction, and explanation of the teaching-learning process, which makes it a dynamic process that integrates the logical and the contradictory, the internal and the external of the results, whether theoretical or practical, toward the search for new transformations that favor the cognitive processes of our students [9]. In this context, the development of the systematization in this research focuses on the sequence of a logical order that combines tangible media with theory to achieve a teaching system with meaningful learning. This systematization is structured as follows (Table 1):

Table 1. Systematization scheme by criteria

Criterion 1 – Preparation of students through the game
  Definition: The previous content analysis focuses on preparing students and leveling the group if there are deficits among the students; in the same way, it motivates students to be investigative and to be aware of what they will learn and what this knowledge will be used for.
  Activities: Fun games related to the theme; tangible demonstrations with environmental elements; group work based on problems.

Criterion 2 – Creation of practical and playful activities
  Definition: Practices similar to the theory should be considered which, in addition to demonstrating how students can develop the topic, also offer them additional information.
  Activities: Individual activities; use of tangible material; use of tools if necessary; practical activities similar to the topic in question.

Criterion 3 – Application of progressive material (theory)
  Definition: Replace notebooks full of text with segmented templates in which the importance of the different subtopics of the content is displayed throughout the class.
  Activities: Segment by day or subtopic; segment according to the dynamic to be created; develop the templates through dynamics before filling them out; create base templates for reuse.

Criterion 4 – Evaluate through the game
  Definition: Once students have completed the process of preparation, practice, and theory, the evaluation should focus not only on answering questions in a written record but also on the skills obtained.
  Activities: Creation of a special activity to dynamically evaluate or reuse what the student has learned; consideration of a written record as evidence of knowledge, adding some of the segmentation taught above; consideration of evaluation by competencies, in which the skills and requirements needed to achieve the objectives of the curriculum are broken down.

This systematization, although simple, gives a different perspective on how teaching is currently organized or established: in a first step, dynamics, exercises, or playful activities are used to assess how prepared the students are to approach the topic, since, according to Gómez [10]:


In school, the essential thing is not to produce or train researchers but to develop an investigative attitude in the students; it is about developing critical thinking in students, with a scientific stance toward things, so that they problematize, ask and seek answers, and develop an inquisitive attitude that enables them to build meaningful knowledge derived from the problems posed.

Subsequently, the practical activity is established as a second step, not as a result of knowledge but as an example of the activity, skill, or topic to be learned, which can be useful in different contexts, as indicated by González and Márquez [11]: actions generate learning opportunities for students, and learning opportunities are understood as an indicator that connects teaching and learning. In these steps it is established how to collect information so that it can then be structured through practice. Turning to theory as the third step, progressive templates are incorporated in which note-taking is replaced with dynamics, an activity in which each important aspect of the development is delimited, so that students can understand what they learn visually and more dynamically. For each segmentation, a staging with dynamics or playful development is prepared in which what is to be developed is explained more explicitly. As can be seen in Fig. 3, the simple idea of writing everything down is transformed into developing separate segments that make it easy to assimilate the information presented.

(Diagram: a central topic segmented into Subtopic 1, Subtopic 2, and Subtopic 3.)

Fig. 3. Content targeting example


Now, we proceed to exemplify the template to illustrate the aspects to which the segmentation refers:

Fig. 4. Exemplification of preschool-level content

Fig. 5. Exemplification of the content level of basic education


As seen in Figs. 4 and 5, the idea is to prepare the content in a playful way, where each section of the class has a dynamic focused on previous activities that can be recorded until the desired content is reached. Likewise, this is not considered a base limited to four parameters, since it is here that teachers demonstrate their ability to coordinate previous skills until reaching the content, so that each teacher can have a base that allows progressing according to his or her teaching style. Finally, a means of evaluation is established that is consistent with the game or dynamics carried out, in which skills are measured and, if a record of them is necessary, there are more options than just a written record, namely evaluation by competencies. According to Quiñones, Zárate-Ruiz, Miranda-Aburto, and Sosa [12], different phases must be implemented for the evaluation of competencies. The first is the competency-based approach to learning, with the subcategories reasoning, creativity, and critical thinking. The second refers to a competency-based approach to teaching, with the subcategories planning, learning management, and evaluation. The third is the evaluation of learning, with the subcategories development of autonomy and increased confidence. The fourth and last is the evaluation of teaching, with the subcategories attention to diversity and improvement of pedagogical practice. Similarly, S. Morales López, R. Hershberger del Arenal, and E. Acosta Arreguín establish that assessment in competency-based education requires the teacher to determine the student's level of performance; however, competencies are not observable by themselves, so it is necessary to infer them through specific actions that must be previously operationalized. The development of competencies in students must be verified in practice through clearly established performance criteria, which refer to the expected learning results and represent the basis of the evaluation and of the conditions for inferring the achievement of the competence. For this reason, the systematization of teaching through games goes hand in hand with the evaluation of competencies, since the main objective is for students not only to develop the skills but also to be evaluated consistently on the basis of them, creating educational environments that are not only novel but also value the teaching provided by the teacher and the effort of the students. Finally, the participants completed a survey focused on each step of the systematization, as shown in Table 2.

Table 2. Systematization scheme by criteria.

1. Preparation of students through the game
   1.1 Possible for all ages
   1.2 Possible for all subjects
   1.3 Ease of application and preparation
   1.4 Development of skills and knowledge according to the curriculum
2. Creation of practical and playful activities
   2.1 Possible for all ages
   2.2 Possible for all subjects
   2.3 Ease of application and preparation
   2.4 Development of skills and knowledge according to the curriculum
3. Application of progressive material (theory)
   3.1 Possible for all ages
   3.2 Possible for all subjects
   3.3 Ease of application and preparation
   3.4 Use of progressive templates for group activities
   3.5 Favorable development for all skills
4. Evaluate through the game
   4.1 Possible for all ages
   4.2 Possible for all subjects
   4.3 Possibility of subdividing all competencies
   4.4 Ease of evaluation for the teacher

Assessment scale: 0 = slightly favorable, 1 = moderately favorable, 2 = very favorable.

The following results were obtained. As shown in Fig. 6, for item 1.1, 84 people answered that preparing students through games is possible for all ages (very favorable) and the remaining 73 answered that it is moderately favorable; that is, 53% of the participants fully support this item. For item 1.2, 82 people responded that implementation through play is very favorable for any subject, while the remaining responses were divided into 65 moderately favorable and 10 very unfavorable; that is, 52.2% fully agreed with this option. For item 1.3, 99 people responded that it is very favorable in terms of preparation and application, while 25 people indicated that it is moderately favorable and 33 that it is very unfavorable; that is, 63.05% fully agreed with this option. Finally, for item 1.4, 106 people indicated that it is very favorable for the development of skills and knowledge according to the curriculum, while 46 people indicated that it is moderately favorable and only 5 that it is very unfavorable.
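As a rough illustration (not the authors' actual procedure), the per-item counts and percentages reported here could be tabulated with pandas, assuming a response table with one row per respondent and one column per item on the 0–2 scale of Table 2:

```python
# Hypothetical tabulation of the survey: columns named "1.1".."4.4" hold
# values 0, 1 or 2 (scale of Table 2); one row per respondent (n = 157).
import pandas as pd

responses = pd.read_csv("survey_responses.csv")           # file name assumed

for item in responses.columns:
    counts = responses[item].value_counts().sort_index()  # respondents per scale value
    share = (counts / len(responses) * 100).round(1)      # e.g. 84/157 -> 53.5% for item 1.1
    print(item, counts.to_dict(), share.to_dict())
```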


Fig. 6. Graph of results of criterion number 1 preparation of students through play.


Fig. 7. Results graph of criterion number 2 creation of practical and playful activities.

Figure 7 shows the results of criterion number 2, creation of practical and playful activities. For item 2.1, 104 people indicated that it is very favorable for all ages, 41 that it is moderately favorable, and 12 that it is very unfavorable; that is, 66.2% of the participants fully agreed. For item 2.2, 61 people indicated that it is very favorable for all subjects, 59 that it is moderately favorable, and 37 that it is very unfavorable. For item 2.3, 129 people indicated that the preparation and application are simple (very favorable), 18 that it is moderately favorable, and 10 that it is not very favorable. Finally, for item 2.4, 145 people indicated that it is very favorable for the development of the proposed knowledge, while the remaining 12 indicated that it is very unfavorable. Figure 8 shows the results of criterion number 3, application of progressive material (theory). For item 3.1, 99 people indicated that it is very favorable for all ages, 54 that it is moderately favorable, and 4 that it is very unfavorable.


Fig. 8. Results graph of criterion number 3 application of progressive material (theory).

For item 3.2, 124 people indicated that it is very favorable for all subjects, 12 that it is moderately favorable, and 21 that it is very unfavorable. For item 3.3, 76 people indicated that the reuse of templates is very favorable, 66 that it is moderately favorable, and 15 that it is very unfavorable. For item 3.4, 56 people indicated that the use of progressive templates in group activities is very favorable, 56 that it is moderately favorable, and 45 that it is very unfavorable. Finally, for item 3.5, 75 people indicated that it is very favorable for the development of activities, 72 that it is moderately favorable, and 10 that it is very unfavorable.


Fig. 9. Graph of results of criterion number 4 evaluated using the game.

Figure 9 shows the results of criterion number 4, evaluation through the game. For item 4.1, 142 people indicated that it is very favorable for all ages and 15 that it is moderately favorable. For item 4.2, 93 people indicated that it is very favorable for all subjects, 2 that it is moderately favorable, and 15 that it is very unfavorable.


In item 4.3, 103 people indicated that it is very favorable to subdivide the students’ competencies, 39 people indicated that it is moderately favorable and 15 people that it is very unfavorable. Finally, in item 4.4, 76 people indicated that it is very favorable for the ease of teacher evaluation, 75 that it is moderately favorable, and 6 that it is very unfavorable.

4 Conclusions

As previously mentioned, the objective of this research was to systematize teaching through games, showing a structure that combines traditional education with modern teaching in which the main sources for acquiring knowledge are practice and preparation, later unified, organized, and classified through theory. Note-taking and extensive writing are replaced by micro-activities that give tangible knowledge of each subtopic of the main topic to be worked on; with these preparations and this systematization, skills and knowledge are achieved in accordance with the curriculum and the objectives set by each teacher, resulting in evaluations based on structured competencies that cover all the areas needed to analyze both the performance of the teacher's methodology and the skills obtained by the student. Likewise, since this application involved teachers and students in the field directly, the answers obtained offer a vision of how this topic could be deepened and improved to provide a comprehensive evolution of current education, in which the student investigates, collects, organizes, and applies the information received throughout the classes, as continuous and active work rather than an overload of information that is only captured on an exam sheet or in a notebook.


Design of a Predictive Model to Evaluate Academic Risk Using Data Mining

Shirley Alarcón-Loza(B), Diana Calderón-Onofre, Karen Mite-Baidal, and Mishel Macías-Plúas

Instituto Superior Tecnológico ARGOS, Daule, Ecuador
[email protected]

Abstract. The impact of academic risk in college can be anticipated through data analysis in order to minimize its effect on the educational community. This article seeks to establish a predictive model that evaluates the academic risk of students in the online modality of a technological institute, from the perspective of performance, for its timely detection and early action. The database included the demographic data and the grades of various subjects taken during 2021 for a sample of 1023 students of the 2020 academic period. For this research, the factors Attendance and General Average were considered in order to evaluate the performance that affects academic risk. The Cross Industry Standard Process for Data Mining methodology and the Waikato Environment for Knowledge Analysis software were used, applying evaluator algorithms and search methods to determine suitable predictive attributes for each factor, with attendance records and general averages being the most significant. The results showed that, for the Attendance variable, the best classification algorithm was Random Tree, with a precision of 99.70% and an area under the curve (ROC Area) of 0.992. Regarding the General Average variable, the best classification algorithm was J48, with values of 98.50% and 0.937, respectively. It is suggested that further data mining research be developed to improve academic quality and services in this study modality.

Keywords: Data mining · Predictive model · Classification algorithm · Higher education · Academic risk

1 Introduction

Technology has brought with it a large number of significant changes that result in the generation of large volumes of data, which are stored without treatment and then lost without any action having been taken on them. The current trend is to convert data into useful information that facilitates timely and effective decision-making for the common good of society. In this context, data mining emerges as an analysis tool made up of a set of techniques that work on the data for different purposes, such as revealing situations that are not detected at first glance, in the form of patterns and models.


These patterns and models make it possible to understand behaviors and predict future actions [8, 37]. Within the field of higher education, information related to students, teachers, academic management [3, 33], and other data resulting from additional processes, such as extracurricular activities and qualifications, is stored. The objective of this article is to establish a predictive model that evaluates the academic risk of students in the online modality of a technological institute in Guayaquil, Ecuador. For this purpose, the academic data of the participating sample was organized and cleaned, the CRISP-DM methodology was selected, and the Waikato Environment for Knowledge Analysis (WEKA) program was applied. Decision trees were generated using classification algorithms, and the results obtained were interpreted. It should be noted that academic risk [7, 10, 15, 29] is a state that can occur at any point of the student's trajectory, placing the student in any of the conditions described in Table 1. For that reason, data mining applied to higher education has significant advantages in predicting possible unfavorable scenarios. Moreover, in online higher education there are many more possibilities to address, such as evaluating the management and interactivity of the teacher [1], in the permanent search to improve educational quality.

Table 1. Conditions to be at academic risk

Conditions | Works
Academic backwardness | [11, 12, 42]
Poor academic performance | [9, 18–20, 22, 23, 40]
Low academic achievement | [14, 41]
Academic failure | [7, 30]

2 Materials and Methods

In the framework of this article, the information was the main source from which the results emerged; by themselves, the raw data did not constitute clear evidence of any situation associated with academic risk for the group of participating students. In this sense, the source of the information, the treatment given to the data, the data mining methodology applied, the processing in the corresponding software, and the interpretation of the results obtained are explained below.

2.1 Database and Resources

The research was based on a higher education institution that offers the online study modality in technological careers. To start in this new environment, it entered into an inter-institutional agreement with the local government to grant scholarships to a significant number of its citizens.


The characteristics of those awarded were: a high school degree, belonging to priority groups or being in a situation of vulnerability, residence in the city, a low socioeconomic level, a high graduation grade, an age between 16 and 45 years, and no registered university degree. Table 2 shows the research population, which corresponds to the 2,400 students of the 2020 academic period who meet the aforementioned characteristics. This group was registered in the 17 technological careers offered by the institute. As part of the general data of the careers, the broad field of knowledge was identified, which constitutes the structure for the codification of professional titles and academic degrees. From this aspect, the sample was selected using the non-probabilistic convenience technique, considering the broad field of Administration, which registers the largest number of students among all the careers that comprise it.

Table 2. Population distribution and sample used

Population: 2,400 students from the 2020 academic period of the online study modality

Sample (broad field: Management)
Technological career | N° of students
Human Talent Management | 153
Foreign commerce | 152
Sales | 103
Accounting | 181
Marketing | 177
Management | 181
Tourism Operations Management | 76

For this research study, the database of 1023 students was considered, according to the sample described, with the characteristics reflected in Table 3. The demographic aspects resulted from the initial registration made by the students when applying for the scholarship; the selected subjects correspond to those shared by all the careers in bimesters 2, 3, 4, 5, and 6, which were carried out during the year 2021; the grade records include the general activities recorded in the corresponding software; and the attendance record includes the recurring states.


Table 3. Characteristics of the students of the 2020 academic period, online modality

Characteristics | Description
Demographic aspects | Name, Career, Gender, Marital status, Age, Disability, Sector, Parish, Graduation grade, Quintile
Transcripts for bimesters 2–6 | Forum, First Project Delivery, Recovery 1, Participation, Second Project Delivery, Recovery 2, Final exam, Supplementary Exam, Final average
Subjects considered for bimesters 2–6 | Management, Calculus I, Financial Accounting, Statistics, Business Ethics
Attendance record for bimesters 2–6 | Present, Late, Absent

As a technological resource, the open-source data mining software WEKA [28], based on the Java programming language, was used. It supports different data formats, including ARFF, CSV, and LibSVM, and contains many algorithms that can be applied to different data sets to support further analysis. The information was processed in WEKA to generate the data mining results and their evaluation.

2.2 Methodology

The CRISP-DM methodology is a standardized methodology for the data mining process [21] and for knowledge discovery in databases [39]. According to [38], this methodology consists of six phases; its application in this article is described below:
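To make this setup concrete, the following minimal Java sketch (an illustration under assumptions, not part of the original study) loads an exported data file through the WEKA API and declares the class attribute; the file name students_attendance.csv and the position of the class column are hypothetical placeholders.

```java
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class LoadStudentData {
    public static void main(String[] args) throws Exception {
        // Load the exported student records (DataSource accepts ARFF, CSV, and other supported formats).
        // "students_attendance.csv" is a hypothetical file name.
        DataSource source = new DataSource("students_attendance.csv");
        Instances data = source.getDataSet();

        // Declare the last column as the class variable (e.g., RISK_ASSIST), if not already set.
        if (data.classIndex() == -1) {
            data.setClassIndex(data.numAttributes() - 1);
        }

        System.out.println("Instances: " + data.numInstances());
        System.out.println("Attributes: " + data.numAttributes());
        System.out.println("Class attribute: " + data.classAttribute().name());
    }
}
```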


Business or Problem Understanding Stage
The academic data generated in each academic period in the online modality does not receive any treatment that would reveal situations of risk in this group of students. Having identified this scenario, the present investigation was proposed to determine a predictive classification model that recognizes whether a student could be at academic risk, from the perspective of performance, so that the situation can be detected in time and solved opportunely. For this, the data mining classification technique was used and the most suitable decision algorithms were validated, based on the data set collected from the participating sample.

Data Understanding Stage
In this phase, the data was collected and filtered based on the characteristics of the students, shown in Table 2, organizing 1023 records with 75 different fields. These fields were standardized and stored in Microsoft Excel for their corresponding preparation.

Data Preparation Stage
Academic risk variables created in the database. To determine academic risk, a dimension called academic performance was distinguished, and within it the variables Attendance [27, 35] and General Average were selected to evaluate the condition of the students regarding risk. On this basis, it is necessary to create a dependent variable, also known as a class variable, of dichotomous nominal type (YES/NO), from which the predictive classification model is executed. The dependent variables for this study that directly affect academic performance [5, 6, 13, 26, 34], and therefore academic risk, are the Attendance class variable (RISK_ASSIST) and the General Average class variable (RISK_AVG).

Variables considered for the training data set in WEKA. To build the predictive model, each academic performance factor has independent variables and a dependent variable. Table 4 details the variables entered into the WEKA program for the academic performance associated with the Attendance factor. Under the institutional policy, a student's attendance must be at least 80% of the total sessions of a subject; otherwise, the student fails. Students receive 8 class sessions for each subject in an academic period, and a student with 3 absences within the academic period is considered at risk. Table 4 also reflects the dependent variable RISK_ASSIST and its respective formula in the database.

Table 4. Description of the variables considered for the class attendance factor

Variables | Meaning | Type
ADM_PRESENT, CALC_PRESENT, ACCOUNT_PRESENT, STA_PRESENT, ETHICS_PRESENT | The number of times the student attends the Administration (ADM), Calculus (CALC), Accounting (ACCOUNT), Statistics (STA), and Ethics (ETHICS) class | Numeric
ADM_LATE, CALC_LATE, ACCOUNT_LATE, STA_LATE, ETHICS_LATE | The number of times the student registered late in the Administration (ADM), Calculus (CALC), Accounting (ACCOUNT), Statistics (STA), and Ethics (ETHICS) class | Numeric
ADM_ABSENT, CALC_ABSENT, ACCOUNT_ABSENT, STA_ABSENT, ETHICS_ABSENT | The number of times the student does not attend the Administration (ADM), Calculus (CALC), Accounting (ACCOUNT), Statistics (STA), and Ethics (ETHICS) class | Numeric
RISK_ASSIST (class variable) | Academic risk = YES.SET(ADM_PRESENT=6; "NO") | Nominal dichotomous (YES/NO)

Table 5 details the independent and dependent variables that were entered into the WEKA program for the academic performance associated with the General Average factor. Within the framework of the scholarship, the student must maintain a general average of at least 80 points across all the subjects taken; a student whose grade is below 80 is at academic risk. Table 5 also reflects the dependent variable RISK_AVG and its respective formula in the database; a simplified code sketch of both labeling rules is given after the table.

Table 5. Description of the variables considered for the general average factor

Variables | Meaning | Type
Career | Educational program of the broad field of Administration | Nominal
Age | Age at admission (2020) | Numeric
Gender | Student's gender | Nominal (Male/Female)
Disability | A disability that the student has | Nominal (YES/NO)
Quintile | Distribution of students according to monthly income | Nominal (Quintile 1–Quintile 5)
ADM_AVG | Administration final average | Numeric
CALC_AVG | Calculus final average | Numeric
ACCOUNT_AVG | Accounting final average | Numeric
STA_AVG | Statistics final average | Numeric
ETHICS_AVG | Ethics final average | Numeric
RISK_AVG (class variable) | Academic risk = IF(AVERAGE(ADM_AVG; CALC_AVG; ACCOUNT_AVG; STA_AVG; ETHICS_AVG) >= 80; "NO"; "YES") | Nominal dichotomous (YES/NO)
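As a hedged illustration of the two labeling rules described above (3 or more absences for RISK_ASSIST, a general average below 80 points for RISK_AVG), the following Java sketch derives both class values from plain inputs. It is a simplification for clarity, not the authors' spreadsheet formulas, and the method and variable names are assumptions.

```java
/** Minimal sketch of the labeling rules described in the text (not the authors' database formulas). */
public class RiskLabels {

    /** RISK_AVG: a student is at risk when the general average over the five subjects is below 80. */
    static String riskAvg(double adm, double calc, double account, double sta, double ethics) {
        double average = (adm + calc + account + sta + ethics) / 5.0;
        return average >= 80.0 ? "NO" : "YES";
    }

    /** RISK_ASSIST: with 8 sessions per subject, 3 or more absences in the period places the student at risk. */
    static String riskAssist(int totalAbsences) {
        return totalAbsences >= 3 ? "YES" : "NO";
    }

    public static void main(String[] args) {
        System.out.println(riskAvg(85, 78, 90, 82, 88)); // NO (average 84.6)
        System.out.println(riskAssist(4));               // YES
    }
}
```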

Modeling Stage
Aspects considered for the test data set in WEKA. Two experiments were considered for the dependent variables RISK_ASSIST and RISK_AVG. Before applying the classification algorithms, a selection of attributes was carried out, which reduces the work of processing irrelevant attributes. In the attribute selection process within the WEKA application, evaluator algorithms and search methods were applied. One of the evaluator algorithms used was CfsSubsetEval [4, 17] with the BestFirst search method [16], to select the independent variables that most closely affect the dependent variable. Another evaluator algorithm used was CorrelationAttributeEval [2] with the Ranker search method [40], which evaluates the value of a variable by determining its correlation with the class variable.
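The sketch below illustrates that attribute-selection step with the WEKA Java API, pairing CfsSubsetEval with BestFirst and CorrelationAttributeEval with Ranker; the input file name and the class-column choice are assumptions made for the example.

```java
import java.util.Arrays;
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.BestFirst;
import weka.attributeSelection.CfsSubsetEval;
import weka.attributeSelection.CorrelationAttributeEval;
import weka.attributeSelection.Ranker;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SelectAttributes {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("students_attendance.csv").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1); // class variable, e.g., RISK_ASSIST

        // CfsSubsetEval + BestFirst: selects a subset of attributes correlated with the class.
        AttributeSelection cfs = new AttributeSelection();
        cfs.setEvaluator(new CfsSubsetEval());
        cfs.setSearch(new BestFirst());
        cfs.SelectAttributes(data);
        System.out.println("CfsSubsetEval/BestFirst: " + Arrays.toString(cfs.selectedAttributes()));

        // CorrelationAttributeEval + Ranker: ranks every attribute by its correlation with the class.
        AttributeSelection corr = new AttributeSelection();
        corr.setEvaluator(new CorrelationAttributeEval());
        corr.setSearch(new Ranker());
        corr.SelectAttributes(data);
        System.out.println(corr.toResultsString());
    }
}
```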


Table 6 reflects the results after applying the evaluator algorithms and the search methods for the variable RISK_ASSIST.

Table 6. Results of the application of the evaluator algorithm for the selection of attributes of the RISK_ASSIST variable

Attribute | CfsSubsetEval / BestFirst | CorrelationAttributeEval/Ranker – Average Merit | Average Rank
ADM_PRESENT | 0% | 0.373 ± 0.015 | 1 ± 1.04
ADM_LATE | 80% | 0.073 ± 0.014 | 3 ± 0.46
ADM_ABSENT | 100% | 0.375 ± 0.014 | 4.9 ± 0.83
CALC_PRESENT | 10% | 0.237 ± 0.012 | 1 ± 0.3
CALC_LATE | 0% | 0.137 ± 0.013 | ±0
CALC_ABSENT | 50% | 0.236 ± 0.011 | 9 ± 0.3
ACCOUNT_PRESENT | 80% | 0.366 ± 0.008 | 4 ± 0.92
ACCOUNT_LATE | 10% | 0.137 ± 0.013 | ±0
ACCOUNT_ABSENT | 0% | 0.353 ± 0.008 | 6 ± 0.8
STA_PRESENT | 100% | 0.805 ± 0.005 | ±0
STA_LATE | 0% | 0.02 ± 0.01 | ±0
STA_ABSENT | 0% | 0.582 ± 0.011 | ±0
ETHICS_PRESENT | 100% | 0.689 ± 0.01 | ±0
ETHICS_LATE | 0% | 0.197 ± 0.018 | ±0
ETHICS_ABSENT | 0% | 0.305 ± 0.008 | ±0

Table 7 reflects the results after applying the evaluator algorithms and the search methods for the variable RISK_AVG.

Table 7. Results of the application of the evaluator algorithm for the selection of attributes of the variable RISK_AVG

Attribute | CfsSubsetEval / BestFirst | CorrelationAttributeEval/Ranker – Average Merit | Average Rank
Career | 0% | 0.053 ± 0.005 | ±0
Age | 0% | 0.017 ± 0.001 | 8.4 ± 0.49
Gender | 0% | 0.095 ± 0.01 | ±0
Disability | 0% | 0.013 ± 0.007 | 9.3 ± 0.9
Quintile | 0% | 0.013 ± 0.003 | 9.3 ± 0.64
ADM_AVG | 0% | 0.446 ± 0.015 | ±0
CALC_AVG | 100% | 0.524 ± 0.02 | ±0
ACCOUNT_AVG | 100% | 0.582 ± 0.014 | ±0
STA_AVG | 80% | 0.865 ± 0.008 | 1.9 ± 0.3
ETHICS_AVG | 100% | 0.881 ± 0.007 | 1.1 ± 0.3

The classification algorithms [32] used to identify the causes of academic risk related to performance were the J48 and Random Tree decision trees. The J48 algorithm [6] shows the percentage of correctly and incorrectly classified instances in the confusion matrix, together with the precision detailed per class value. This algorithm makes it possible to discover specific relationships between instances and attributes, using the best attributes of the generated tree. The Random Tree classification algorithm [18, 19] builds a tree that considers a random number of attributes and instances at each node. For the attribute selection process and the application of the classification algorithms, the evaluation and validation mode called cross-validation (folds) [4] was applied, since it provides an average precision per class value based on the K iterations executed over the data.

Evaluation Stage
According to Table 8, two measures were used to evaluate the quality of the classification: the Area Under the Curve (ROC-AUC) [24] and Accuracy [25]. Under these measures, the J48 and Random Tree algorithms each provide a point in ROC space, as they are binary classifiers. The ROC analysis relates the true-positive rate to the false-positive rate; when the area under the curve is near unity, the classifier approaches the perfect classifier. Regarding accuracy, the percentage of correctly classified instances over all test instances is obtained. A sketch of this training-and-evaluation step, using the WEKA API, follows Table 8.

Table 8. Evaluation of the quality of the applied algorithms

Algorithm | RISK_ASSIST: Accuracy | RISK_ASSIST: ROC Area | RISK_AVG: Accuracy | RISK_AVG: ROC Area
J48 | 98.50% | 0.952 | 98.50% | 0.937
Random Tree | 99.70% | 0.992 | 97.30% | 0.923
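The following hedged sketch shows how figures of this kind can be produced with the WEKA API, training J48 and Random Tree and evaluating them with cross-validation to report accuracy and ROC area; the data file, the choice of 10 folds, and the random seed are assumptions, not settings reported by the authors.

```java
import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomTree;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class EvaluateClassifiers {
    public static void main(String[] args) throws Exception {
        Instances data = new DataSource("students_attendance.csv").getDataSet(); // hypothetical file
        data.setClassIndex(data.numAttributes() - 1); // class variable, e.g., RISK_ASSIST

        for (Classifier cls : new Classifier[]{new J48(), new RandomTree()}) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(cls, data, 10, new Random(1)); // 10-fold cross-validation (assumed K)
            System.out.printf("%s: accuracy = %.2f%%, ROC area = %.3f%n",
                    cls.getClass().getSimpleName(),
                    eval.pctCorrect(),
                    eval.weightedAreaUnderROC());
        }
    }
}
```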


Implementation Stage
The predictive model [22, 36] was generated by applying classification algorithms in the WEKA program. Several tests were carried out, and the two most significant algorithms were chosen. According to Table 8, the Random Tree algorithm was applied to assess the academic risk related to Attendance. Eight knowledge rules were generated with the YES value (see Fig. 1), indicating that there is a risk of failing two or more subjects. The generated knowledge rules that declare academic risk are interpreted below, using literals: A) If ACCOUNT_ABSENT < 1.5 & ETHICS_ABSENT >= 2, then there is academic risk; 15 students were located. B) If ACCOUNT_ABSENT < 1.5 & ETHICS_ABSENT < 2 & ETHICS_PRESENT < 6, then there is academic risk; 42 students were located. C) If ACCOUNT_ABSENT < 1.5 & ETHICS_ABSENT < 2 & ETHICS_PRESENT >= 6 & STA_PRESENT < 6, then there is academic risk; 52 students were located. D) If ACCOUNT_ABSENT < 1.5 & ETHICS_ABSENT < 2 & ETHICS_PRESENT >= 6 & STA_PRESENT >= 6 & CALC_ABSENT >= 2.5, then there is academic risk; 3 students were located. E) If ACCOUNT_ABSENT < 1.5 & ETHICS_ABSENT < 2 & ETHICS_PRESENT >= 6 & STA_PRESENT >= 6 & CALC_ABSENT
= 6 & STA_PRESENT >= 6 & CALC_ABSENT < 2.5 & ADM_PRESENT >= 5.5 & ACCOUNT_PRESENT < 6, then there is academic risk; 1 student was located. G) If ACCOUNT_ABSENT >= 1.5 & ADM_PRESENT >= 0.5, then there is academic risk; 11 students were located. H) If ACCOUNT_ABSENT >= 1.5 & ADM_ABSENT < 0.5 & ACCOUNT_PRESENT < 4.5, then there is academic risk; 8 students were located. The J48 algorithm was applied to evaluate the academic risk related to the general average of the subjects. A total of three knowledge rules were generated with the YES value (see Fig. 2), indicating that there is a risk of failing one or more subjects. The generated knowledge rules that declare academic risk are interpreted below, using literals: A) If the value of ETHICS_AVG 50 & ADM_AVG .05 and L with t = −1.667 and p = .108 > .05. With this, we conclude that, as a starting point, the groups have similar grades and mean scores. After the implementation, a new evaluation was applied to both groups with the same procedure. According to Levene's test, all variances are homogeneous. Using the t statistic for independent samples, for M we obtain t = −1.179 and p = .250 > .05, which maintains the equality of the means. However, for NS we have t = −3.422 and p = .003 < .05, and for L we have t = −3.762 and p = .001 < .05. This shows that there were better grades in these last two subjects, and therefore there is a statistical difference between the means of the groups. Pretest and post-test (intra-group). The control group was chosen, and the aim was to establish whether there were differences between the scores on the diagnostic test and the knowledge test. In compliance with the requirements for parametric tests, the related-samples t-test was chosen. For M, we have t = −5.418 and p = .000 < .05; for NS, t = −2.843 and p = .015 < .05; for L, t = −3.323 and p = .006 < .05. It is determined that all grades improved after two weeks of preparation, even without using the developed application. Similarly, the same statistic was applied to the grades in the three subjects for the experimental group. For M, we have t = −6.356 and p = .000 < .05; for NS, t = −8.533 and p = .000 < .05; for L, t = −6.789 and p = .000 < .05. This corroborates an improvement in the mean grade point average of the experimental group.
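For readers who want to reproduce comparisons of this kind, the sketch below applies an independent-samples t-test and a related-samples (paired) t-test using the Apache Commons Math library; the library choice and the sample arrays are illustrative assumptions and are not the data or tools used in the study.

```java
import org.apache.commons.math3.stat.inference.TTest;

public class GroupComparison {
    public static void main(String[] args) {
        // Hypothetical post-test grades for the control and experimental groups (not the study's data).
        double[] control      = {7.0, 7.5, 6.8, 8.0, 7.2, 7.9, 6.5, 7.4};
        double[] experimental = {8.2, 8.7, 7.9, 9.0, 8.4, 8.8, 7.8, 8.5};

        TTest tTest = new TTest();

        // Independent-samples t-test (inter-group comparison of mean grades).
        System.out.printf("Inter-group: t = %.3f, p = %.3f%n",
                tTest.t(control, experimental),
                tTest.tTest(control, experimental));

        // Related-samples (paired) t-test, as used for the pretest/post-test intra-group comparison.
        double[] pretest  = {6.1, 6.4, 5.9, 6.8, 6.2, 6.6, 5.8, 6.3};
        double[] posttest = {7.0, 7.2, 6.8, 7.9, 7.1, 7.5, 6.7, 7.2};
        System.out.printf("Intra-group (paired): t = %.3f, p = %.3f%n",
                tTest.pairedT(pretest, posttest),
                tTest.pairedTTest(pretest, posttest));
    }
}
```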

4 Discussions and Conclusions

This application has been developed to complement the online education that students maintain in the context of the COVID-19 pandemic. Considering that some students have limited access to the Internet, it is necessary to find ways to motivate them and improve the teaching processes, taking advantage of technological development. Based on the results of this experiment, it can be seen that the two groups started with similar average grades. This ensures that subsequent results have greater validity if differences are found. The inter-group analysis showed that the average scores of the experimental group that used the application were higher than those of the control group. From the statistical point of view, differences were found for the NS and L grades, while for M this was not the case. The interpretation given in conjunction with the teaching staff is that mathematics requires greater practical skill, which is limited in online education. In contrast, the remaining subjects are more theoretical, and for them the presentation of three-dimensional multimedia material is an educational complement. At first glance, better results can be seen for those students who used the AR application in the more theoretical subjects. It is essential to mention that, in the intra-group tests, there was an improvement in the average scores between the pretest and post-test.


This means that those individuals who received conventional teaching also increased their knowledge without using virtual tools. This could be due to other external factors, such as the use of different teaching methods or better individual performance; it could also be due to the influence of other sociodemographic variables. Given that age and ethnic denomination were similar in the groups, we disregarded them and focused the analysis only on grades. However, age was considered in the design stage; as mentioned by Yadav et al. [24], it is recommended that children using the application be at least seven years old. Although there are proposals with similar approaches, our application differs in that three-dimensional objects and words are used, as in [19]. Compared to Kasinathan et al. [20], it uses a book as a base, where the letters are replaced by videos, which are considered more interactive. When comparing it with the study of Chang et al. [21], the subjects are different and thus the approaches change: our work focuses on the theoretical understanding of knowledge through audiovisual material, while in [21] physical exercises are part of physical education. In our study, only theoretical academic performance was analyzed, and therefore a direct comparison cannot be made, although both are based on quasi-experiments. Despite this, positive partial results have been obtained that set precedents for implementing AR in primary education. On the other hand, the proposal by Kasinathan et al. [20] is a didactic material that shows marine environments but does not seek to generate knowledge, so it cannot be directly compared. Finally, the studies [15] and [16] belong to university education, and their population has other needs, so they only show uses of AR in education. The main limitation of the developed application is the smartphone storage it requires; the videos are multimedia material that combines images and sound, which increases the size of the files. Another limitation is related to the population, because the groups were formed by convenience according to their place of residence or the shared use of smartphones. This could have produced a bias in the results or generated variation, so it is recommended to develop RCT experiments. Considering that this study was well received, the authors plan to improve the design, which includes renewing the multimedia material and the user interface and training more teachers. They also propose conducting tests in other populations, changing subjects, and extending the analysis with more variables. All this will allow better conclusions to be formulated that will serve as a guide for future research.

Acknowledgments. The authors thank Universidad Indoamérica for its support and financing under the "Big data analysis and its impact on society, education and industry" project.

References

1. Caboni, F., Hagberg, J.: Augmented reality in retailing: a review of features, applications and value. Int. J. Retail Distrib. Manag. 47, 1125–1140 (2019). https://doi.org/10.1108/IJRDM-12-2018-0263
2. Lucero-Urresta, E., Buele, J., Córdova, P., Varela-Aldás, J.: Precision shooting training system using augmented reality. In: Gervasi, O., et al. (eds.) Computational Science and Its Applications – ICCSA 2021. LNCS, pp. 283–298. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-87013-3_22
3. Palacios-Navarro, G., Hogan, N.: Head-mounted display-based therapies for adults post-stroke: a systematic review and meta-analysis. Sensors 21, 1–24 (2021). https://doi.org/10.3390/s21041111
4. Palacios-Navarro, G., Albiol-Pérez, S., García-Magariño García, I.: Effects of sensory cueing in virtual motor rehabilitation. A review. J. Biomed. Inform. 60, 49–57 (2016). https://doi.org/10.1016/j.jbi.2016.01.006
5. Kim, J.J., Wang, Y., Wang, H., Lee, S., Yokota, T., Someya, T.: Skin electronics: next-generation device platform for virtual and augmented reality. Adv. Funct. Mater. 31, 2009602 (2021). https://doi.org/10.1002/adfm.202009602
6. Varela-Aldás, J., Buele, J., Amariglio, R., García-Magariño, I., Palacios-Navarro, G.: The cupboard task: an immersive virtual reality-based system for everyday memory assessment. Int. J. Hum. Comput. Stud. 167, 102885 (2022). https://doi.org/10.1016/J.IJHCS.2022.102885
7. García-Magariño, I., Gonzalez Bedia, M., Palacios-Navarro, G.: FAMAP: a framework for developing m-health apps. In: Rocha, Á., Adeli, H., Reis, L.P., Costanzo, S. (eds.) WorldCIST'18 2018. AISC, vol. 745, pp. 850–859. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77703-0_83
8. Mohanty, P., Hassan, A., Ekis, E.: Augmented reality for relaunching tourism post-COVID-19: socially distant, virtually connected. Worldw. Hosp. Tour. Themes 12, 753–760 (2020). https://doi.org/10.1108/WHATT-07-2020-0073
9. Rauschnabel, P.A., Babin, B.J., tom Dieck, M.C., Krey, N., Jung, T.: What is augmented reality marketing? Its definition, complexity, and future. J. Bus. Res. 142, 1140–1150 (2022). https://doi.org/10.1016/j.jbusres.2021.12.084
10. Verhey, J.T., Haglin, J.M., Verhey, E.M., Hartigan, D.E.: Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 16, e2067 (2020). https://doi.org/10.1002/rcs.2067
11. Wen, Y.: Augmented reality enhanced cognitive engagement: designing classroom-based collaborative learning activities for young language learners. Educ. Tech. Res. Dev. 69(2), 843–860 (2020). https://doi.org/10.1007/s11423-020-09893-z
12. Buele, P.A., Avilés-Castillo, F., Buele, J.: Repercusiones en la salud mental de los estudiantes de tercero de bachillerato: un caso de estudio. Publicare 1, 26–30 (2021). https://doi.org/10.56931/pb.2021.11_5
13. Varela-Aldás, J., Buele, J., Lorente, P.R., García-Magariño, I., Palacios-Navarro, G.: A virtual reality-based cognitive telerehabilitation system for use in the covid-19 pandemic. Sustainability 13, 1–24 (2021). https://doi.org/10.3390/su13042183
14. Asadzadeh, A., Samad-Soltani, T., Rezaei-Hachesu, P.: Applications of virtual and augmented reality in infectious disease epidemics with a focus on the COVID-19 outbreak. Inform. Med. Unlocked 24, 100579 (2021). https://doi.org/10.1016/j.imu.2021.100579
15. Chytas, D., et al.: The role of augmented reality in anatomical education: an overview. Ann. Anat. 229, 151463 (2020). https://doi.org/10.1016/j.aanat.2020.151463
16. Tang, K.S., Cheng, D.L., Mi, E., Greenberg, P.B.: Augmented reality in medical education: a systematic review. Can. Med. Educ. J. 11, e81 (2019). https://doi.org/10.36834/cmej.61705
17. Soltani, P., Morice, A.H.P.: Augmented reality tools for sports education and training. Comput. Educ. 155, 103923 (2020). https://doi.org/10.1016/j.compedu.2020.103923
18. Das, P., Zhu, M., McLaughlin, L., Bilgrami, Z., Milanaik, R.L.: Augmented reality video games: new possibilities and implications for children and adolescents. Multimodal Technol. Interact. 1, 8 (2017). https://doi.org/10.3390/mti1020008
19. Oranç, C., Küntay, A.C.: Learning from the real and the virtual worlds: educational use of augmented reality in early childhood. Int. J. Child-Comput. Interact. 21, 104–111 (2019). https://doi.org/10.1016/j.ijcci.2019.06.002
20. Kasinathan, V., Al-Sharafi, A.T.A., Zamnah, A., Appadurai, N.K., Thiruchelvam, V., Mustapha, A.: Augmented reality in ocean's secrets: educational application with attached book for students. Linguist. Cult. Rev. 5, 1123–1137 (2021). https://doi.org/10.21744/lingcure.v5ns1.1498
21. Chang, K.E., Zhang, J., Huang, Y.S., Liu, T.C., Sung, Y.T.: Applying augmented reality in physical education on motor skills learning. Interact. Learn. Environ. 28, 685–697 (2020). https://doi.org/10.1080/10494820.2019.1636073
22. Varela-Aldás, J., Palacios-Navarro, G., Amariglio, R., García-Magariño, I.: Head-mounted display-based application for cognitive training. Sensors 20, 1–22 (2020). https://doi.org/10.3390/s20226552
23. Ortiz, J.S., Palacios-Navarro, G., Andaluz, V.H., Guevara, B.S.: Virtual reality-based framework to simulate control algorithms for robotic assistance and rehabilitation tasks through a standing wheelchair. Sensors 21, 5083 (2021). https://doi.org/10.3390/s21155083
24. Yadav, S., Chakraborty, P., Kochar, G., Ansari, D.: Interaction of children with an augmented reality smartphone app. Int. J. Inf. Technol. 12(3), 711–716 (2020). https://doi.org/10.1007/s41870-020-00460-6

Storytelling as a Motivational Resource in the Therapy of Childhood Cancer

Mónica Liliana Castro Pacheco(B), Mateo Calle Loja, and Marco Segarra Chalco

Instituto Superior Tecnológico Particular Sudamericano, Cuenca, Ecuador
[email protected]

Abstract. Digital storytelling has proven to be an attractive and positive motivational resource in recent years, especially when it is focused on learning. The objective of the project revolved around the design and layout of an illustrated story for children suffering from childhood cancer, using new technologies such as Augmented Reality and Storytelling. The proposal was developed following the Design Thinking methodology with its five stages (Empathize, Define, Ideate, Prototype, and Test). The result was a graphic proposal built around a children's story that explains, in a creative and dynamic way, the therapy and recovery process of children with cancer, helping them cope better with this disease in their daily lives. The idea of the proposal centers on metaphors and stories with a positive message that effectively encourage a change in the children's mood, complemented by augmented reality as an additional technology, so that both children and their families can view the benefits of cancer recovery with optimism. This research project was developed by students and teachers of the Graphic Design program of the Instituto Tecnológico Particular Sudamericano in the city of Cuenca, Ecuador. The proposal was implemented with 30 children of Fisiosens (a therapy center). It is concluded that overcoming childhood cancer depends largely on the external motivation received by children during the therapy process and, of course, on the resources used, so digital storytelling can be of great support.

Keywords: Graphic design · Storytelling · Augmented reality · Illustrated story · Narrative therapy · Illustration

1 Introduction

Childhood cancer is a lethal disease [1] and has become a real-world problem, since it does not distinguish gender, age, or race; a high percentage of cases occurs in children from five to eleven years of age, with the corresponding negative impact on their families and their close social circle. The oncological patient not only feels the physical symptoms of the disease and its treatment, such as vomiting, nausea, weight loss, or fatigue, but is also affected by its emotional repercussions [2].


According to the Ecuadorian Society for the Fight against Cancer, 2,611 cases of cancer were reported in 2018, with an average of 145 cases per year; leukemia is the most frequent type, followed by neoplasms of the central nervous system with 13% and lymphomas with 10% [3]. The survival rate stood at 62%. The psychological dimension plays a fundamental role in the integral development of people, so the mood of affected patients must be taken into account, since it is important for facing the various therapeutic interventions: reducing the effects of the stress associated with the cancer and the fear of the disease, activating the immune system against the disease by establishing positive beliefs, and improving the motivation to change their lifestyle. Also clear are the strengthening of the "desire to live", the confrontation of despair, and the evaluation and correction of the patient's beliefs about the disease [4]. Sanchez [5] concludes in his research that hospitalization is harmful to patients, as fear, anxiety, and emotional reactions of restlessness are often caused by such procedures, along with the possibility of surgery, separation from parents, or death. For these reasons, psychological care of the patient is necessary. If the child survives the cancer, that is, if the treatment is effective, the most important thing is to achieve adequate social reintegration. It must be considered that children may feel anxious about getting sick again, their self-esteem may be reduced due to the physical changes they undergo, and their social relationships may be affected by hospitalization. Providing positive tools to parents and family members becomes paramount; this will make it easier for them to cope with this new situation and allow their children to resume their plans more successfully [6]. Children's and young people's literature is born verbally and is still maintained orally; it focuses on these target audiences and preserves the essence of literature, and it can also be read by any discerning reader. Today the focus is on the addressee, unlike the earlier focus on the author, context, and historicity. Just as life is made up of stories, children's and young adult literature is made up of stories, and stories are inventions that were once informative and moral; the important thing now is that reading should be fun [7]. The legend thus constitutes a dynamic tool to feed the imagination, especially for children; it must have the bases necessary for the formation of values in those to whom it is addressed, in addition to positive connotations for their future, stimulating their confidence and personal development. Through the story, children can express their feelings, fears, and worries. By listening to readings and conversations about the story, children make connections, reflect on what is happening in the story, relate it to their own situation, and give their experience a unique spiritual organization. Reading is more than deciphering signs; it is a form of conversation, a way of relating to the environment, listening to it, and understanding what is going on around them. By sharing stories, children learn about human emotions and observe how others react to humor, suffering, courage, kindness, and more [7]. According to Cardona et al. [8], the metaphor, in addition to providing relevant information about symptoms in family counseling, facilitates co-creation and adaptation to the context of the treatment session and the uniqueness of the counseled family.


Stories with wonderful pictures capture children's attention. In other words, reading books not only enriches vocabulary but also helps mental health: children become much more open-minded and gain a better understanding of subjects that are difficult for them to conceive [9]. In this sense, it can be said that storytelling helps to elaborate stories. In his research, Freire [10] indicates that it consists of the art of telling a story with the use of sensory language, presented in such a way that it allows listeners to internalize it and provokes a reaction, using interactivity through which the receiver expands it in his or her imagination. On the other hand, in the field of graphic design, the development of tools highlights the importance of producing components that help to create visual elements with the application of various resources. The use of color makes it possible to convey implicit messages to the people who read the story; a successful example that unites the theory and psychology of tonality is the representation of various emotions, which are reinforced by the peculiarity of the forms. A study conducted by Borja et al. [11] in Colombia specifies that play and didactics promote fun and a form of interaction; this form of interaction helps to improve quality of life in the face of the reality the children live and, by generating playful teachings, motivates children to feed their curiosity and interest. Many elements make up an illustration for children, since illustration plays an important role: shapes, lines, colors, anatomy, gestures, expressions, and behavior are present in the scenarios used to make illustrations for children [12]. The project proposed the use of several methodologies; one of them is the design of a specific tool developed within different fields (cultural, intellectual, advertising), but above all oriented to communication, to find a possible solution to the given problems with a positive, interdisciplinary, and especially social approach, for which the information collected must be known beforehand. This analytical and creative process involves a person in opportunities for the incubation of innovative ideas and takes as its center the perspective of end users to experiment, model and prototype, gather feedback, and redesign [12].

1.1 Use of Graphic Design

Graphic design has evolved; in this sense, deep research on the subject to be treated is needed to obtain a result in accordance with the needs of the community. Throughout its history and as a discipline it has had different definitions; one of them indicates that it is an adaptation of different graphic elements selected in the visual space [13]. According to Frascara [14], for communication to affect people's behavior, it must be detectable, attractive, and convincing, and it must be built on a good understanding of visual perception, of the psychology of knowledge and behavior of individuals, of their intellectual abilities, and of the cultural values of the audience. The project presented here was developed with the use and implementation of various adjacent areas of knowledge, such as branding, market research (marketing), ecological design, 2D and 3D character design, sketching, editorial design, design for the community, typography, and color theory, which allow for graphic proposals that meet and exceed the proposed objectives. Reviewing counterparts helps to obtain clearer ideas about the subject, and the results of other proposals make it possible to focus and guide the project correctly.


Among the counterparts analyzed, the story "Sparky's Tail", written by Sarah White and illustrated by Lisa Evans, stands out; it addresses the problem of childhood cancer through allegories and metaphors presented by the main characters of the story. Additionally, the story "The Adventures of the Bald Princess", written and illustrated by Jesús Francisco Marcos, covers the disease in a more direct way and serves as a different analog to the one mentioned above, allowing the results obtained from each of the referents to be observed and analyzed and the project to be focused correctly on what is expected of it [15].

1.2 Storytelling

Digital storytelling consists of a technique for telling stories using electronic media [16]. It is currently used in communication and education, as it allows interactive experiences to be built. This type of narrative is very versatile since, with the use of images, videos, and multimedia in general, it allows interaction with users; the stories can communicate and excite [10]. The stories told since time immemorial make people build a parallel, imaginary world in which the characters learn by their own means to face their problems with the resources available, so the protagonist can solve their conflicts in some way; in this sense, the reader is reflected in the outcome [17]. The most important thing is to know how to connect with the child; with that, the necessary motivation is awakened, encouraging them to approach literature in a more creative way. It is important to consider the use of eye-catching elements such as illustrations or interactions to ensure the success of this resource. A great storytelling technique is to combine it with technology (digital storytelling); through these digital stories, students become both content creators and consumers [18]. Design Thinking is a fundamental tool for analyzing complex problems and solving them collectively. Thanks to this methodology, it has been possible to develop projects from a triple perspective (viable, feasible, and desirable), so that a triple positive impact is generated both on the company or organization and on the client or user, always starting the projects from an approach in which people are at the center of the reflection. In this way, difficulties and shortages can be detected, and safe solutions, and in many cases alternatives, can be offered for each of them [19]. It becomes a great possibility given that the user is the fundamental part of the work, placing their primary needs at the start of the work, with solutions born creatively to solve that problem. Because it allows work throughout the creation process, it also helps to find failures in some cases; this is the most interesting aspect of this method, since its iterative action makes it possible to see the failures and solve them several times while looking for a way to neutralize whatever prevents reaching the desired result. Designers know that to identify and solve real-life problems more effectively, it is necessary to approach them from different angles and points of view [20]. Thanks to its five stages, it makes the process practical, because the user is always at the center of the whole procedure, understanding what their needs are in order to respond with new products and services. In the Empathize stage, it is necessary to put yourself in people's shoes to generate adequate solutions to their needs and realities.


Listen to what they say and what they do, but also observe what they do in order to understand why they do it. In the Define stage, one can select from the information gathered during the empathy phase and keep the information that really adds value, allowing interesting opportunities to be discovered and new challenges to be identified whose solution will help to obtain an innovative result. Upon reaching the next stage, Ideate, the goal is to generate as many options as possible. You should not stick with the first idea, because it has probably occurred to your competitors as well; in this phase you should apply expansive thinking and shy away from judgments to let creativity help you dream up new possibilities. In the Prototype phase, ideas are turned into reality. Building prototypes helps to make ideas palpable and to visualize possible solutions, highlighting elements that need to be improved or refined before reaching the result. Finally, there is phase five, Testing. All the time spent developing an idea without testing it in the market is money wasted, because it is not known whether the proposal will work. This phase is crucial because the prototype is tested with end users, which helps identify significant improvements, bugs to solve, or possible shortcomings. During this phase, the idea evolves into the final solution being sought. The goal of the first generation of designers (those of the industrial revolution) was to satisfy people's needs through industry. During the twentieth century, their work contributed greatly to improving the standard of living in the developed world. However, we now find ourselves in a scenario where basic needs are already met and many others have emerged. The world has changed: society, family patterns, the way we relate to each other, the way we consume or work, and even our values are different and subject to constant evolution. Design Thinking is a way to learn while creating and seeking to implement the solutions that best suit the needs of users, all while avoiding the high costs of other, more traditional methods in which, if something goes wrong, there may be no way back. Regarding the psychology of color, Garcia [21], in his article on the psychology of color applied to virtual courses to improve students' level of learning, indicates that in the seventeenth century Newton was the first who, by means of a crystal prism, decomposed light, which, when incident on a screen, appeared in the form of a band of various colors. The different path followed by the rays is due to their wavelength, and each one corresponds to a color. Knowing that luminous radiation constitutes only a small part of the spectrum of radiation, the appreciation of colors is based on a complicated coordination of physical, physiological, and psychological processes. The psychology of color and its application offers the reader an infinite range of environments and combinations that embellish the proposed result; some colors, stronger than others, make the contrasts and harmonies reach users with greater meaning, which makes it possible to provoke concrete reactions and feelings. It can also be said that color speaks more precisely than form itself. According to the research by Heller [22], colors and feelings are not accidental; associations are not a matter of taste but result from experiences rooted since childhood in language and thought, thanks to symbolism and tradition. The use of new technologies makes it possible to communicate the message more effectively; at present, it contributes not only to automation but also to technification in the new digital era [16].


The proposal has as one of its objectives the use of Augmented Reality (AR). This technology complements the reader's experience by changing the perception of and interaction with the real world: the user remains in the real environment while additional information is generated by a device, something increasingly widespread in society. AR presents some common characteristics, such as the inclusion of virtual 2D and 3D graphic models in the user's field of vision. The main difference is that AR does not replace the real world with a virtual one; on the contrary, it keeps the real world seen by the user and complements it with virtual information superimposed on the real one. The user never loses contact with the real world within the field of view and, at the same time, can interact with the superimposed virtual information. AR therefore combines the real world with the virtual world and presents them to the user with real-time interaction, using the interface of the existing environment, in this case even in three dimensions. This set of techniques makes it possible to enter games, scenes, and spaces where the user performs actions, moves, and interacts fully with the digital content.

Childhood cancer and its psychological impact. As mentioned above, cancer is a disease with a broad spectrum that does not discriminate by age or gender, and children are among the most affected. The symptoms of the disease and its treatment (vomiting, nausea, weight loss, or fatigue) vary from person to person; at an early age they are even stronger, and the psychological dimension is especially affected. When a person is diagnosed with cancer, a great number of emotions develop together: fear, loneliness, depression, anger, among others; receiving this diagnosis is quite difficult [4]. The absence of information about childhood cancer in our environment hinders warning children and restricts the progressive improvement of treatments, medical practices, and well-focused therapy, because many people prefer not to receive it. For this reason, this proposal is intended to improve willingness to receive therapy, since it aims to provide a psycho-pedagogical tool that helps break the taboo of talking about issues such as childhood cancer with this audience. It is in this instance that this type of strategy is intended to help in coping with the disease; the children's level of development will determine the nature of the emotional impact of the cancer and the sequelae it leaves. The therapeutic story assists by creating an alternate reality; it is not necessary to be a great writer to write it. The most important thing is to capture the essence of the message through the design and to connect it with the history of the child or his or her environment. The objectives set for this proposal are:
1. To design and illustrate a children's story to help in the recovery process of children suffering from cancer in the city of Cuenca, with the use of narrative and imaginative therapy.
2. To apply the Design Thinking methodology as the basis of the proposal.
3. To implement the story with the use of Augmented Reality as a technological innovation within the therapy.


2 Methods and Materials

The methods used are editorial and illustration design to develop a story that reflects the feelings and actions of a patient diagnosed with childhood cancer; the idea was to generate empathy with those affected and their environment and to promote the importance of the feelings created within the psychological spectrum of the patients. For this, the Design Thinking methodology was taken into consideration, which has five stages (empathize, define, ideate, prototype, and evaluate) and combines innovation activities with a people-centered design philosophy [23]. Development research [24] has been applied with the objective of designing a book-object to teach the Fundamentals of Design by means of two- and three-dimensional mechanisms, to provide the institution with innovative didactic resources for the training of graphic designers, and to break with tradition [25]. Another result of innovative methodology within design is the "Morlapolys" game for the dissemination of the cultural heritage of Cuenca, Ecuador, whose objective was to design a board game for the dissemination of cultural heritage using modern techniques and the theoretical foundations of graphic design [25]. All these methodologies are rich in exploiting innovation as a process within design research. Among the scientific research methods used in the project, the main axis was the observation method, which consists of selecting a thing, person, or object to be analyzed and obtaining information in a semi-direct way from its behavior. This methodology allows first-hand research to be carried out without intervening directly in the actions analyzed; with it, it was possible to examine the daily activities of a person diagnosed with cancer. Another instrument used to obtain information consisted of interviews with psychologists specialized in oncological therapies, who agreed to provide information and new knowledge and, above all, guidance on how to apply it professionally in the development of the project. They agreed that the story helps in therapy, since there is evidence that narrative therapy is what works best with children, above all in catastrophic clinical pictures, so stories work in psycho-oncology conditions, in problems related to a child's subsequent stressful situations, and in adaptation problems. The survey was another element used in the compilation of the research; it was the first step to obtain the important data for the analysis and interpretation of the results. The information obtained from the interview allowed the analysis of the variables of the study, which refer to the specific properties of the object of study in the research; being directly related to the objective of the project, they contain significant informative value. In the following, reference is made to the activities carried out in each of the phases of Design Thinking.

2.1 First Phase. Empathize

To know the level of knowledge and obtain data within the research, a survey was carried out.


In it, 64% of respondents indicated that a story is the most practical method for the treatment. 72% believe that the creation of a didactic story is appropriate to create a positive balance regarding stereotypes about childhood cancer, and an equal percentage indicate that they do not know of campaigns involving childhood cancer and its psychological sequelae. On the other hand, 72% say that children do not have enough information about cancer. 64% say that, in their opinion, the best way to approach children diagnosed with cancer is with the use of illustration. 65% say that public health authorities do not emphasize the necessary psychological treatment of patients suffering from the disease.

2.2 Second Phase. Define

The main idea, after having carried out the previous research, is the layout and production of a children's story focused on the age group that suffers from childhood cancer. This is the result of a prior research process, such as the review of various counterparts, as well as the collection of first-hand information obtained through observation and interviews with the clinical psychologist Daniel Andrés Racines Jerves, who helped direct correctly the information raised in the project so that it could be used as a tool for psychoeducation.

2.3 Third Phase. Creating

With the information obtained, a series of sketches could be made, for which characteristics with important symbolism within the character were taken as references. One of them was the use of a band-aid instead of a nose, indicating that the girl is in the process of healing, as a metaphor within the story.

2.4 Fourth Phase. Prototyping

Fig. 1 shows the story of "Violet". It tells the story of a little girl who fantasizes about having her hair back; after a dream that shows her hair covered with flowers, the curiosity to know what happened to her hair is born, and thanks to the help of her mother, who will take her on an adventure to recover it, she discovers that hope must always be maintained (Figs. 1 and 2). In scene 1, Violet is a very cheerful and fun-loving girl. For as long as she can remember she has never had hair; however, it never bothered her, since she enjoyed playing with the other girls and loved to draw. One night, before going to sleep, she wondered what her hair would be like if she had it: maybe it would be blonde, brown, long, or short, and from imagining it so much she ended up falling asleep.


Fig. 1. Story Cover

Fig. 2. Scene 1

Scene 2 tells of Violet's dream: she was walking through a large meadow covered with flowers she had never seen before, red, yellow, purple, and pink, each more beautiful than the last. She thought of picking some to take to her mother, but when she pulled the first flower she felt a small pang and realized that the whole enormous meadow was her hair, covered with those wonderful flowers (Fig. 3).
In scene 3, a large cloud suddenly settled over her and a great rain of colors began to water the flowers, giving them more brightness and color; she watched as new flowers even began to sprout because of the rain.
In scene 4, Violet, astonished at the situation, began to run and play with her hair. She tried to find its end, but it was so vast that even crossing the whole meadow she could not tell how big it was. She began to brush her hair and suddenly noticed that several of the flowers were losing their color and withering one by one. Frightened, she closed her eyes, but when she opened them she was in her room: it had all been a dream.


Fig. 3. Scene 3

In scene 5, at sunrise, Violet, full of questions about the dream, went to her mother and told her everything that had happened the night before. Her mother was amazed by the fantasy her daughter described, and after hearing the whole story she smiled and said: "But Violet, you already had flowered hair." Violet, not believing what she had just heard, began to ask her mother about her hair and why she could not remember it.
In scene 6, Violet's mother tells her that when she was very little, long before her illness, she had beautiful hair, more beautiful than she could imagine; but like flowers that wilt and fall, her hair had begun to fall out little by little, until she forgot she had ever had it.
In scene 7, the following dialogue occurs. Intrigued by her mother's words, Violet asked: "Mom, do you think my hair will ever grow back?" Her mother answered with a smile: "Of course, my girl, but to achieve it you must make an effort in everything you do and keep hope. If you do that, before you know it your hair will grow back big and strong. Remember: if you do the best you can and wish it with all your heart, it will happen."
In the dialogue of scene 8, Violet's mother took out a small box and said happily: "I was thinking of giving it to you on your birthday, but I think this is the ideal moment." Violet hurriedly opened the box; it was a beautiful handkerchief with a flowery design. Happy with her gift, she quickly put it on her head. The handkerchief was so beautiful that it reminded them of the flowers seen in her dream.


Scene 9 shows how, from that day on, Violet always wore the scarf, because it reminded her of the beautiful hair she had seen in her dreams. She wore it when she went to the hospital and even when she accompanied her mother shopping at the supermarket, and she played at brushing it and inventing a million crazy, creative hairstyles.
Scene 10 tells how, one day when leaving one of her visits to the hospital, Violet and her mother were caught by a big cloud that announced rain. The mother ran to the car to escape the water, but the girl began to play under the downpour, running from one side to the other, jumping through puddles and imagining that she could control the water.
In scene 11, after the constant use of the handkerchief, the rain, and the endless games, the scarf is stained by all the adventures lived with Violet, so her mother asks her to hand it over to be washed. The girl had never taken off the handkerchief, so, with a little uncertainty, it was removed little by little.
Scene 12 shows that, after removing the handkerchief, Violet saw her mother's astonished face, ran to the bathroom mirror, and was very surprised to find that, under that beautiful flowered handkerchief, her hair had been hidden and had grown back. At that moment her mother's words echoed in the little girl's head: "To achieve it you must strive in everything you do and have hope."
2.5 Fifth Phase. Testing
The project was tested with children of the target ages in order to obtain a sample of their reaction, with favorable results: the children's interest in the story and in the attractions it offers was evident. It is important to emphasize the use of new technologies such as augmented reality, which makes the project more engaging by allowing children not to limit themselves to viewing the illustrations but to keep interacting with the story to discover all the animations it contains.

3 Analysis and Results
In Table 1, three categories can be identified: orientation, expository, and creation. Together they justify carrying out the project and suggest how it should be developed to be useful and innovative for children diagnosed with this disease.


Table 1. Categories: orientation, expository, and creation

Question 1 | Creation | "Tales"
Question 2 | Expository | "Because, according to the evidence, narrative therapy is what works best with children"
Question 3 | Expository | "Yes, it would be perfect; in fact, it would be a good tool to complement the therapeutic process"
Question 4 | Creation | "About psychoeducation, no"
Question 5 | Orientation | "By genetic prevalence, bone cancer in the femur is the most common"
Question 6 | Orientation | "Cancer is not discussed with children because it is a complex topic"
Question 7 | Creation | "Yes, it generates a stronger commitment (engagement) with the guaguas (babies)"

Note. Own elaboration and design, 2022.

4 Discussion and Conclusions
Stories are a didactic contribution to children's educational process. When used as narrative therapy, they focus on the construction of new stories in which readers step away from their own situations and enter those of the protagonists. The proposed story is clearly social in nature and aimed, in this case, at children with cancer; it gathered first-hand information on several psychological aspects that directly shaped the final proposal and its illustration, with the purpose of serving as a psychoeducation tool.
One goal of the research was to identify the different strategies that could lead to this result. Based on the quantitative and qualitative analysis and on the information obtained from the professionals who participated, it was concluded that the project is valuable because it unites different disciplines, among them graphic design and psychology. When the story was finished, the proposal was tested with a small group of children to sample their reaction, with favorable results: the children's interest in the story and in its attractions was evident. The use of new technologies such as augmented reality makes the project innovative, allowing children to watch animations within the illustrations and capturing their attention.
Finally, the project constitutes a contribution that places reading at the basis of children's education, supported by images, illustrations, and texts designed exclusively for this proposal and by new technologies that combine to very good effect. The emergence of these technologies led to the development of methods for connecting with the public: programs such as Photoshop, After Effects, and ArtiVive made it possible to implement the augmented reality mentioned above.


This whole process was carried out through layers of images and video: more than 30 layers were worked on individually and then combined, and in ArtiVive each video was also separated into three additional layers to emphasize the effect of depth.

References 1. Cabrera, P., Urrutia, B.R., Vera, V., Alvarado, M., Vera-Villarroel, P.E.: Ansiedad y depresión en niños diagnosticados con cáncer. Revista de Psicopatología y Psicología Clínica [Internet] 10(2), 115–24 (2005 May 1). [cited 2022 Jun 14]. Available from: https://revistas.uned.es/ index.php/RPPC/article/view/3994 2. Méndez, X., Orgilés, M., López-Roig, S., Psicooncología, J.E.: undefined. Atención psicológica en el cáncer infantil. revistas.ucm.es [Internet] 1, 139–54 (2004). [cited 2022 Jun 14]. Available from: https://revistas.ucm.es/index.php/PSIC/article/download/PSIC04 04110139A/16351 3. Flores Ruíz, A.F., Solís Mejía, R.C.: Impacto del cáncer en la condición física y calidad de vida en niños, niñas y adolescentes (2021). Revistavive [Internet]. [cited 2022 Jul 7]; Available from: https://revistavive.org/index.php/revistavive/article/view/137 4. Pousa Rodríguez, V., Miguelez Amboage, A., Hernández Blázquez, M., González Torres, M.Á., Gaviria, M.: Depresión y cáncer: una revisión orientada a la práctica clínica. Revista Colombiana de Cancerología [Internet] 19(3), 166–72 (2015 Jul 1). [cited 2022 Jun 14]. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0123901515000438 5. Sanchez Yara, M.: Impact of the hospital classrooms in the emotional state of children with cancer. Universidad Católica de Pereira [Internet] (2018). [cited 2022 Jul 5]; Available from: http://hdl.handle.net/10785/4975 6. Rojas Díaz, V., Pérez Guirado Y.L.: Cáncer Infantil: Una visión panorámica |. Revista PsicologiaCientifica.com [Internet] (2011 Nov 19). [cited 2022 Jul 5]; Available from: https:// www.psicologiacientifica.com/cancer-infantil-una-vision-panoramica/ 7. Rico Norman, D.: Hacer psicología terapia narrativa y cuentos terapéuticos (2016). [cited 2022 Jun 14]; Available from: https://www.uv.mx/psicologia/files/2016/10/terapianarrativa. pdf 8. Cardona Galeano, I.L., Osorio Sánchez, Y.L.: Vista de Uso de la metáfora en terapia familiar. Aportes al enfoque narrativo [Internet]. Revista virtual Universidad Católica del Norte (2015). [cited 2022 Jun 14]. Available from: https://revistavirtual.ucn.edu.co/index.php/RevistaUCN/ article/view/613/1148 9. Viscarra Villegas, S.E., Durán Martínez, G.E.: Cuentos ilustrados de información y motivación para niños que padecen cáncer (2019). [Internet]. [RIOBAMBA]; [cited 2022 Jul 5]. Available from: http://dspace.espoch.edu.ec/handle/123456789/11697 10. Freire Sánchez, A.: El relato como herramienta de contenido de marca. Conceptualización, clasificación y metodología de análisis del storytelling dirigido a niños [Internet]. [Barcelona]: Universitat Abat Oliba CEU (2017). [cited 2022 Jul 5]. Available from: https://repositorioi nstitucional.ceu.es/bitstream/10637/11607/6/Relato_Freire_UAOTesis_2017.pdf 11. Borja Ortiz, M., Robles López, J., Robles López, J.F., Borja Ortiz, M.: Diseño de material didáctico sobre el cuidado del cáncer de leucemia entre los niños de tres a cinco años, que pertenecen a la Fundación Carlos Portela de la ciudad de Cali, en el año 2018 (2019). [Internet]. Tesis. 2018 [cited 2022 Jul 5]. Available from: bit.ly/3nKmrdJ 12. Arias Flores, H., Jadán Guerrero, J., Gómez Luna, L.: Innovación educativa en el aula mediante Design Thinking y Game Thinking. HAMUT’AY [Internet] 6(1), 82 (2019 Apr 24). [cited 2022 Jun 14]. Available from: https://doi.org/10.21503/hamu.v6i1.1576


13. Sánchez Gómez, J.R., López Martínez, E.F.: Pensar en diseño gráfico - [Internet] (2012). [cited 2022 Jun 14]. Available from: https://bit.ly/3akhDsy 14. Frascara, J.: Diseño Gráfico Para la Gente [Internet]. Ediciones Infinito (2008). [cited 2022 Jun 14]. Available from: bit.ly/3yrhqf1 15. Marcos, J.F.: ’La princesa calva’, libro sobre el cáncer infantil [Internet] (2019). [cited 2022 Jun 14]. Available from: bit.ly/3MIXaeZ 16. Hermann-Acosta, A.: Storytelling y comunicación multidireccional: una estrategia formativa para la era digital. URU Revista de comunicación y cultura [Internet] (2019). [cited 2022 Jul 6]; Available from: https://revistas.uasb.edu.ec/index.php/uru/article/view/1482/1296 17. Feijóo, J.: Storytelling. La ciencia de crear con el relato [Internet] (2021). Editorial Almuzara., editor. [cited 2022 Jul 6]. Available from: bit.ly/3OSMpI1 18. Martínez, M.I.: Storytelling, una herramienta eficaz y motivadora en el ámbito educativo. In: Conference Proceedings CIVAE 2021 [Internet] (2021). [cited 2022 Jul 6];282–4. Available from: https://dialnet.unirioja.es/servlet/articulo?codigo=8096858 19. Razzouk, R., Shute, V.: What Is Design Thinking and Why Is It Important? (2012 Sep 1). https://doi.org/10.3102/0034654312457429 [Internet]. [cited 2022 Jun 14];82(3):330–48. Available from: https://doi.org/10.3102/0034654312457429 20. Vianna, M., Vianna, Y., Adler, I.K., Lucena, B., Russo, B.: Design Thinking (2011). [cited 2022 Jun 14]; Available from: https://issuu.com/mjvempresa/docs/e-book_-_design_ thinking_-_innovaci 21. García Canté, J.F.: Psicologia del color aplicada a los cursos virtuales para mejorar el nivel de aprendizaje en los estudiantes. grafica [Internet] 5(9), 51–6 (2017 Jan 12). [cited 2022 Jun 14]. Available from: https://revistes.uab.cat/grafica/article/view/57 22. Heller, E.: PDF Psicología del Color Eva Heller PDF - Libro Completo Gratis [Internet]. Editorial GG (2010). [cited 2022 Jun 14]. Available from: https://www.apadisenografico. com/psicologia-del-color-eva-heller-pdf/ 23. Brown, T., Wyatt, J.: Design Thinking for Social Innovation 12(1), 29–43 (2012 Oct 3). https://doi.org/10.1596/1020-797X_12_1_29 [Internet]. [cited 2022 Jun 14]. Available from: https://doi.org/10.1596/1020-797X_12_1_29 24. Yánez Maldonado, L.J., Arcentales Córdova, M.L., Polo Lema K.J.: Libro-objeto bi-tri dimensional para la enseñanza de los fundamentos del diseño gráfico. Ciencia Latina Revista Científica Multidisciplinar [Internet] 5(4), 6558–76 (2021 Sep 1). [cited 2022 Jun 14]. Available from: https://ciencialatina.org/index.php/cienciala/article/view/784/1079 25. Zhucozhañay Peralta, J.C., Murillo Peralta N.J., Urgilés Elvis, H.: Diseño innovador del juego Morlapolys para la divulgación del patrimonio cultural de Cuenca Ecuador. Ciencia Latina Revista Científica Multidisciplinar [Internet] 6(2), 190–207 (2022 Mar 15). [cited 2022 Jun 14]. Available from: https://ciencialatina.org/index.php/cienciala/article/view/1879/2676

Presyllabic Method to Correct Dysorthography in Elementary School Students Kate Lizbeth Pazmiño1(B) , Editha Jael Guerrero1 , Franklin Daniel Aguilar E.2 , Paulina Renata Arellano G.2 and Fernando Garcés Cobos1


1 Universidad Central del Ecuador, Quito, Ecuador

[email protected] 2 Instituto Tecnológico Universitario Rumiñahui, Sangolquí, Ecuador

Abstract. The teaching of writing in the first years of schooling was affected during the virtual education imposed by the Covid-19 pandemic, owing to the inadequate use of methodological strategies. Dysorthography is a disorder that affects between 10 and 15% of the population in Latin America and is considered one of the first factors to harm students' school performance, so we sought to implement a methodological guide of phonemes to contribute positively to the learning of third-year students of basic education. The research was field-based and descriptive, supported by documentary and bibliographic review. Through the application of the phoneme guide it was possible to observe continuous improvement in the students' learning process. The research was necessary because it was initially determined that the students lack correct spatial orientation, confuse the letters b and d, and do not recognize phonemes, so their writing is not legible; it is evident that dysorthography affects 70% of academic performance in the area of language and literature. After four months of applying the phoneme guide the results were excellent, with an 80% improvement in the teaching-learning process: 52% of the students recognize phonemes, 30% correctly apply punctuation marks, and a 40% improvement in performance has been observed. Keywords: Writing · Dysorthography · Phonemes · Teaching-learning process · Spatial orientation

1 Introduction
Spelling rules are one of the main bases of language, so the teaching of the basic skills that depend on them, such as writing, was affected during virtual education by weaknesses in the methodological strategies used to teach it. This skill, characteristic of verbal language, involves complex tasks such as attention, perception, linguistics, and fine motor movement, and it is here that learning difficulties such as dysorthography begin to appear. In this context, mastering writing becomes difficult because of the factors that affect its teaching; these can be described as reading and writing disorders characterized by errors in stroke, spelling, and reading. In this sense, dysorthography is understood as a group of errors in writing and spelling that leads the student to have difficulties understanding and reproducing symbols.
Reading and writing are powerful tools that we can give children to strengthen the basis of their cognitive development. By exercising writing and putting it into practice, children improve their psychomotor capacity; writing also helps them retain information more intensely and enhances learning and neurodevelopment by stimulating brain circuits, increasing their creative and imaginative capacity. This research topic arises from the need to find adequate methodological strategies to address the reading and writing disorders presented by children in the third year of EGB and, in this way, achieve better development of their motor skills and learning.

2 Development
2.1 Writing
Writing is not an innate human ability; it is acquired through teaching and learning. The relevant question is when to start this process and how it evolves to match the expected levels of achievement. Writing is a central element of primary education; as has been noted, "the development of writing is a permanent process of motivation in the development of capacities and skills of perception of the five senses, for writing from reality" [1]. This means that writing develops significantly over an extended period, and in a preparatory phase it should rest on a logical plan for the development of the child's language, phonetics, and motor skills. The development of graphemes and morphemes in writing is a decisive step in avoiding learning disorders of significant complexity.
2.2 Dysorthography
Orthography is one of the essential pillars of writing. At the beginning of their schooling, students acquire language skills and accumulate the necessary prerequisites, such as sensory and muscular development together with the coordination needed to learn to write and to recognize the stroke of each letter. When the methods used in initial education are not the right ones, this can have repercussions in the form of dysorthography in later years, causing difficulty in writing. According to [2]: Dysorthography is a specific disorder that only includes errors in writing, without the need for these to also occur in reading, as would be the case with dyslexia. In fact, a student with dysorthography does not necessarily read poorly, although this condition may commonly occur (p. 3).


In this way, dysorthography is considered one of the first factors to affect a student's school performance; once the difficulties have been detected at their origin, an adequate methodological strategy must be implemented, since the disorder significantly affects attention and comprehension processes as well as the guidelines for correct writing.
2.3 Causes of Dysorthography
According to [3], the main causes of this problem are:
– Intellectual causes: this type of difficulty above all hinders the acquisition of basic spelling rules; although it is not the most relevant cause, it may be associated with other clearly relevant difficulties, such as information processing.
– Pedagogical causes: the methodological strategies applied during the teaching of spelling are not adjusted to the learning pace of the students.
– Perceptual causes: in the development of writing processes there are students with visual or auditory learning styles, which determine how information related to dysorthography is processed; factors such as visual memory, auditory memory, and spatial and temporal orientation are considered here.
– Linguistic causes: it is difficult for some students to relate and correctly perceive the sound of the letters, so the stroke is executed incorrectly. There are difficulties both in their vocabulary and in their knowledge, problems that affect language acquisition (p. 23).
It is important to analyze each of the factors that may indicate that a student suffers from dysorthography as a learning difficulty within the basic skill of writing. It is therefore necessary to examine the elements that impair the use of orthographic rules, since this has become an aggravating factor in educational processes and has made it necessary to implement new methodologies.

3 Academic Performance
Academic performance allows us to see the level of learning achieved by students. In classroom practice it becomes a topic of discussion, since it refers to the results achieved by students reflected quantitatively (grades) and qualitatively (school participation). In this regard, [4] mentions that: "The definition of academic performance frames the limitations that participate in the internalization of knowledge according to an established profile; failure is a word used to label those who failed to achieve the minimum score that certifies the learning of the expected knowledge set forth in the syllabus" (p. 6). Currently, academic performance refers to the students' own motivation to achieve the expected learning; otherwise, we would speak of possible "school failure", a process that reflects uncertainty about the methodologies used in classroom practice. For this reason, adding innovative and modern elements to the school environment makes it possible to present information in accordance with the students' learning pace.

4 Language and Literature
Language refers to the set or system of oral and written forms or signs used for communication between people in the same community. It is an inventory that speakers use through speech but cannot modify individually; Spanish, for example, is spoken by more than five hundred million people around the world [5]. Literature, in turn, is an artistic activity that uses linguistic expression as its complete means of communication. Language and Literature is the subject responsible for teaching the phonemes, morphemes, and graphemes that support oral and written expression and make it possible to express feelings and emotions; its teaching is therefore considered an integration of knowledge with aesthetics.
4.1 Academic Achievement in the Area of Language and Literature
[4] defines it thus: "Academic performance is the expression of the student's abilities and psychological characteristics, developed and updated through the teaching-learning process, that makes it possible to reach a level of performance and academic achievement over a period, synthesized in a final grade that evaluates the level achieved" (p. 2). Different factors affect academic achievement: within the school environment, methodologies, number of students, family situation, infrastructure, and didactic material; beyond it, psychological, physiological, socioeconomic, and pedagogical factors. Each of them contributes to the optimal and balanced affective development of the student.

5 Methodological Approach
In the educational evaluation performed for Ecuador with the Enemdu survey [6], the present work is framed within the second and third scenarios, which consider literacy as an educational component together with minimum competencies. The scenario shown in Fig. 1 does not consider the minimum competencies that a child should have between the ages of 6 and 11; it merely visualizes the trend in the proportion of children in that age group who cannot read or write from 2012 to 2019. It should be noted that from 2013 to 2017 there is no meaningful change in the results [6]. Regarding dysorthography in the teaching-learning process, [7] presents the following checklist, obtained through an initial evaluation process:


Fig. 1. Learning poverty 2012–2019 (proportion of children aged 6–11 who cannot read or write, by year)

Table 1. Research

The list in Table 1 enumerates seven essential factors that affect students' writing and indicate the presence of a learning disorder such as dysorthography. The information collected shows that more than half of the students suffer from this learning problem, since the key factors for correct writing are not completely fulfilled (p. 45). In their research on the incidence of dysorthography on low performance in language and literature, [8] provide statistical data that confirm the need to teach orthographic rules, from which the following is obtained:


Fig. 2. Knowledge of orthographic/spelling rules

In this graphic (Fig. 2), 64% of the parents surveyed answered yes, 26% always, and 10% sometimes when asked whether they consider it necessary to know the spelling rules. It is essential that all students practice the use of spelling rules in order to obtain good results (p. 53).

Fig. 3. Dysorthography problems

The results of the survey (Fig. 3) show that 32% have a problem with spelling, 24% sometimes, 23% always, and 21% say they do not. Given this poor knowledge of spelling rules, we can affirm that most of the students present this problem within the educational institution (p. 55). The documentary compilation on dysorthography indicates that it can appear early in the process of learning to write. According to [9], in Ecuador the writing domain considers the following indicators:
- Initial spelling (phoneme and grapheme)


- Punctuation

Comprehension of "Leer to a friend" 3rd EGB Ecuador Puntuación Ortograa Inicial

25.7 26.6 0%

Categoría 1

2.112.8 24.2

20% Categoría 2

40%

59.5 38.6 60%

Categoría 3

10.6 80%

100%

Categoría 4

Fig. 4. Letter to a friend

As noted above (Fig. 4), in this grade only the association between phoneme and grapheme is evaluated, so errors against spelling rules are not considered. We can observe that 26.6% of the students fall into category one, which corresponds to more than seven errors in the texts produced and points to a lack of phoneme–grapheme knowledge of the words. This shows that, at the national level, a spelling problem exists in which students fail to differentiate between a sound and its written form. In the evaluation of the third-year students of basic general education, a similar pattern was observed in the results published by INEVAL.

6 Methodology and Research Techniques
The present work is field research, since it focuses on third-year students of basic general education in whose learning process dysorthography was identified. Documentary and bibliographic research is applied jointly, using primary and secondary sources to provide the theoretical support for the corresponding variables [10]. The research is also descriptive: it is not only about collecting data but about discovering the incidence of dysorthography in third-grade students, since descriptive research seeks not only to understand the prevailing situations and attitudes but also, through this study, to achieve a positive change in the students. A diagnostic evaluation is used at the beginning of the process to identify the learning problem they present and the level of deficiency in the writing process of third-grade students, in order to find methodologies and techniques that contribute to improving these processes.


7 Results
Once the initial evaluation was conducted, it was verified that one of the children had difficulties in writing, and it was concluded that her learning problem was dysorthography. To address this problem, a phoneme guide was used, based on cards with a presyllabic methodology. As a starting point, the first evaluation, taken on April 18, 2022, showed that the student has no spatial orientation, confuses the letters b and d, and does not recognize the phonemes of the letters, so her writing is not legible; her low performance in the subject of Language and Literature is also evident [11]. The phoneme guide is built around what students should know and apply during their learning process, according to the level they are at. It also contains cards that help them understand the difference between tracing-drawing and tracing-writing as the letters are presented. In this way, different learning techniques can be used, such as paper balls, quilling, finger painting, word searches, crossword puzzles, syllable separation, and so on.

Fig. 5. Phoneme guide cards

The ceiling-room-basement method is used so that students acquire a guide for the proper placement of letters during the writing process: from the ceiling to the living room go the mommy letters (capital letters), in the living room the baby letters (lowercase), and from the living room down to the basement the baby letters with long bodies (Fig. 5 and Fig. 6). Consonants and vowels are then combined so that students recognize the phonemes produced by joining them (Fig. 7).

Fig. 6. Learning card

Fig. 7. Learning technique

Graphoplastic techniques are used with the students to develop their fine motor skills through threading; in the same way, they gain the ability to correctly perceive how the letters are written (Fig. 8).

Fig. 8. Application of learning

The syllable separation method allows the student to identify the letters and how the word is formed; this technique helps us with spatial orientation and phoneme recognition.


Fig. 9. Initial Evaluation

Fig. 10. Evaluation after 2 months

The two images shown allow us to identify the progress achieved in the first week of putting the presyllabic method into practice. In the first picture (Fig. 9) there is no good spatial orientation, the spelling is poor, and the letters are not traced correctly, so it is difficult to understand what is written, and the student's performance is not adequate. In the second (Fig. 10), there is a 20% improvement in handwriting, since the writing can be made out more clearly, even though it is not yet correct and uniform.

Fig. 11. Technique applied

Fig. 12. Evaluation of the technique

Working with the little house technique (Fig. 11, orange box) was highly effective in helping the children recognize the location of the letters on the line, particularly the lowercase letters that sit in the living room. They were shown that the little house is divided into three parts, the ceiling, the living room, and the basement; the body of the letters always stays in the living room, and the space of the living room is equivalent to the line, so the child can see how each letter is positioned in relation to the line. As can be seen in the images, over a period of a month and a half 40% progress was achieved (Fig. 12).

8 Conclusions
It is evident that a lack of knowledge of spelling rules can have a significant impact on students' academic performance in language and literature, tied to poor reading and writing practice. Dysorthography is a learning difficulty that affects 52% of the students evaluated; this disorder falls within the educational needs not linked to disability, and writing is one of the basic language skills affected by the lack of knowledge of spelling rules, poor spatial orientation, confusion of b and d, and weak recognition of morphemes, all of which hinder the teaching-learning process. It can also be concluded that, at present, 45% of the students identify and relate the sounds of phonemes and graphemes better, a meaningful change with respect to the first day of the initial evaluation. After working constantly with the students for four months with the help of the phoneme guide, the changes in their writing and in their identification of sounds can be seen in the evidence shown above, indicating an 80% effectiveness of the method applied (Fig. 13 and Fig. 14).

Fig. 13. Final evaluation

Fig. 14. Final process

References 1. Aruquipa, T.: A. La disortografía en el proceso de enseñanza y aprendizaje. Latacunga, p. 45 (2018) 2. Federación de Enseñanza de CC: OO de Andalucía. La disortografía. Temas Para La Educación 12, 1–6 (2011). https://www.feandalucia.ccoo.es/docu/p5sd7922.pdf


3. Molina, M.: La disortografía y su incidencia en el rendimiento Académico de los estudiantes de 4to de educación General básica de la unidad educativa F.A.E no. 3 TAURA en el período lectivo 2018–2019, p. 23. Repositorio ULVR (2019) 4. Edel, R.: El rendimiento académico. Concepto, investigación y desarrollo. Universidad Veracruzana de México, p. 2. Mc Graw Hill Editores (2003) 5. Océano, D.: Pedagogía y Psicología (2010) 6. INEVAL: Medir la pobreza de los aprendizajes, una labor necesaria en Ecuador. Escenarios de Aplicación, 3rd edn, pp. 11–14 (2019). http://evaluaciones.evaluacion.gob.ec/revista/wpcontent/uploads/2021/01/INEVAL_DICS_RDEE_Volumen3-1.pdf 7. Barraquel Chifla, M.: A. La disortografía en el proceso de enseñanza y aprendizaje. Latacunga, p. 45 (2018) 8. Holguín, M., Rojas, P.: Incidencia de la disortografía en el desempeño académico en el área de lengua y literatura. Diseño guía de estrategia lúdicas para el uso correcto de la ortografía dentro del aula, p. 53. Repositorio Universidad de Guayaquil (2015) 9. Erce Guerra, P., Martínez, S.: La disortografía en el bajo rendimiento del área de: lengua y literatura en los niños del quinto año de educación básica, de la escuela García moreno del sector el batán del cantón Riobamba, provincia de Chimborazo, período lectivo 2015- 2016. Repositorio UNACH (2019) 10. Ocaña, X.: La disortografía y su incidencia en el rendimiento académico del área de lengua y literatura de los estudiantes de 8ºgrado de educación general básica. Repositorio UTA (2013) 11. Ramírez, T.: La disortografía y rendimiento académico en estudiantes de segundo de secundaria-Puente Piedra. Repositorio UCV (2017)

State of ICTs as Support for the Educational Process in the Andean Region Wladimir Paredes-Parada1,2,3(B) , Christian Del Pozo1,2,3 , Silvia Elizabeth García González1,2,3 , and Franz Del Pozo1,2,3 1 Instituto Superior Tecnológico Rumiñahui, Sangolquí 171103, Ecuador

[email protected] 2 Universidad Estatal de Bolívar, Guaranda, Ecuador 3 Universidad Central del Ecuador, Av. Universitaria, Quito 170129, Ecuador

Abstract. Higher education institutions in the Andean region have made progress in adopting ICTs for education. The objective of this study is to understand how this adoption has been carried out and what shortcomings can be identified in order to correct them. The proliferation of information and communication technologies in education has posed great challenges to HEIs, so adopting ICTs, reducing the access gap, and ensuring their correct use among the different actors of the educational community have become pending tasks for the institutional authorities of HEIs and for local and national governments. The work takes the mixed (qualitative-quantitative) paradigm as its reference. As part of a broader study, this paper focuses on determining the ICT use gap between teachers and students in this group of HEIs; the analysis is based on a survey of professors and students and on a documentary review of the public information of the HEIs under study. The results show a generalized use of LMS platforms by the region's HEIs; however, their capabilities are underused, mainly because of insufficient training and support for professors and students. Likewise, the use of various tools and social networks has been spreading in the HEIs. On the other hand, academic-administrative management support systems (SIS) are mostly developed in-house or purchased commercially, but without a holistic vision of integration and process management; only a few HEIs have decided to adopt academic ERPs as a definitive, integrated solution for institutional educational processes. Keywords: Educational process · Good practices in ICTs · SIS · Academic ERP · Data model · Data analytics · Integral development education

1 Introduction
Information and communication technologies undergo considerable improvements every year, with a tendency for this cycle to shorten to an average of six months or less. In this context, when a university or higher education institution (HEI) anywhere in the world chooses a set of software applications to support its internal teaching and learning processes, as well as to improve its academic and administrative processes, it must be clear that this will be a long-term institutional investment.


The application of these technologies in higher education institutions must integrate all academic, administrative, and management processes, from academic planning and evaluation to student follow-up, thus providing a series of benefits that help improve the efficiency and productivity of the institutions. Among the main benefits HEIs obtain by implementing this type of technology are collaboration, time optimization, flexibility, communication, cost reduction, and data and enriched information. As mentioned in [12]: "It is important that HEIs adopt academic ERPs to automate their processes; this will generate information to support institutional decision making, in addition to facilitating the adoption of a culture based on processes with reliable information. Academic ERPs have a great advantage over traditional academic management systems, as they easily adapt to institutional and regulatory changes in education, as well as to the different visions of academic planners in an environment of multiple conflicting objectives. In the academic ERP market there are generally expensive solutions, such as Banner from Ellucian and OCU from Universitas, among others, which in most cases makes adoption by Ecuadorian HEIs prohibitive. However, the emergence of academic ERPs based on open architectures, such as ERP a1-academia (Fig. 1), offers educational centers with low budgets a viable option."

Fig. 1. ERP a1-academia: Academic resource planning software

It is not enough to automate or to implement learning management systems (LMS); the new paradigms require educational and technical planners to be visionary in adopting systems, such as academic ERPs, that articulate all processes and tools in the educational environment. Integration through academic ERPs clarifies the forms of collaboration and the concepts of classroom management, since they bring together all the actors involved, teachers, students, coordinators, and directors, who can constantly monitor and evaluate all actors and academic-administrative processes, while allowing teachers and academic planners to optimize the time and resources allocated to each of the activities involved in the educational process [5].


It should be noted that the appropriate use of ICTs has allowed higher education institutions to improve decision-making, since academic departments used to make decisions based on experience rather than on systematized information. Today, applications that use machine learning can identify the risk of dropping out early and support students so that they do not abandon their studies (an illustrative sketch of such a dropout-risk model is given after the description of Skinner's teaching machine below). Data is facilitating the management of educational institutions: according to a report by the University of Huddersfield, which has already implemented a data analysis system, delivering a personalized service has increased retention, with drop-out rates falling from percentage levels in the high teens to well below 10% [7]. In the near future, the appropriate application of technology in higher education institutions will be able to deliver a completely personalized service and assistance oriented to the needs of each member of the educational community.
In this context, UNESCO has prepared an ICT framework program with the levels of competency that teachers need in order to integrate technologies into their daily professional practice. This program covers not only knowledge of technologies for the classroom and for course planning, but also tools to improve organizational and management skills both in the classroom and in collaborative groups, so that the different technologies can be implemented as a whole and teachers can create networked environments in which students can save, share, and develop their work collaboratively and learn to use technologies with flexible, student-centered teaching-learning strategies.
Several authors [1, 2, 4, 8, 11, 13, 16, 17] define the term Educational Technology (ET) as the pedagogical use of all technological tools, equipment, or instruments as means of communication in order to facilitate the teaching-learning process. ET has evolved considerably in recent years, as can be seen in the new teacher-training curricula as well as in the growing number of publications and research on the subject, not to mention the congresses, events, and major international conferences held to disseminate it. It is also clear that ET is the discipline that has evolved the most, owing to the accelerated changes in the sciences that compose it and, above all, to the constant changes in societies. All this accelerated progress of science and technology directly influences education, which has been forced to take on new challenges in order to raise the educational level of the population. One of the main actions taken has been to incorporate technological and communication advances and to transform the traditional educational system into a more interactive one, in which better and more efficient teaching-learning programs are developed.
One of the pioneers of ET was Professor B. F. Skinner, who invented the teaching machine, as described by Professor Valero [15] of the Faculty of Psychology of the University of Malaga in an essay entitled Skinner's teaching machines. The machine consisted of a box into which the teacher introduced a sheet with all the concepts the student had to learn and another sheet with which part of the text could be hidden.
The procedure of this device was as follows: the student read the text introduced in the box and wrote down the answer to each question that appeared. The student then rolled the machine and, if the answers were correct, the sheet with the questions advanced and a point was scored (feedback); otherwise, the machine did not allow the student to advance and forced him to go back and read the text again [3].
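To make the learning-analytics idea above more concrete, the following minimal sketch shows the general shape of a dropout-risk model. It is not taken from this study or from the Huddersfield report: the features (LMS logins, submission rate, grades, attendance), the synthetic data, and the decision threshold are illustrative assumptions, and scikit-learn's logistic regression is used only as one plausible choice of classifier.

```python
# Minimal sketch (not from the paper): training a dropout-risk classifier on
# hypothetical student-activity features such as an LMS could export.
# Feature names, thresholds and data are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-student indicators: weekly LMS logins, assignment
# submission rate, average grade (0-10), attendance rate.
X = np.column_stack([
    rng.poisson(5, n),        # weekly LMS logins
    rng.uniform(0, 1, n),     # assignment submission rate
    rng.uniform(0, 10, n),    # average grade
    rng.uniform(0.3, 1, n),   # attendance rate
])
# Synthetic label: low submission rate and low grades -> higher dropout risk.
y = ((X[:, 1] < 0.4) & (X[:, 2] < 5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Probability of dropping out for each student in the test set; students above
# a chosen threshold would be flagged for tutoring or follow-up.
p_dropout = model.predict_proba(X_test)[:, 1]
flagged = int((p_dropout > 0.5).sum())
print(classification_report(y_test, model.predict(X_test)))
print(f"Students flagged for early support: {flagged}")
```

In practice such a model would be trained on real records exported from the SIS and LMS, validated carefully, and used only to flag students for human follow-up rather than to take decisions automatically.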


If we review the evolution of these systems over the short term, we see, for example, how around 2009 MySpace, then the leading social network in the United States, and Hi5 in Latin America were replaced by a single network, Facebook. What happened, essentially, is that the new social network understood that digital social media should not be invasive and should offer many more services, and above all it understood that its target audience was not a country or a region but the whole world. To remain a technological leader, it is sometimes necessary to offer services equal to or better than the competition: when Google+, Google's social network, began to become popular by offering privacy in publications and new ways of presenting photos and comments, Facebook quickly updated its services to match them. In the world of digital communications and social networks one must be constantly updated, and if education wants to incorporate this type of technology it must be completely open to the technological adaptations that happen continuously. By mid-2019 Facebook already had more than 1.6 billion active users, with 934 million users entering the platform daily, according to data published on its official investor portal. In South America, the countries with the largest numbers of social network users are Brazil, Argentina, Colombia, Peru, Chile, Venezuela, Ecuador, Bolivia, Uruguay, and Paraguay. It must be borne in mind that these media change rapidly, so if they are integrated into the teaching-learning process the planning must be adjustable and flexible; in some areas, for example, there is a greater tendency to communicate through infographics or memes, which are what most attract students' attention. Therein lies the challenge for everyone involved in education, especially teachers, in seeking strategies that allow training and knowledge generation using all the media that contribute to the educational process, even though many teachers are reluctant to incorporate these strategies because they consider them of little use.
In the same sense, the effective use of ET to strengthen the research component in HEIs can generate positive results, increasing collaboration between researchers from the same institution or with researchers from national or foreign universities and thus increasing the possibility of producing research with greater impact. The increase in publications in Latin America has been very significant: leaving aside Brazil, Chile, and Argentina, which have traditionally published heavily, the rest of Latin America averaged around 1,500 publications per year in 2016, while by 2019 the absolute number had grown to more than 4,000. According to the Scimago Journal & Country Rank, the two Latin American countries with the most publications in 2019 were Brazil, in position 14 with 88,276 indexed publications, followed by Mexico, in position 27 with 29,131; Chile was in position 45 with 16,301, Argentina in position 46 with 15,123, Colombia in position 47 with 14,776, and Ecuador in position 64 with 5,309 [14].
It is clear that countries close to Ecuador, such as Colombia and Argentina, have a large research community that can support joint research, and social networks are the ideal way to initiate communication between researchers and first contacts in projects. How social networks can directly help significant development in Latin American countries and, above all, how they can help generate knowledge through joint research is the subject of a future book, since it is a broad topic with many examples; regions such as North America and Asia-Pacific, in particular, have positive experiences in this regard.
The impact of new communication and information technologies cannot be ignored, especially those that have become prominent today, such as social networks: in the United States close to 100% of universities use social networks in some way, formally or informally, to support the educational process, with Facebook and Twitter as the leading platforms [6], followed by LinkedIn. As a curious fact, blogging has stagnated at 50% in HEIs and seems likely to continue to decrease. The main uses are supporting classes, offering the university's outreach courses to society, promoting the admission of new students, professional development, and keeping in touch with the network of graduates. Using social networks as support tools in the educational process enables dynamic learning, fosters the ability to collaborate and share, promotes digital competence, and helps students assimilate social values and behaviors. Some simple, practical examples of applying social networks in the educational context are the following. Stimulating classroom discussion through Facebook: a group or private page can be created for students in which each week has a debate topic related to the subject; students share their point of view in comments, where they can also provide links to support their arguments. Quizzes on Twitter: the network is fast and direct and takes advantage of short messages in any subject; it is especially useful in language courses for learning to summarize ideas and write clearly, with the support of classmates. Virtual classes in Google, through Hangouts or other tools of the Google ecosystem, allow teachers to run an online classroom and to share and collaboratively edit documents. Instagram or Vine enhance visual communication and creativity through collaborative projects, which is very useful for programs such as graphic design or multimedia production, while the use of YouTube in any area brings multiple benefits and improves oral communication: the platform lets students edit a video, add annotations or tags, and set privacy options so that only certain users, or those who know the address, can see it, while the rest of the students comment on and evaluate the presentation. This is very useful for work at home, where students document their ideas or problems and the community helps to solve them.
There are also computer tools oriented to collaboration in the classroom; one example is Jamboard (Fig. 2), a collaborative digital whiteboard that is intuitive and easy to use and offers many features for making concept maps and summary cards and for writing down ideas. In one case study, more than 200 virtual participants in a scientific congress were able to work collaboratively on Jamboard, which allowed them to contribute opinions in a debate, comment on actions, and interact with one another. This type of tool enhances collaborative work between teachers and students and stores the content generated in classes.
The use of an LMS as the tool that articulates the teaching-learning process, integrated if possible with an academic ERP (Fig. 3), is essential for coordinating the countless technological tools that can be used in the educational process. This articulation is necessary because of the diversity of tools available in the educational environment. Another major problem of HEIs is the lack of integration between the teaching-learning process and the academic-administrative processes, which are often disjointed and cause serious inefficiencies in the management of resources; a minimal illustrative sketch of such an ERP-LMS integration is given after Fig. 3.

Fig. 2 Use of technological tools with a large audience

Fig. 3 Integration between an Academic ERP and an LMS
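As a concrete, hedged illustration of the ERP-LMS integration sketched in Fig. 3, the fragment below shows one plausible synchronization pattern: reading enrolments from a hypothetical academic-ERP REST endpoint and pushing them into an LMS. The ERP URL, tokens, and response fields are invented for the example; the LMS side assumes a Moodle installation with the web-services REST protocol and the enrol_manual_enrol_users function enabled for the token used.

```python
# Illustrative sketch only: synchronize course enrolments from a hypothetical
# academic ERP into Moodle through its web-services REST interface.
# ERP_URL, ERP_TOKEN and the ERP response fields are invented for the example;
# the Moodle side assumes the REST protocol and the enrol_manual_enrol_users
# web-service function are enabled for the given token.
import requests

ERP_URL = "https://erp.example.edu/api/enrollments"   # hypothetical endpoint
ERP_TOKEN = "erp-api-token"                            # hypothetical token
MOODLE_URL = "https://lms.example.edu/webservice/rest/server.php"
MOODLE_TOKEN = "moodle-ws-token"
STUDENT_ROLE_ID = 5  # "student" role id in a default Moodle install (verify locally)

def fetch_erp_enrollments():
    """Read the enrolments registered in the ERP (assumed JSON schema)."""
    resp = requests.get(ERP_URL, headers={"Authorization": f"Bearer {ERP_TOKEN}"},
                        timeout=30)
    resp.raise_for_status()
    # Assumed shape: [{"moodle_user_id": 42, "moodle_course_id": 7}, ...]
    return resp.json()

def push_to_moodle(enrollments):
    """Enrol each user in the corresponding Moodle course via web services."""
    payload = {
        "wstoken": MOODLE_TOKEN,
        "wsfunction": "enrol_manual_enrol_users",
        "moodlewsrestformat": "json",
    }
    for i, e in enumerate(enrollments):
        payload[f"enrolments[{i}][roleid]"] = STUDENT_ROLE_ID
        payload[f"enrolments[{i}][userid]"] = e["moodle_user_id"]
        payload[f"enrolments[{i}][courseid]"] = e["moodle_course_id"]
    resp = requests.post(MOODLE_URL, data=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()  # Moodle typically returns null on success

if __name__ == "__main__":
    enrollments = fetch_erp_enrollments()
    print(f"Synchronizing {len(enrollments)} enrolments to the LMS...")
    push_to_moodle(enrollments)
```

In a real deployment this logic would normally live in a scheduled job with error handling, de-duplication, and logging, or be replaced by the vendors' own integration plugins; the point here is only the shape of the data flow from the academic ERP to the LMS.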

2 Scope
This article is the result of research conducted in the main higher education institutions of four countries in the Andean region: Bolivia, Colombia, Ecuador, and Peru. Sixteen HEIs across these four countries were analyzed as part of the research project "Teacher Training and ICTs, a mapping approach in the Andean Region". The study builds on the work of [4] on the introduction of ICTs at the National Autonomous University of Mexico (UNAM), which served as a precedent, as well as on the study by [9], which highlighted the importance of teachers integrating ICTs in higher education classrooms through the articulation of LMS for content management and the teaching-learning process.

3 Methodology
The work combined qualitative and quantitative methods, since the analysis approach took the mixed paradigm as its reference. As part of a broader study, the present work focused on determining the ICT use gap between teachers and students in this group of HEIs. To analyze this gap, the study drew on a survey of professors and students as well as on a documentary review of the public information of the HEIs under study. The survey instrument was based on the work of [9], was addressed to professors and students, and was applied to a selected group of HEIs in the four countries of the Andean region, with data collected through a digital questionnaire sent by e-mail.

4 Results
The study found that all the selected HEIs have an LMS as the main support platform for the teaching-learning process, in the face-to-face as well as the blended and virtual modalities. In addition, 87.5% of the HEIs have access to virtual libraries and catalogs of electronic books. All of them (100%) have websites and institutional blogs and, for internal communication, maintain groups of professors and students on social networks; here WhatsApp and Facebook are the most used, and among students WhatsApp, Facebook, and Instagram stand out. Regarding the implementation of this type of technology, 74.3% of respondents believe that there has been no planning, and 85.1% believe that there has not been adequate training and support for the use of technological elements supporting the teaching-learning process.

Fig. 4 LMS adoption by Bolivian and Peruvian HEIs


In the Bolivian case (Fig. 4), 80% of the HEIs have adopted Moodle as their main LMS, the platform that articulates the educational process, and 20% use other LMS platforms such as Claroline. With respect to the systematization of academic-administrative processes, 63% use their own SIS, 25% use commercial SIS, and only 12% use an academic ERP, but without integration with the teaching-learning process. In the Peruvian case (Fig. 4), 60% of the HEIs have adopted Moodle as their main LMS, 20% Blackboard, and 20% Canvas; regarding academic-administrative processes, 48% use their own SIS, 36% use commercial SIS, 12% use an academic ERP, and only 4% use an academic ERP with integration. In the Colombian case (Fig. 5), 50% of the HEIs have adopted Moodle as their main LMS, 33% Blackboard, and 17% other LMS; regarding academic-administrative processes, 50% use commercial SIS, 30% use their own SIS, 14% use an academic ERP, and only 6% use an academic ERP with integration.

Fig. 5 LMS adoption by Colombian and Ecuadorian HEIs

Finally, in the Ecuadorian case (Fig. 5), 80% of HEIs have opted to adopt Moodle as their main LMS and 20% other LMSs like EDX. With respect to the systematization of academic-administrative processes, in the Ecuadorian case 53% use their own SIS, 26% use commercial SIS, 10% use academic ERP and 11% use academic ERP with integration.

5 Conclusions and Recommendations

The analyses carried out led to the following conclusions and recommendations. HEIs should encourage the use and development of technology to improve the competencies of teachers and students, which in turn improves the teaching-learning process. Beyond these structured, planned technologies, however, there are tools with far less institutional planning that are nevertheless the most used by students, teachers and the whole university community: social networks. Their educational potential, and above all their main function as tools for collaborative information sharing, a fundamental element for the generation of knowledge, has been left aside.


The use of LMS to articulate ICTs in the teaching and learning process is widespread in the region; however, both teachers and students lack the training needed to take full advantage of them. The use of SIS integrated with LMS to articulate the entire educational process is not widespread, with only 7% adoption. It is important for HEIs to plan articulated and sustained growth based on ICTs, investing in technology and in continuous training for all those involved.

Among the social tools available are blogs, Facebook, Twitter, Instagram and LinkedIn, among others, whose main objective is to share information. University authorities should therefore plan and regulate their use as a means to share information and generate knowledge; in this way institutions would take advantage not only of the technology itself but also of the ability of students and teachers to master these tools, making the fulfillment of educational objectives more appropriate and efficient. Another problem that HEIs can address through social networks is the link with business and society: removing these barriers basically requires suitable communication between society, the community, private enterprise and the university, and the most direct way to achieve it is through digital communication and social media. With proper communication between the main actors of education, the generation of knowledge will be better adjusted to reality.

All higher education institutions need to train teachers in ICTs, taking advantage of the model proposed by UNESCO to improve teaching and learning practices. HEIs should provide teachers with computer equipment that runs office packages (word processors, presentations and spreadsheets) adequately; the data show that teachers use these tools most frequently, so providing this fundamental input for their classes is the institutions' responsibility. For virtual education, teachers must be able to handle LMS platforms; according to the study, the most common LMS is Moodle. These virtual platforms are fundamental for teachers in all modalities, full-time, part-time and virtual, since they support the management of a teaching-learning process in which teachers and students are partners. The related competencies must cover course creation and administration as well as the use of the various resources that adequate virtual education requires. Teachers should also be able to use social networks for educational purposes, given the tendency of their use among students and the instantaneous communication and group management they allow.

It is important that HEIs adopt academic ERPs to automate their processes; this generates information to support institutional decision making and eases the adoption of a process-based culture with reliable information. Academic ERPs have a great advantage over traditional academic management systems in that they adapt easily to institutional and regulatory changes in education, as well as to the different visions of academic planners in an environment of multiple conflicting objectives.


The academic ERP market generally offers expensive solutions, which in most cases makes adoption prohibitive for Latin American HEIs. However, the emergence of academic ERPs based on open architectures offers education centers with low budgets a viable option to consider.

References

1. CEPAL: Conferencia Ministerial Regional Preparatoria de América Latina y el Caribe para la Cumbre Mundial sobre la Sociedad de la Información. CEPAL, Bávaro, Punta Cana, República Dominicana (2003)
2. Silva, J., et al.: Estándares TIC para la formación inicial docente: Una propuesta en el contexto chileno. UNESCO-Enlaces, Chile (2008)
3. Fandos, M.: Formación basada en las Tecnologías de la Información y Comunicación: Análisis didáctico del proceso de enseñanza-aprendizaje. PhD thesis, Universitat Rovira i Virgili, Tarragona, España (2003)
4. Fombona, J., Pascual, M.: Las tecnologías de la información y la comunicación en la docencia universitaria. Estudio de casos en la Universidad Nacional Autónoma de México (UNAM). Educación XXI 14(2), 79–110 (2015)
5. Franz, L., Lee, W., Van Horn, J.: An adaptive decision support system for academic resource planning. Decision Sciences (12), 276–293 (1981)
6. Greenhow, C., Chapman, A., Marich, H., Askari, E.: Social media and social networks (2017)
7. Jenvey, N., O'Malley, B.: Are universities making the most of their big data? (2016). Available: https://www.universityworldnews.com/post.php?story=20160126195140916
8. Kalogiannakis, M.: Training with ICT for ICT from the trainee's perspective. A local ICT teacher training experience. Education and Information Technologies 15(1), 3–17 (2010)
9. Lareki, A., Martínez de Morentin, J., Amenabar, N.: Towards an efficient training of university faculty on ICTs. Computers & Education 54(2), 491–497 (2010)
10. Luján, M., Salas, F.: Enfoques Teóricos y Definiciones de la Tecnología Educativa en el Siglo XX. Revista Electrónica "Actualidades Investigativas en Educación" 9(2), mayo-agosto (2009)
11. Nordkvelle, Y., Olson, J.: Visions for ICT, ethics and the practice of teachers. Edu. Info. Technol. 10(1–2), 21–32 (2005)
12. Paredes-Parada, W., Del-Pozo, F., García-González, S., Ndea, C.: Good ICT Practices for the Integral Development of Ecuadorian Universities. Springer (2021)
13. Rodríguez, A.: Aproximación a la educación vocacional: Una perspectiva desde la Reforma Educativa (1990)
14. SCIMAGO INSTITUTIONS RANKINGS: Scimago Journal & Country Rank Networks [Online] (2019). Available: https://www.scimagojr.com/countryrank.php
15. Valero, L.: Máquinas de enseñanza de Skinner (2020). https://www.conducta.org/assets/pdf/Valero_Maquinas_ensen%CC%83anza_Skinner.pdf
16. Van Der Vyver, G.: The search for the adaptable ICT student. J. Info. Technol. Edu. Res. 8(1), 19–28 (2013)
17. Venables, A., Tan, G.: Measuring up to ICT Teaching and Learning Standards. Issu. Inform. Sci. Info. Technol. 9, 29–40 (2013)

AT for Engineering Applications

Computerized Planning of Surface Ratios in a Milk Extraction Plant

Alexis Suárez del Villar, Ana Álvarez Sánchez(B), and Alexander Ricardo Galarza Tipantuña

Grupo de Investigación en Sistemas Industriales, Software y Automatización (SISAu), Facultad de Ingeniería, Industria y Producción, Universidad Indoamérica, Av. Machala y Sabanilla, Quito, Ecuador [email protected]

Abstract. The research addresses the milk extraction process of a farm whose monthly income has been decreasing, negatively affecting profits. The economic deficit is driven by a milk sales price of 0.32 dollars per liter, lower than that of competitors, by the lack of real historical data on the amount of milk extracted, which causes a loss of control over expenses, and by the fact that the plant is not suitable for the extraction and storage of milk, which at times forces the milk to be sold at 0.29 dollars per liter. Multinationals have submitted proposals to purchase the milk; to close these contracts they require compliance with the technical standard Guide of Good Practices for Milk Production, Technical Resolution No. 0217, fifth revision. As a solution, Systematic Layout Planning (SLP) was applied using the Relationship Diagram in the CORELAP software and the method of elimination of two-way movements, and the second alternative was selected. In addition, mechanical milk extraction equipment is implemented with the capacity to milk 50 cows in 10,500 s by milking two cows at the same rhythm; production time is reduced by 75%, going from 55,827 s to 13,950 s, daily extraction increases by 127 L, from 1,373 L to 1,500 L, and the personnel requirement is reduced from 5 to 3 operators. Keywords: Extraction · Computerization · Processes · Surface ratios

1 Introduction

Milk has become one of the most important consumer goods; it is a secretion produced by mammals to feed their young and has long been used as food by humans [1]. The logistic process of milk has been transformed from the traditional farm to the production plant, so efficient methods and models are needed to improve the production and collection of milk from these units [2, 3]. Increasing the demand for high-quality milk and obtaining higher quantities is the main concern of all dairy farms [4], and meeting this growing demand for dairy products requires excellent technological techniques to improve production [5, 6]. The use of hardware and software packages will increase the productivity of the agricultural trade, helping it meet the challenge of assembling very different information points to drive industry production [7, 8]. Clearly, the challenge is not simply one for science and technology, but one based on broader aspects of the food system and its various stakeholders [9, 10]. However, these efforts are primarily based on the fixed layout of existing facilities; meanwhile, although facility layout problems have been extensively studied, related works rarely include optimization [11]. The research field of facility layout design is extremely broad and covers other areas, such as cell layout, pick-up and drop-off points, and corridor structure design [12].

1.1 Methodology

The primary data for this study were collected through the Activity Relationship Diagram, the Space Relationship Diagram and the Systematic Layout Planning (SLP) methodology, applied while seeking a balance between technical and economic aspects. An advantage was that the farm already had an infrastructure used as a warehouse for the extraction equipment; this building, together with the equipment, was enabled for the operation of the plant and served as the starting point for the entire layout design. The optimal application of this methodology required following the sequence of steps reflected in Fig. 1.

Fig. 1. Functional scheme of the methodology

In theory, CORELAP evaluates the areas and the Total Closeness Rating (TCR) of the spaces, placing the site with the highest TCR or the largest area in the center and then placing the others according to their affinity with the ones already placed. In a first alternative, this plant layout design shortened the distances and therefore the transport times, improving the total extraction time. In a second, analogical distribution technique, we calculated step by step the time measurement units (TMU) of the existing data; this was achieved with a bimanual analysis using MTM-1 tables, in which the micro movements intrinsic to each activity of the milk extraction process were examined in detail. The initial situation of the plant and its main operations were diagnosed during the survey (see Fig. 2).
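As an illustration of this placement logic, the following sketch computes a CORELAP-style TCR from a relationship chart coded as in Table 1. The numeric weights assigned to the A-X codes and the relationship chart itself are assumptions made for illustration only; they are not the values used by the software nor the chart of Fig. 4.

```python
# Illustrative TCR (Total Closeness Rating) calculation in the spirit of CORELAP.
# The closeness codes follow Table 1; the numeric weights (A=6 ... X=1) are a common
# convention and are an assumption here, not taken from the paper.
WEIGHTS = {"A": 6, "E": 5, "I": 4, "O": 3, "U": 2, "X": 1}

# Hypothetical relationship chart between the five areas (not the chart of Fig. 4).
REL = {
    ("Corral", "Extraction"): "A",
    ("Extraction", "Storage"): "A",
    ("Storage", "Machine area"): "I",
    ("Warehouse", "Machine area"): "E",
    ("Corral", "Machine area"): "X",
    ("Corral", "Warehouse"): "X",
}

AREAS = ["Corral", "Extraction", "Storage", "Warehouse", "Machine area"]

def closeness(a: str, b: str) -> str:
    # unstated pairs default to U (unimportant)
    return REL.get((a, b)) or REL.get((b, a), "U")

def tcr(area: str) -> int:
    # TCR of an area = sum of the weights of its relationships with every other area
    return sum(WEIGHTS[closeness(area, other)] for other in AREAS if other != area)

# The area with the highest TCR is placed first (in the centre of the layout),
# and the remaining areas follow in decreasing TCR order.
for area in sorted(AREAS, key=tcr, reverse=True):
    print(f"{area}: TCR = {tcr(area)}")
```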

Fig. 2. Current analysis diagram of the process operations.

It was fundamental to decide which planning approach to use within the SLP. Because only one product is handled in this plant (milk extraction), it was unnecessary to perform the entire analysis; the most suitable option for this case was a product-oriented layout. We used the existing operations analysis diagram (Fig. 2) and the current route diagram of the milk extraction plant (Fig. 3), which served as the basis for the development of the activity relationship diagram.

Fig. 3. Current milk extraction plant route diagram.

1.1.1 Activity Relationship Diagram

The activity relationship diagram shows the relationship between the areas, ranked by the flows between them and by criteria such as convenience; we used the codifications shown in Tables 1 and 2, respectively. The main characteristics of each area were analyzed for its preparation:

1. Corral: a smaller corral where the cattle are held temporarily while the extraction process is carried out; this space is essential so that the cows do not wander around. It has a net size of 25 m².
2. Extraction: the area destined for milk extraction, staffed by one person in charge of milking, one cow at a time, and two people in charge of filling the buckets and transferring them to the storage area. It has a net area of 16 m².
3. Storage: an area with three 500-L plastic containers, with a net area of 30 m².
4. Warehouse: the area where all the equipment needed to implement a mechanical milk extractor is stored; it has an area of 12 m².
5. Machine area: an area of 12 m², mostly empty, used to store boots, shovels, buckets, cleaning material, etc.


Table 1. Coding of the activity relationship diagram.

Code    Definition
A       Absolutely necessary: these two departments must be together
E       Especially important
I       Important
O       Ordinarily important
U       Unimportant
X       Not desirable

Table 2. Coding by reason.

Number  Reason
1       For control
2       For hygiene
3       For process
4       For convenience
5       For safety

Fig. 4. Activity relationship diagram.

For the generation of alternatives for the new plant design, the results of Figs. 3, 4 and 5 were taken into account: it is absolutely necessary that the Extraction Area be next to the Storage Area and the Extraction Area for process reasons; the Warehouse should be next to the Warehouse for convenience and adjacent to the Machine Area for control; the Corral should not be close to the Machine Area for safety reasons nor near the Warehouse for hygiene reasons; and the Warehouse should not be next to the Machine Area for hygiene reasons. In order to make the plant layout as objective as possible, two methods were used: one using the CORELAP software and the other based on the theory of Richard Muther in his book Systematic Layout Planning (SLP).

Fig. 5. Diagram of space relationship.

1.1.2 Micro Movement Analysis with MTM-1 Tables

The analysis was based on the proposal of Maynard, Niebel, Stegemerten and Schwab [13], presented in 1948 as Methods-Time Measurement (MTM-1), which provides tables of predetermined values governing the time of the basic movements (referred to as therbligs), seventeen in number, into which a task at a workstation can be divided: reaching, moving, turning, grasping, positioning, unhooking, releasing, etc. Figure 6 presents the initial configuration with which the two-way movement elimination method was worked out, starting from Table 3 with the meters traveled between workstations, Table 4 with the movements traveled in an eight-hour workday, and Table 5, a weighted triangular table that shows the sum of the total movements.

Fig. 6. Initial configuration of the plant layout.


Table 3. Meters traveled between areas.

Meters traveled   Area 1   Area 2   Area 3   Area 4   Area 5
Area 1            x        13       21       25       25
Area 2                     x        8        12       12
Area 3                              x        12       4
Area 4                                       x        16
Area 5                                                x

Table 4. Movements between areas (TMU).

            Area 1        Area 2        Area 3    Area 4    Area 5
Area 1      x             16491.1286    0         0         0
Area 2      36486.9655    x             1519.5    3458.16   0
Area 3      0             32982.2571    x         0         0
Area 4      0             57334.7639    0         x         0
Area 5      0             0             0         0         x

Table 5. Sum of movements between areas (TMU).

Area 1
52978.09   Area 2
0          34501.76   Area 3
0          60792.92   0          Area 4
0          0          0          0          Area 5

Based on the results in Table 5, the sums of all the relationships were taken for the final evaluation and the area pairs were ordered in descending order, as shown in Table 6, demonstrating that the pairs with more TMU have a direct relationship and should be placed consecutively in the new design.

Table 6. Results of movements between areas sorted in descending order.

Rooms     Movements between rooms (TMU)
A2-A4     60792.92
A1-A2     52978.09
A2-A3     34501.76
A1-A3     0
A1-A4     0
A2-A5     0
A3-A4     0
A3-A5     0
A4-A5     0

The configuration proposed in Fig. 7 was reached after applying the method of elimination of two-way movements. Areas 2 and 4 have a relationship of 60786.92 TMU, equivalent to 36.47 min, which means these two areas should be together because there is a direct relationship between them; Areas 1 and 2 have a relationship of 52978.09 TMU (31.79 min), so Area 2 should be next to Area 4 and Area 1; and finally there is a relationship of 34501.76 TMU (20.70 min) between Areas 2 and 3. The remaining pairs have a value of 0.0 TMU and are not taken into account, since this is interpreted as no relationship between the areas, or as a merely indirect one.
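The TMU figures can be converted to clock time with the standard MTM equivalence of 1 TMU = 0.00001 h (0.036 s). A minimal sketch using the Table 6 values (variable names are ours) reproduces the minutes quoted above:

```python
# Convert the inter-area relationship values of Table 6 from TMU to minutes and rank
# the area pairs. Standard MTM equivalence assumed: 1 TMU = 0.00001 h = 0.036 s.
TMU_TO_SECONDS = 0.036

table6 = {
    ("A2", "A4"): 60792.92,
    ("A1", "A2"): 52978.09,
    ("A2", "A3"): 34501.76,
    # the remaining pairs are 0 TMU and are treated as having no direct relationship
}

for pair, tmu in sorted(table6.items(), key=lambda kv: kv[1], reverse=True):
    minutes = tmu * TMU_TO_SECONDS / 60
    print(f"{pair[0]}-{pair[1]}: {tmu:.2f} TMU = {minutes:.2f} min")
# Prints roughly 36.5, 31.8 and 20.7 min, matching the figures quoted in the text.
```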

Fig. 7. Proposed configuration of the plant layout obtained from the two-way movement elimination method.


1.2 Results of CORELAP Application

In the list of activities, the affinity between the five areas was evaluated, and based on this criterion each area was ordered according to its importance, resulting in a required surface area of 95 m² against 120 m² available, as shown in Fig. 8.
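As a quick check of the 95 m² requirement reported by CORELAP, the net areas listed in Sect. 1.1.1 can simply be added; the sketch below assumes the extraction area is counted once at 16 m².

```python
# Net areas (m²) taken from the description of the five areas in Sect. 1.1.1.
areas_m2 = {
    "Corral": 25,
    "Extraction": 16,   # assumption: counted once
    "Storage": 30,
    "Warehouse": 12,
    "Machine area": 12,
}
required = sum(areas_m2.values())
available = 120
print(f"Required surface: {required} m2 (available: {available} m2)")  # 95 m2 vs 120 m2
```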

Fig. 8. Presentation of CORELAP results

1.3 Graphical Solution

As the most appropriate solution, the software assigns the areas or spaces; Table 7 shows the relocation proposal.

Table 7. Graphic solution (most suitable layout).

Number   Initial department   Configuration proposed by the software
1        Corral
2        Extraction
3        Storage
4        Warehouse
5        Machinery area

With this proposal of the CORELAP algorithm, the relationship between the areas and the spaces of the proposed distribution is correct, because the continuous lines do not intersect or cross each other. This is a success in the design of the new distribution: the distances, and therefore the transport times, are shortened, improving the total extraction time (see Fig. 9).

Fig. 9. Diagram of the relationship of spaces proposed by CORELAP.

2 Discussion

The literature review is based on 166 articles published from 1953 to 2021 in international peer-reviewed journals. The literature on facility layout problems (FLPs) is presented under the broader headings of discrete-space and continuous-space FLPs; important formulations under static and dynamic environments, represented in discrete and continuous space, are presented, and articles on various facility representations for continuous-space Unequal Area Facility Layout Problems (UA-FLPs) are summarized [12].

The results achieved in this research increased milk production. With the chosen plant layout design, operation 1 (bucket washing) and operation 2 (selection of cattle in the house) are eliminated; transfer time 1 (transfer of cattle from the house to the corral) is reduced by 1200 s and transfer time 2 (transfer of cattle from the corral to the extraction area) by 350 s. The greatest reduction is seen in operation 3 (milking), with 16,680 s. Operation 4 (bucket filling) and transport 4 (transfer of buckets to the extraction area) are also eliminated, the time of operation 5 (washing of the buckets and the extraction area) is reduced by 1000 s, and transfer 5 (transfer of cattle from the corral to the house) is reduced by 2400 s. A new proposal for the operations is made, as shown in Fig. 10.


Fig. 10. Proposed operations analysis diagram.

3 Conclusions

The findings of this study show, by means of the method of elimination of two-way movements and the CORELAP software, that the new distributions obtained for the farm are similar and preserve the direct relationships and the essential reasons why certain areas should be together. The alternative produced by the method of elimination of unnecessary movements was chosen. This arrangement is valid because Areas 2 (extraction), 3 (storage) and 5 (machine room) are related in a consecutive, linear way and have a direct relationship, so the arrangement of connections and hoses with the extraction pumps, engines and storage tanks is sequential and the movement times between areas are reduced. The current buildings of the farm offer few conditions for implementing the milk extractor; with an investment of $5,000.00 it will be possible to adapt these facilities and put the equipment into operation. The selected alternative guarantees that the facilities maintain animal welfare, hygiene and proper disinfection, ensures that the surfaces and materials in contact with the animals and their products are not toxic, and complies with Article 6 (Facilities, Equipment, and Utensils) of the Guide to Good Practices for Milk Production, Technical Resolution No. 0217, Fifth Revision.


References

1. Erdemir, H., Yılmaz, M., Konyalıoğlu, A.K., Beldek, T., Çebi, F.: A Aydın Region, Turkey. In: Durakbasa, N.M., Gençyılmaz, M.G. (eds.) Digitizing Production Systems. LNME, pp. 568–577. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-90421-0_49
2. Karouani, E.: Sistema de monitoreo de recolección de milk-run utilizando el Internet de las cosas basado en Swarm Intelligence. IJISSCM 15(3), 1–17 (2022). https://doi.org/10.4018/IJISSCM.290018
3. Rahman, T.: Selección de la ubicación de las instalaciones para la industria de fabricación de plástico en Bangladesh mediante el uso del método AHP. Rev. Int. Invest. Ingeniería Ind. 7(3), 307–319 (2018)
4. Singha, K.: Tecnologías utilizadas en granjas lecheras avanzadas para optimizar el rendimiento de los animales lecheros: una revisión. Rev. Española Invest. Agraria 19(4), 10 (2021). https://doi.org/10.5424/sjar/2021194-17801
5. Muhammad Osama, A.A.: IoT para el desarrollo de la ganadería lechera inteligente. Revista de Calidad Alimentaria (2020). https://doi.org/10.1155/2020/4242805
6. Henchionun, M.: Revisión: tendencias para el consumo de carne, leche y huevo para las próximas décadas y el papel desempeñado por los sistemas ganaderos en la producción mundial de proteínas, vol. 15 (2021). https://doi.org/10.1016/j.animal.2021.100287
7. Asmare, B.: Una revisión de las tecnologías de sensores aplicables a la producción ganadera nacional y la gestión de la salud. Adv. Agric. 6 (2022). https://doi.org/10.1155/2022/1599190
8. K: Métodos para aumentar la productividad de la leche de las vacas Ayrshire. En la colección: El papel de los jóvenes científicos en la resolución de problemas urgentes de Materiales Complejos Agroindustriales. Conferencia internacional científico-práctica de jóvenes científicos y estudiantes, dedicada al 115 aniversario de la Universidad Agraria Estatal de San Petersburgo, pp. 94–95 (2019). https://iopscience.iop.org/article/10.1088/1755-1315/852/1/012050/meta
9. Kondratieva, N.: Potencial de productividad del ganado Ayrshire y su implementación en condiciones de Agro-Volok LLC, región de Novgorod. Serie de Conferencias IOP: Ciencias de la Tierra y del Medio Ambiente (2021). https://iopscience.iop.org/article/10.1088/1755-1315/852/1/012050#references
10. Kondratieva, T.N.G.: Mejora de la tecnología de producción de leche con el pastoreo de vacas en las condiciones de LLC Agro-Volok. En: Tecnologías modernas de producción de leche que ahorran recursos: de la teoría a la práctica. Materiales de la Conferencia Científica y Práctica de toda Rusia, pp. 169–173 (2018)
11. Zhang, T.: Disposición de las instalaciones del taller de fabricación orientada al ahorro de energía: un enfoque de solución mediante la optimización de enjambres de partículas con objetivos múltiples. Sostenibilidad (Suiza), vol. 14, Article number 2788 (2022)
12. Hunagund, K.: Una encuesta sobre problemas de diseño de instalaciones de espacio discreto y espacio continuo. Rev. Gestión Instalaciones 20(2), 235 (2022)
13. Niebel, B.W.: Ingeniería Industrial, Métodos, Estándar y Diseño del trabajo. McGraw Hill, México (2019)

Methodological Proposal for Micro-enterprises Through a Mathematical-Statistical Model Based on Integral Logistics

Marcelo Javier Mancheno-Saá(B), Jenny Margoth Gamboa-Salinas, and Jacqueline del Pilar Hurtado-Yugcha

Technical University of Ambato, Ambato, Ecuador
[email protected]

Abstract. This investigation arose from the execution of a commercial logistics project and its link with the digital transition; both variables have contributed to the optimization and competitiveness of several markets that sought to develop transitory competitive advantages. The model was established under a correlational-causal methodology, which implies a description of the data and an a priori conception of the variable to be tested in the field. One of the conclusions reached is that the research must be complemented in the digital field, understanding that the physical-digital transition has transcended and strengthened markets, so that the factors derived from it become part of innovation. The model is applied to SMEs and micro-SMEs; a sample of 373 companies across strata allowed an understanding of logistics in the different markets. The application of this model strengthens the productivity and effectiveness of companies. Keywords: Logistics · Innovation · Digital transition · SMEs · Competitive advantages

1 Introduction

Integral logistics, of military origin, has been refined over time and has given rise to commercial logistics. Over the years the sectors have become very competitive: what began as price competitiveness has become brand competitiveness, where each factor must be managed, maximized and even enhanced. Among these factors is logistics, and in particular logistics organization, a term that has allowed companies, not only productive ones, to focus on efficiency and effectiveness in order to increase productivity and generate a competitive advantage. The term has evolved considerably and has been adapted to the commercial sector, considering that it covers several points, each of which must be treated carefully by a specialist.



The term, initially presented as a link between operational activities, later came to represent a link with the consumer, since it conveys much of the consolidated quality of the product or service being offered. Commercial logistics is a specific term tied to distribution and specialized marketing; globalization has resulted in hyper-competitiveness in which most of these factors must be treated, studied and integrated. In business terms, it is necessary to coordinate activities that are complex in themselves, trying to obtain the lowest possible costs and, at the same time, a very high perceived added value in relation to the other market offerings. Trade logistics aims to place a product, in good condition, in the right place at the right time. Another approach to commercial logistics is the marketing approach, which takes into account not only the operational activities but also the level of satisfaction the client experiences at the time of the commercial exchange and the activities that made that feeling possible. In an eminently commercial area, it is necessary to consider reception, storage and any tactical or strategic movement up to the action established by the demand for the product. At the commercial level the term is linked not only to the concepts of fluidity, continuity and speed, but also to satisfaction, competitiveness and perception.

Zone 3 shows an average treatment of companies in relation to competitiveness and commercial logistics, with little strategic focus, even though commercial logistics, in addition to enabling competitiveness, allows long-term permanence. The consumer target can no longer be defined as quickly as it was decades ago: the consumer is changing by nature, greatly influenced by digital marketing, evidencing a mix between digitization and modernism. The third P, Place, has evolved and, in addition to being a complementary factor, has become a specific pillar of the marketing mix that can be decisive for achieving competitive advantages in the market. Since the study area is eminently commercial, commercial logistics is vital for every business activity: it defines the areas, cost control and therefore the personnel needs. Commercial logistics in the sector is directly related to meeting short- and long-term business objectives. In the short term it enables the productivity of each resource, generating profitability through the optimization of the cost structure; in the long term it produces competitiveness and positioning, allowing permanence over time. The factors related to commercial logistics act as complementary pillars with respect to delivery time, quality, customer service and price [1]. The evolution of commercial logistics is aimed at efficiency and productivity, all the more so given the oversaturation of advertising and the commercial presence of brands in the market. The relation of logistics to distribution and to the optimization of times in the management of the finished products market appears at the end of the twentieth century, generating a competitive advantage as suggested by Barney in 1991 [2].


The authors of [3, 4] indicate that the commercial evolution of logistics began with supply chain management, while the modern approach covers many factors of the point of sale, currently one of its most important pillars (Table 1).

Table 1. Layout and their management.

Demand
  Variables: demand analysis, product lines, quality, inventory planning, safety stocks
  Reference: (De Diego Morillo, 2015) [13]
  Indicators: level of demand management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Customer order process
  Variables: access to information, reception and dispatch documents, information income, response to requests, tracking of claims and returns
  Reference: [12, 3]
  Indicators: level of customer order management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Supplier management
  Variables: supplier evaluation, stock levels, communication system, purchase planning, supplier payments
  Reference: (García, 2013) [14]
  Indicators: level of supplier management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Merchandise reception
  Variables: reception policies, checking, inventory control, control cards, distribution and organization, entrance area
  Reference: (Anaya, Logística Integral "la función Operativa de la Empresa", 2015); (García, L.A., 2016)
  Indicators: level of merchandise reception management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Storage and packaging
  Variables: policies and procedures, storage techniques, signaling, shelving dimensions, adequate facilities, quick access, sanitation in warehouses, size, availability of tools
  Reference: [12]
  Indicators: level of storage and packaging management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Preparation of orders and transport
  Variables: policies and procedures, storage techniques, signaling, shelving dimensions, adequate facilities, quick access, sanitation in warehouses, size, availability of tools
  Reference: [8]
  Indicators: level of order preparation and transportation management in Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Layout and Human Resources
  Variables: efficient distribution of warehouse areas, human resources, training, personal protective equipment, performance
  Reference: (Anaya, Logística Integral "la función Operativa de la Empresa", 2015)
  Indicators: layout and human resources management level for Zone 3 warehouses; mean; standard deviation
  Instrument: survey

Organizational aspects
  Variables: legal regulations, performance indicators, continuous improvement plans, ABC system, process automation
  Reference: [8]
  Indicators: organizational management level of Zone 3 warehouses; mean; standard deviation
  Instrument: survey

2 Methodology

The investigation was carried out over 18 months and has a mixed nature: it is descriptive, since it intends to establish the properties and characteristics of the variables and sub-variables for building the model, and explanatory, since it treats the situation of the commercial stores of Zone 3 as an economic phenomenon. The design is cross-sectional, since it captures the characteristics of a single moment in time; the instruments were validated statistically and in the field, and a selective sampling determined by a specific stratification was used. The investigation is prospective, since it aims to establish a model that allows the management of commercial logistics in SMEs and micro-SMEs and, therefore, an increase in their profitability and effectiveness. For the categorization of the types of warehouses existing in Zone 3 of Ecuador, the database of the National Economic Census carried out in 2010 was used; this database was prepared from the count of all the economic units that make up the Ecuadorian productive sector.


This information is classified according to the CIIU (the International Standard Industrial Classification, ISIC), which facilitates classification based on international standards. According to [5], the CIIU allows data to be classified by economic activity in a way that facilitates statistical collection and analysis. The information used corresponds to the first four digits of the CIIU, Section G (Wholesale and retail trade; repair of motor vehicles and motorcycles), divisions 45, 46 and 47 (Tables 2, 3 and 4).

Table 2. Population.

Category (Zone 3)                                                                                            Total   Percentage
Retail sale in non-specialized stores with predominance of the sale of food, beverages or tobacco           7200    57.22%
Retail sale of hardware, paints and glass products in specialized stores                                    1180     9.38%
Retail sale of clothing, footwear and leather goods in specialized stores                                   3188    25.33%
Retail sale of pharmaceutical and medicinal products, cosmetics and toilet articles in specialized stores   1016     8.07%

Source: Censo Nacional Económico, 2010

n = (1.96² × 0.50 × 0.50 × 12584) / (12584 × 0.05² + 1.96² × 0.50 × 0.50)

n = 373 warehouses in Zone 3.
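The calculation above is the usual finite-population sample-size formula at 95% confidence (z = 1.96), p = q = 0.5 and a 5% error margin; a minimal sketch that reproduces the figure follows (function and variable names are ours).

```python
# Finite-population sample size (Cochran's formula with finite-population correction)
# for N = 12,584 warehouses, 95% confidence (z = 1.96), p = q = 0.5 and e = 0.05.
import math

def sample_size(N: int, z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    n = (z**2 * p * (1 - p) * N) / (N * e**2 + z**2 * p * (1 - p))
    return math.ceil(n)

print(sample_size(12584))  # 373
```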

Table 3. Percentages by province.

Categories   Warehouses   Percentage   Cotopaxi   Chimborazo   Pastaza   Tungurahua
Category 1   7200         57.22        21.38      33.78        7.19      37.65
Category 2   1180          9.38        20.51      32.29        7.54      39.66
Category 3   3188         25.33        17.85      30.52        5.58      46.05
Category 4   1016          8.07        20.37      32.97        7.28      39.37

Source: Censo Nacional Económico, 2010

Table 4. Sample distribution by province.

Categories   Warehouses   Cotopaxi   Chimborazo   Pastaza   Tungurahua   Total
Category 1   213          46         72           15        80           213
Category 2   35           7          11           3         14           35
Category 3   94           17         29           5         44           94
Category 4   31           6          10           2         12           31

Source: Censo Nacional Económico, 2010

2.1 Pilot Survey

A pilot survey was applied to the owners or managers of 30 stores, following [6], who indicates that it is advisable to apply the pre-test or pilot test to between 30 and 50 individuals who share the characteristics of the sample.

2.2 Reliability

According to [7], one of the most practical ways to measure the reliability of the construct is Cronbach's alpha, "an index used to measure the reliability of the internal-consistency type of a scale, that is, to assess the extent to which the items of an instrument are correlated". According to [8], it is used to evaluate the homogeneity of the questions or items, especially with polytomous response alternatives such as Likert-type scales; the values of this index lie between 0 and 1, where 0 means null reliability and 1 represents total reliability (Tables 5, 6 and 7).

Table 5. Case processing summary.

Cases   Valid         30    100.0
        Excluded(a)    0      0.0
        Total         30    100.0
a. Listwise deletion based on all variables in the procedure.

Table 6. Reliability statistics.

Cronbach's alpha   N of items
.867               52
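Although the coefficient was computed in SPSS, Cronbach's alpha can be reproduced from the raw item scores in a few lines. The sketch below assumes the responses are arranged as a respondents-by-items matrix (here 30 pilot respondents by 52 Likert items); the data generated are placeholders, not the survey responses, which on the real data gave an alpha of 0.867.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                          # number of items (52 in this survey)
    item_var = scores.var(axis=0, ddof=1)        # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Illustrative data only: 30 pilot respondents answering 52 Likert-type items (1-5).
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(30, 52)).astype(float)
print(round(cronbach_alpha(pilot), 3))
```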


Table 7. Dimensions and research variables.

Demand

1. Demand analysis is carried out (price, place, product and promotion) 2. Specific lines of products to be marketed have been established 3. The variety and quality of products meets the demand 4. Inventories are planned taking into account customer demand 5. There is a security stock (unforeseen)

Customer order process

6. Customers have access to information on available product lines, quantities and prices 7. There are documents for receiving and dispatching orders 8. Merchandise orders and dispatches are properly managed 9. Order information is entered into a computerized system 10. The response to orders is immediate 11. Claims and returns are processed and followed up

Supplier management

12. Supplier evaluation is carried out 13. The availability and variety of merchandise offered by suppliers allow maintaining the necessary stock levels 14. An agile communication system is maintained with suppliers 15. The purchase of merchandise from suppliers is planned and reported well in advance 16. Compliance with payment commitments towards the supplier is maintained

Merchandise reception

17. There are policies and procedures for receiving merchandise 18. There are means of verifying merchandise 19. The merchandise that is received coincides with that requested in quantity and quality 20. Inventory control is carried out (code, specific name of the product, unit of measure, quantity, location, etc.) 21. Stowage cards are prepared (Record of product inputs and outputs) 22. The distribution and organization in warehouses allows a flow without interruptions and with minimum routes 23. The merchandise reception area facilitates access to suppliers

Storage and packaging

24. Storage and packaging policies and procedures are in place 25. Merchandise with the highest turnover is stored in such a way that it is more accessible 26. The areas, corridors, columns and accommodations destined for the storage of products are marked 27. The width and height measurements of the aisles and shelves allow easy handling of merchandise 28. The area has a good technical state of electrical installations, ventilation, lighting, fire extinguishers and safety devices 29. The storage area is free of insects, rodents, birds and domestic animals 30. You have quick access to all the shelves 31. Shelf pockets adjust to size of loads 32. There are tools for optimal packaging (boxes, containers, tapes, labels)

Preparation of orders and transport

33. Order preparation policies or procedures are established 34. The merchandise is identified with bar codes, quantities, lot, name, expiration date and characteristics 35. There is a list of customer orders manually or computerized 36. A manual or computerized inventory system is maintained 37. Handling operations do not cause interruptions in the preparation of orders 38. Quick access to the requested merchandise 39. Tools are available for the optimal packaging of orders 40. There is a dispatch and transportation area 41. Shipping and transportation protects and guarantees the physical integrity of the merchandise

Layout and Human Resources

42. There is a design for the distribution of equipment and materials 43. There are clearly defined and distinctive areas in the warehouse 44. The necessary staff is available to execute warehouse operations 45. The staff is fully trained for the activity they carry out (attitudes, knowledge and skills) 46. There are the necessary means of protection for warehouse personnel (helmets, girdles, glasses, etc.) 47. Work productivity is continuously measured and improved

Organizational aspects

48. The legal regulations for the operation of the warehouse are complied with 49. Indicators are used to measure performance in warehouse management 50. There are continuous performance improvement plans based on indicators and customer satisfaction 51. There are efficient processes with improvements in cost reduction. (ABC system) 52. The internal organization is oriented towards the automation of processes and control

3 Development

3.1 Factorial Analysis

According to [9], factor analysis is used to analyze the relationships between variables by reducing the data to find homogeneous groups of variables from a numerous set. Because of the number of variables (the 52 considered in the survey used to design the model), the Kaiser-Meyer-Olkin (KMO) measure was used; according to [10], it evaluates the partial correlations to test whether the variables are suitable for correlation. The general rule states that the KMO value should be greater than 0.5 (the higher the better) to proceed with the other analyses; if it is less than 0.5, factor analysis cannot be applied. According to the results obtained with SPSS version 20, used for data management, the multifactorial analysis was as follows (Table 8):


Table 8. KMO and Bartlett test.

Kaiser-Meyer-Olkin measure of sampling adequacy              .780
Bartlett's test of sphericity    Approx. chi-square          6096.210
                                 df                          1326
                                 Sig.                        .000
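For readers without SPSS, the KMO measure and Bartlett's test reported in Table 8 can be reproduced from the raw item scores with the usual formulas; the sketch below uses synthetic placeholder data, not the survey responses (on the real data the paper reports KMO = 0.780, chi-square = 6096.210, df = 1326, Sig. = 0.000).

```python
import numpy as np
from scipy import stats

def bartlett_sphericity(data: np.ndarray):
    """Bartlett's test of sphericity for a (respondents x items) matrix."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2                      # 52 items -> 1326 degrees of freedom
    return chi2, dof, stats.chi2.sf(chi2, dof)

def kmo(data: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    R = np.corrcoef(data, rowvar=False)
    R_inv = np.linalg.inv(R)
    d = np.sqrt(np.outer(np.diag(R_inv), np.diag(R_inv)))
    A = -R_inv / d                             # anti-image (partial) correlations
    np.fill_diagonal(A, 0)
    np.fill_diagonal(R, 0)
    return (R**2).sum() / ((R**2).sum() + (A**2).sum())

# Placeholder data: 373 respondents x 52 items.
rng = np.random.default_rng(1)
X = rng.normal(size=(373, 52))
print(kmo(X), bartlett_sphericity(X))
```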

3.2 Bartlett’s Sphericity Test If Sig. (p-value) < 0.05 we accept H0 (null hypothesis) > factorial analysis can be applied. If Sig. (p-value) > 0.05 we reject H0 > factorial analysis cannot be applied. As calculated, the KMO is 0.780, this indicates that we can proceed with the analysis of factors, so there is evidence that there is a relationship between the variables. For [11], the closer to 1 the value obtained from the KMO test implies that the relationship between the variables is high. 3.3 Internal Consistency Coefficient Using Two Halves or Split-Half According to [12], in this approach internal consistency is considered by two parts that measure the same construct. This coefficient allows checking the results of the factorial analysis (Table 9). Table 9. Reliability statistics. Alfa de Cronbach

Parte 1 Parte 2

Valor

,756

N de elementos

26a

Valor

,816

N de elementos

26b

N total de elementos Correlación entre formularios Coeficiente de Spearman-Brown

,615 Longitud igual

,762

Longitud desigual

,762

Coeficiente de dos mitades de Guttman

,760

An alpha of 0.756 was obtained for part 1 and 0.816 for part 2; these values are considered acceptable and show that the instrument has internal consistency. The values to be extracted were then determined through Principal Component Analysis (PCA) (Figs. 1 and 2).
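The Spearman-Brown (equal-length) coefficient in Table 9 follows directly from the correlation between the two 26-item halves; a one-line check:

```python
# Spearman-Brown prophecy (equal-length) coefficient from the correlation
# between the two halves reported in Table 9.
r_between_forms = 0.615
spearman_brown = 2 * r_between_forms / (1 + r_between_forms)
print(round(spearman_brown, 3))  # 0.762, matching Table 9
```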


According to [13], in principal component analysis the new variables, or principal components, are linear combinations of the original variables, and a relatively small number of components explains most of the total variation of all the original variables (Table 10).

Table 10. Case Processing Summary.

Survey: question (extraction)

Demand: 1 (.542), 2 (.642), 3 (.668), 4 (.616), 5 (.636)
Customer order process: 6 (.665), 7 (.676), 8 (.592), 9 (.713), 10 (.595), 11 (.745)
Supplier management: 12 (.706), 13 (.587), 14 (.714), 15 (.668), 16 (.630)
Merchandise reception: 17 (.562), 18 (.584), 19 (.641), 20 (.659), 21 (.616), 22 (.608), 23 (.704)
Storage and packaging: 24 (.722), 25 (.644), 26 (.556), 27 (.546), 28 (.636), 29 (.679), 30 (.667), 31 (.652), 32 (.636)
Preparation of orders and transport: 33 (.674), 34 (.673), 35 (.698), 36 (.612), 37 (.599), 38 (.679), 39 (.645), 40 (.619), 41 (.680)
Layout and Human Resources: 42 (.656), 43 (.747), 44 (.618), 45 (.642), 46 (.725), 47 (.590)
Organizational aspects: 48 (.664), 49 (.657), 50 (.630), 51 (.709), 52 (.700)


Fig. 1. The final results of dimensions.

For the elaboration of the model, the variables with rotations considered very good are taken. According to [14], factor loadings above 0.45 are considered valid, above 0.55 good, above 0.63 very good and above 0.71 excellent; therefore, values at or above 0.63 are considered for the construction of the logistics model.
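A sketch of the cut applied to the Table 10 values, keeping only the items at or above the 0.63 threshold described by [14] (the dictionary simply transcribes Table 10):

```python
# Keep the survey items whose extraction value reaches the 0.63 "very good" threshold.
extraction = {
    1: .542, 2: .642, 3: .668, 4: .616, 5: .636,
    6: .665, 7: .676, 8: .592, 9: .713, 10: .595, 11: .745,
    12: .706, 13: .587, 14: .714, 15: .668, 16: .630,
    17: .562, 18: .584, 19: .641, 20: .659, 21: .616, 22: .608, 23: .704,
    24: .722, 25: .644, 26: .556, 27: .546, 28: .636, 29: .679, 30: .667, 31: .652, 32: .636,
    33: .674, 34: .673, 35: .698, 36: .612, 37: .599, 38: .679, 39: .645, 40: .619, 41: .680,
    42: .656, 43: .747, 44: .618, 45: .642, 46: .725, 47: .590,
    48: .664, 49: .657, 50: .630, 51: .709, 52: .700,
}
selected = {q: v for q, v in extraction.items() if v >= 0.63}
print(f"{len(selected)} of {len(extraction)} items reach the 0.63 threshold")
```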


Commercial logistics model (radar chart, scale 0 to 0.8) with axes for customer order process, organizational aspects, supplier management, layout and human resources, preparation of orders and transport, merchandise reception, and storage and packaging.

Fig. 2. A commercial logistics model.

4 Discussion

Commercial logistics in small and medium-sized companies is well represented by eight axes for its management and projection. At the beginning of its history, logistics referred to the movement of resources to obtain military effectiveness; currently those same resources generate transitory competitive advantages. The organizational aspects, which a priori are considered unrelated to logistics, turn out to be among the most important aspects to take into account. The testing of the model shows that, at the correlation level, the most important categories lie between 0.6 and 0.9. The validity of the instrument, built on theoretical pillars, reaches around 76% internal correlation, so the pillars are understood to have a shared nature. The model is explicit in nature, with 52 variables, which indicates that several of them, although they do not have a very high correlation, are related to each other.

5 Conclusions

The model can be implemented in the field; however, it is suggested that new research carry out the process of testing and feeding back the variables and sub-variables. It is also suggested to investigate the incidence of the concept in digital commerce, understanding that it is one of the biggest trends today.

Acknowledgements. This publication was possible thanks to the execution of the research project "Digital transition model for SMEs and Micro SMEs as a factor derived from the pandemic (COVID 19) in Zone 3 of Ecuador", project code PFCA18, financed by the Development Fund and Department of the Technical University of Ambato.


References

1. Al-Masri, A.Q., Al-Momani, N.: Sobre la combinación de pruebas independientes en caso de distribución log-logística. Revista electrónica de análisis estadístico aplicado 14(1), 217–229 (2021)
2. Alvarado Avalos, N.A.: La logística de abastecimiento para incrementar la productividad en pymes: una revisión sistemática entre el 2009–2019 (2021)
3. Aparicio Ruiz, P., Barbadilla Martín, E., Guadix Martín, J., Escudero Santana, A.: Gestión de proyectos aplicado a un juego de Logística. Aula abierta (2021)
4. Arribas, M.: Diseño y validación de cuestionarios. Matronas profesión 5(17), 23–29 (2004)
5. Campos, M.G.C., Bárcenas, J.P., González, C.M.P., Vargas, J.M.: Indicador para medir la contribución de la infraestructura de transporte al valor logístico de las cadenas de suministro
6. Costa Salas, Y.J., Castaño Pérez, N.J.: Simulación y optimización para dimensionar la flota de vehículos en operaciones logísticas de abastecimiento-distribución. Ingeniare. Revista chilena de ingeniería 23(3), 372–382 (2015)
7. Corral, Y.: Validez y confiabilidad de los instrumentos para la recolección de datos. Revista ciencias de la educación 33, 228–247 (2009)
8. Cuatrecasas-Arbós, L., Fortuny-Santos, J., Ruiz-de-Arbulo-López, P., Vintró-Sanchez, C.: Monitoring processes through inventory and manufacturing lead time. Industrial Management & Data Systems (2015)
9. Dzwigol, H., Trushkina, N., Kvilinskyi, O.S.: La logística verde como concepto de desarrollo sostenible de los sistemas logísticos en una economía circular. Actas de la 37.ª Asociación Internacional de Gestión de la Información Empresarial (IBIMA) (2021)
10. Epifania Moreno, M.J.: Estandarización de procesos y gestión de abastecimiento en las comercializadoras de productos farmacéuticos en el periodo 2010–2020: revisión sistemática de la literatura científica (2021)
11. Erdil, A.: La Gestión de la Cadena de Suministro Verde y Su Importancia. Investigaciones y Revisiones en Ciencias Sociales, Humanas y Administrativas
12. Escudero Zamora, J.: Mejora al proceso de compras nacionales, gerencia de abastecimiento y logística, Komatsu Cummins Chile Ltda. Doctoral dissertation, Universidad Andrés Bello (2013)
13. Garcés Guerrero, G.G.: Gestión de almacenamiento en el Centro Médico Dr. José Garcés Vera en la ciudad de Guayaquil. Bachelor's thesis, UTB, Babahoyo (2021)
14. García Molina, M.A.: Análisis de la logística de amazon en la distribución de productos a través del comercio electrónico en España. Una revisión sistemática de literatura (2022)

Material Selection for a Biomass Heat Exchange Multicriteria Decision Methods: Study Case on Ecuador

Juan Francisco Nicolalde1, Javier Martínez-Gómez1,2,3(B), Ricardo A. Narvaez C.2,4, Daniel Rivadeneira2, Boris German2, Michelle Romero2, Cristhian M. Velalcázar Rhea2, P. Cuji2, Danny F. Sinche Arias2, Carlos A. Méndez Durazno2, and E. Catalina Vallejo-Coral2

1 Facultad de Ingeniería y Ciencias Aplicadas, Universidad Internacional SEK, Quito 170302, Ecuador
[email protected]
2 Instituto de Investigación Geológico y Energético (IIGE), Quito 170518, Ecuador
3 Departamento de Teoría de la Señal y Comunicación, Área de Ingeniería Mecánica, Escuela Politécnica, Universidad de Alcalá, 28805 Alcalá de Henares, Madrid, Spain
4 Universidad Central del Ecuador, UCE-GIIP, 170521 Quito, Ecuador

Abstract. Given the need to explore efficient energy sources, cogeneration technologies using biomass have proven to be a useful alternative. However, in developing countries such as Ecuador the best materials are not always available, creating the need to import them and raising costs. In this sense, the present research proposes the selection of the best material for a heat exchanger that uses hot fluids from biomass combustion by means of multicriteria decision methods, taking into consideration availability in the country. The method weights the candidate materials through the subjective Analytic Hierarchy Process technique and performs the selection using the multicriteria optimization and compromise solution, the technique for order preference by similarity to ideal solution and the complex proportional assessment method; the agreement between the methods is assessed with Spearman's correlation. Furthermore, the selected material is validated by computational fluid dynamics simulations comparing the two best materials, demonstrating that even though copper C12200 is far more expensive, it also outperforms steel AISI 1015 in thermal energy transfer, allowing a hotter steam output. Keywords: Multicriteria decision · Material selection · Biomass · Heat exchanger · Energy · Computational fluid dynamics

1 Introduction

Global warming is an important threat to mankind, since recent data have revealed that an increase of 2 °C in the global temperature may lead to catastrophic results [1].


Hence, the scientific community has been working on solutions that lead to an eco-friendlier future. The potential bio-energy produced from biomass varies by country, but it is estimated that by the year 2050 biomass could provide 3000 TWh of electricity and save 1.3 billion tons of CO2 emissions per year [1]. On the other hand, organic material derived from waste, such as lignocellulosic residues from forestry, agricultural waste, food residues and even municipal waste [1], can be used for water heating; the material is a renewable source and carbon neutral, meaning that the amount of CO2 produced by burning it equals the CO2 absorbed during the growth of the organic material [2]. In this sense, the utilization of bioenergy for primary heat and cooking in large-scale combustion plants can produce heat efficiently enough to compete with fossil fuel sources, and most biomass power plants use direct combustion of biomass to produce high-pressure steam that runs turbines [1]. The production of energy can be further improved by cogeneration systems, in which electricity and thermal energy are produced simultaneously from a single biomass fuel source [3].

Regarding the equipment necessary for a waste-to-energy system, it is important to consider factors such as which type of waste will be managed; only then can considerations such as the heat exchanger for heat recovery be made [4]. It is also important to consider the composition of the flue gas from combustion: the elements identified in solid and liquid fuels are carbon, hydrogen, oxygen and sulphur [2, 5], and sulfuric acid, when formed, is a corrosive agent for metals [5]. Regarding the material used for heat exchangers, it is important to consider intrinsic parameters of the application such as heat transfer, but also operating problems such as corrosion resistance. Heat exchangers are used in several industrial applications; in gas turbines, stainless steel is used to avoid oxidation and corrosion, but this material suffers accelerated degradation due to water vapour, a problem solved by nickel-based alloys at the cost of a higher price [6]. In desalination processes with corrosion, cupronickel alloys are frequently used and show good performance in the presence of ammonia, suspended solids and elevated working temperatures [7]; moreover, in heat exchangers powered by synthetic gas (syngas) coming from a gasifier that burns biomass, the materials used are mainly hot-rolled mild steel, with mild iron steel for closing edges [8].

However, even though the options are wide, not all the materials described in the literature are available in Ecuador. In this sense, Vicuña used austenitic stainless steel ASME SA-213 for the tubes and ASME SA-312 for the shell in the implementation of a heat exchanger for the biomass degradation of residual sludge in a pilot plant for gasification in supercritical water [9]. Paredes & Gallardo used galvanized steel for the tubes and steel ASTM A36 for the shell in the construction of a heat exchanger for a biomass characterization combustion system [10].
Similarly, Delgado et al. also used ASTM A36 steel for the shell and schedule 40 tubes corresponding to ASTM A53 GRB steel, according to the steel supplier Dipac [12]; these materials were used in the design of a biomass and gas heat exchanger for heat generation in a rice drying process [11]. Montero & Vargas designed a reactor for the pyrolysis of residual biomass from roses and banana stems, where they used carbon steel for the tubes of the heat


exchanger, while the reactor, which is considered as the shell, used SUS 304 steel [13]. Serrano et al. developed the optimization of a biomass burner for rice drying, where the heat exchanger used ASTM A36 steel in the structure and AISI 1015 steel for the tubes [14]. Furthermore, Montesinos used structural steel for the shell and copper for the tubes of a heat exchanger for a biodigester used to control temperatures; the author acknowledges that copper is more expensive than other materials, but its thermal properties make it better suited for the application [15]. On the other hand, besides the steels already named, the Ecuadorian market offers the stainless steels ASTM A269 and ASTM A312 for tubes, with a described application for corrosive fluids [16]. As can be seen, the selection options are wide and the criteria can vary, making it difficult to choose the best material. For this issue, multicriteria decision methods (MCDM) such as the Analytic Hierarchy Process (AHP), the multicriteria optimization and compromise solution (VIKOR, for its acronym in Serbian), the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) and the Complex Proportional Assessment method (COPRAS) have shown to be useful for the selection of materials whose thermal performance is critical for a given application, allowing the selector to identify the best material [17]. Along these lines, Kilkovsky et al. used a multicriteria decision framework based on decision variables such as energy efficiency and net present value to determine the best system configuration for a biomass gasification-based cogeneration system, where the cogeneration option including sensible heat storage and internal heat recovery proved to be the best [4], while Saldanha et al. used MCDM to analyse and decide the best configuration for a shell and tube heat exchanger under uncertainty conditions [18]. Furthermore, the selection of the material needs a validation that can be performed by Computational Fluid Dynamics (CFD), such as that performed by Garoma & Yazdi, who used CFD to analyse the behaviour of a heat exchanger whose objective was to assess the feasibility of heating algal biomass [19]. In this sense, the present research has the objective of selecting the best material available on the Ecuadorian market for heating water with the burning gases that come from biomass using a shell and tube heat exchanger, and of validating the selection by CFD.

2 Method
Regarding the need to use the energy potential of biomass burning gases, bibliographical research compared with market availability is performed to determine the candidate materials. These materials are then assessed against defined criteria by MCDM, and the selection is finally validated by CFD, as follows.
2.1 Heat Exchanger Conditions
The selection of the heat exchanger is of vital importance in waste-to-energy systems, where considerations such as the temperature and composition of the fluid are taken into account [4]. When the heat transfer in the flue gas occurs with laminar flow [20] and temperatures lower than 500 °C, conventional


tubular heat exchangers are preferred; moreover, this type of heat exchanger is cheaper and easily available [4], and research agrees that at that temperature, with a flow rate of 0.1 kg*s−1, the heat exchanger presents the best average effectiveness [21], making it the best option. However, under biomass co-firing conditions heat exchangers lose reliability due to increasing corrosion problems [22]. On the other hand, it is important to have a compact design [4] with a surface area density that allows volume, weight and cost to be reduced without eliminating components useful for better heat transfer [23]; therefore, density is a parameter to consider and, to maintain compactness, the thermal expansion of the material is another parameter to take into account.
2.2 Material Selection
Since the criteria and the working conditions have been declared, the following step is to select the best material for the task. Different research works performed in Ecuador have used different materials for the shell and tube heat exchanger, all regarding the application of biomass burning. The proposed methodology takes the candidate materials used in the aforementioned research and submits them to an assessment by multicriteria methods. Table 1 presents the materials considered from the bibliographical research; however, since some criteria may differ or are not available from the steel supplier, a more convenient solution is to take the properties expressed in the library of the software CES EduPack for all the materials or their nearest equivalents. Hence, Table 2 presents the candidate materials with the properties to be assessed by MCDM. Also, even though no pressure problems are expected, since this factor has been considered in the design, a yield strength criterion is added and considered relevant. Likewise, the materials have been coded with the letter "T" for tube materials in order to have a better disposition in the calculations.
Table 1. Ecuadorian market materials

Material       | Part  | Reference
ASME SA-213    | Tubes | [9]
Galvanic Steel | Tubes | [10]
ASTM A53 GRB   | Tubes | [11]
Carbon Steel   | Tubes | [13]
AISI 1015      | Tubes | [14]
ASTM B280      | Tubes | [15]
ASTM A269      | Tubes | [16]
ASTM A312      | Tubes | [16]

However, in the case of the material ASME SA-213, the software does not display this material but, based on its composition, recommends the AISI 309, which states an application


of heat exchanger and oil burner, making this the nearest option. For the ASTM A36 steel, a recommendation was found in the material AISI 304, which at the same time corresponds to the material SUS 304 and has the application of tubes in heat exchangers, which is the demanded use [24]; regarding the Ecuadorian market, this material is used to manufacture exhaust pipes [12]. The AISI 1015 is found in the software and is a carbon steel with applications in general mechanical engineering. The stainless steel ASTM A269 is similar to the stainless steel AISI 316, whose typical uses include heat exchangers [24]; furthermore, according to the steel supplier in Ecuador, this carbon steel fulfils the requirements of the A53 standard used with galvanization [25]. Lastly, copper pipes for fluids that work at high temperatures and pressures are found under the standard ASTM B280 [26], which has a homologue in the software in the material Copper C12200, whose typical uses include heat exchangers [24].
Table 2. Referenced candidate materials

Reference | Material | Code | Corrosion rate (PREN) | Price (USD*kg−1) | Thermal conductivity (W*m−1*°C−1) | Thermal expansion (µstrain*°C−1) | Yield strength (MPa) | Density (kg*m−3)
ASTM A213 | AISI 309 | T1 | 22 | 3.64 | 13 | 16 | 205 | 8000
ASTM A36 / SUS 304 | AISI 304 | T2 | 18 | 2.93 | 14 | 18 | 205 | 8060
Galvanic / Carbon / ASTM A53 / AISI 1015 | AISI 1015 | T3 | Not susceptible | 0.78 | 50 | 13 | 255 | 7900
ASTM A269 | AISI 316 | T4 | 22.6 | 4.07 | 13 | 18 | 205 | 8070
ASTM A312 | AISI 348 | T5 | 17 | 2.91 | 16 | 17.3 | 192 | 8090
ASTM B280 | C12200 | T6 | Not susceptible | 6.82 | 290 | 17 | 55 | 8950

2.3 Multicriteria Decision Methods
Regarding the design and construction of shell and tube heat exchangers for biomass applications in Ecuador, the carbon steel ASTM A36 has mostly been used, considering this material as a structural steel. However, for the selection of the tubes, the bibliographical research and the market offer show different options. In this sense, the selection of the best material for the tubes is performed with a weighting of the criteria obtained with the AHP method, followed by a ranking selection with the methods VIKOR, TOPSIS and COPRAS. A final evaluation of the correlation of the results is developed using Spearman's coefficient. Furthermore, for the utilization of the MCDM it is important to categorise the criteria as beneficial, where the greater the value the better, or non-beneficial, where the greater the value the worse, as follows:


Corrosion rate: Beneficial
Price: Non-beneficial
Thermal conductivity: Beneficial
Thermal expansion: Non-beneficial
Yield strength: Beneficial
Density: Non-beneficial
2.4 Analytical Hierarchy Process
In MCDM, the assignment of weights to the criteria is an important step, and the AHP is one of the most popular methods for it [27]. This technique was used as a pairwise comparison, since other objective weighting methods give importance to the tensile strength criterion and do not take the CO2 footprint into consideration, which does not align with the objectives of this research. In this sense, the AHP was performed as described by Odu (2019), following the pairwise comparison method.
2.5 VIKOR Method
After the different criteria have been assigned a weight according to our objectives, the ranking of the materials begins with the VIKOR method. This technique ranks the alternatives by their closeness to the ideal solution and has an advantage when the decision maker does not have a preference [28], which is our case. The method is developed as expressed by Papathanasiou & Ploskas (2018) up to the proposition of a compromise solution, taking the criteria of Table 2 as the decision matrix. It is then necessary to evaluate the condition of acceptable advantage (C1), which requires Q(A(2)) − Q(A(1)) ≥ DQ for the ranked candidates; if this condition is not satisfied, the compromise solution should also consider every alternative A(l) for which Q(A(l)) − Q(A(1)) < DQ. Furthermore, the condition of acceptable stability (C2) requires that the best material according to Qi is also the best for Si and/or Ri; if only C2 is not satisfied, then A(1) and A(2) together represent the compromise solution.
2.6 Technique for Order Preference by Similarity to Ideal Solution
The TOPSIS method is based on obtaining artificial ideal solutions that indicate the position of each alternative according to the criteria, from the most desired to the least desired [30]. The TOPSIS method was developed as explained by Papathanasiou & Ploskas (2018a), up to the calculation of the distance of each alternative to the ideal and anti-ideal solutions; lastly, the closeness to the ideal solution is calculated, where the best alternative is the one nearest to 1.
2.7 Complex Proportional Assessment
The COPRAS method makes use of the direct and proportional dependence of the utility degree of alternatives whose criteria may be in conflict, giving an optimal solution, and it has been effective


in material selection [32]. In this sense, the method is applied as done by Mousavi-Nasab & Sotoudeh-Anvari (2018), where the comparative significance Qi is calculated and the utility degree Ui is determined, the alternative nearest to 100 being the best option.
2.8 Spearman's Coefficient
For this analysis, the rankings obtained from the previously applied multicriteria methods are grouped in pairs, combining all the methods with each other, and the rank correlation is evaluated for each pair [34].
2.9 Validation of Heat Potential

Fig. 1. CFD Model

Table 3. Simulation conditions [4, 20–22, 35]

Parameter                         | Condition
Maximum burning gases temperature | 500 °C
Burning gases flow rate           | 0.1 kg*s−1
Initial water temperature         | 24 °C
Inlet water pressure              | 50 PSI

The design of the heat exchanger takes into consideration the material used: the shell made of structural steel ASTM A36 has been commonly used, while the tubes, which are the part where the energy transfer takes place, have different options. The MCDM has chosen the copper C12200 as the material best suited for the


task; however, this material has the disadvantage of an elevated price. In this sense, a comparison between this material and the second best, the steel AISI 1015, is performed to analyze whether the higher price of copper is worth it. Therefore, Fig. 1 displays the CAD model of the shell and tube heat exchanger along with the intervening fluids, while Table 3 shows the conditions simulated by CFD using the software Autodesk Inventor, where the conditions of the burning gases are taken from the bibliographical research, considering CO2 as the working fluid; the water to be heated comes from tap water at ambient temperature, driven by an industrial water pump found on the Ecuadorian market.

3 Results and Discussion
The research has compared the referenced candidate materials by MCDM, showing a clear winner, and the CFD shows the performance of the materials as follows.
3.1 AHP Method

Table 4. AHP matrix

Criteria             | Corrosion rate | Price | Thermal conductivity | Thermal expansion | Yield strength | Density | Weight | Ranking
Corrosion rate       | 0.20 | 0.38 | 0.15 | 0.26 | 0.28 | 0.23 | 0.25 | 2
Price                | 0.07 | 0.13 | 0.15 | 0.26 | 0.17 | 0.14 | 0.15 | 3
Thermal conductivity | 0.59 | 0.38 | 0.44 | 0.26 | 0.39 | 0.32 | 0.40 | 1
Thermal expansion    | 0.07 | 0.04 | 0.15 | 0.09 | 0.06 | 0.14 | 0.09 | 4
Yield strength       | 0.04 | 0.04 | 0.06 | 0.09 | 0.06 | 0.14 | 0.07 | 5
Density              | 0.04 | 0.04 | 0.06 | 0.03 | 0.06 | 0.05 | 0.05 | 6

The AHP begins with a pairwise comparison that weighs all criteria against each other, followed by normalization and the sum of each row, giving the weight of each criterion and a ranking of its importance, as shown in Table 4. The objective of this research is to maximize the thermal transfer while the material is preserved by resisting the corrosion of the burning gases, also considering the price; therefore, the weighting prioritizes thermal conductivity, corrosion rate and price. The method was verified as applied correctly according to Odu (2019): the consistency ratio is 0.091, which is lower than 0.1, making it acceptable.
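The weighting step described above can be reproduced with a few lines of code. The following Python sketch assumes a hypothetical 1-9 pairwise judgement matrix for the six criteria (the paper reports only the normalized values of Table 4, not the raw judgements) and derives the weights and the consistency ratio in the same way.

import numpy as np

# Hypothetical reciprocal 1-9 judgement matrix for the criteria in the order
# corrosion rate, price, thermal conductivity, thermal expansion,
# yield strength, density (illustrative values only).
M = np.array([
    [1,   3,   1/3, 3,   5,   5],
    [1/3, 1,   1/3, 3,   3,   3],
    [3,   3,   1,   3,   5,   7],
    [1/3, 1/3, 1/3, 1,   1,   3],
    [1/5, 1/3, 1/5, 1,   1,   3],
    [1/5, 1/3, 1/7, 1/3, 1/3, 1],
], dtype=float)

# Column-normalize and average each row to obtain the criteria weights,
# as done to build Table 4.
weights = (M / M.sum(axis=0)).mean(axis=1)

# Consistency check: lambda_max, consistency index and consistency ratio
# (random index RI = 1.24 for n = 6 criteria).
n = M.shape[0]
lam_max = (M @ weights / weights).mean()
CR = (lam_max - n) / (n - 1) / 1.24
print(weights.round(2), round(float(CR), 3))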


3.2 VIKOR Method
From Table 2 with the material characteristics and Table 4 with the weights of the criteria, the standardized VIKOR matrix is obtained, followed by the values of S, R and Q and the ranking of the method shown in Table 5. Moreover, the difference in Qi between the first and the second material is less than the compromise value v = 0.5, and the best material is also the best in the S and R parameters, meaning that it has an acceptable advantage and acceptable stability.
Table 5. VIKOR matrix

Material | Si   | Ri   | Qi   | Ranking
T1       | 0.57 | 0.40 | 0.75 | 3
T2       | 0.77 | 0.39 | 0.96 | 5
T3       | 0.34 | 0.34 | 0.40 | 2
T4       | 0.59 | 0.40 | 0.78 | 4
T5       | 0.80 | 0.39 | 0.99 | 6
T6       | 0.34 | 0.15 | 0.00 | 1
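As a hedged illustration of how the S, R and Q values of Table 5 are obtained, the sketch below applies the standard VIKOR steps to the Table 2 data with the Table 4 weights. The "not susceptible" corrosion entries have no numeric value in Table 2, so an illustrative placeholder (40) is used here; the resulting ranking therefore need not coincide exactly with Table 5.

import numpy as np

# Rows T1..T6; columns: corrosion rate (PREN), price, thermal conductivity,
# thermal expansion, yield strength, density (Table 2); PREN placeholders = 40.
X = np.array([[22, 3.64, 13, 16, 205, 8000], [18, 2.93, 14, 18, 205, 8060],
              [40, 0.78, 50, 13, 255, 7900], [22.6, 4.07, 13, 18, 205, 8070],
              [17, 2.91, 16, 17.3, 192, 8090], [40, 6.82, 290, 17, 55, 8950]],
             dtype=float)
w = np.array([0.25, 0.15, 0.40, 0.09, 0.07, 0.05])   # AHP weights from Table 4
beneficial = np.array([True, False, True, False, True, False])

# Ideal (f*) and anti-ideal (f-) value of each criterion.
f_best = np.where(beneficial, X.max(axis=0), X.min(axis=0))
f_worst = np.where(beneficial, X.min(axis=0), X.max(axis=0))

# Weighted regret terms, then S (group utility), R (individual regret)
# and Q with the usual compromise weight v = 0.5.
d = w * (f_best - X) / (f_best - f_worst)
S, R = d.sum(axis=1), d.max(axis=1)
v = 0.5
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print(np.argsort(Q) + 1)   # alternatives (T-codes) ordered from best to worst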

The VIKOR method gives first place to the copper C12200 (T6) and second place to the AISI 1015 (T3). The reason for these positions is that the method is based on a succession of fuzzy mathematical operations, giving a certain degree of preference to the materials closest to an ideal solution. Regarding the use of the method, Beltrán & Martínez-Gómez (2019) successfully used VIKOR to select the best material for thermal applications in buildings, where the method was consistent with others and proved to deliver the best selection for an Amazon environment.
3.3 TOPSIS Method
For the TOPSIS method the same decision matrix is used; however, the normalization now divides the terms by the square root of the sum of the squared values of each criterion. Next, the weighted normalized matrix is calculated by multiplying each term by the weight obtained in AHP, and the ideal positive and negative solutions are identified. In this case, the best and worst values are determined as in the VIKOR method, where the beneficial criteria take the best value for the positive ideal and the worst for the negative ideal, and the inverse applies to the non-beneficial criteria. Furthermore, to measure the difference, the positive and negative distances (D) of each alternative are obtained, followed by the closeness of the solutions and, with it, the TOPSIS ranking; these results are displayed in Table 6. It is observed that first place is awarded to the copper C12200 (T6) and second place is obtained by the steel AISI 1015 (T3). Moreover, the TOPSIS method analyses the efficiency of each alternative, copper being the most efficient material for its application in energy transfer due to its high thermal conductivity.


Table 6. TOPSIS ranking results

Designation | Distance D+ | Distance D− | Solution proximity Ci | Ranking
T1          | 0.3739      | 0.0595      | 0.1373                | 5
T2          | 0.3722      | 0.0646      | 0.1479                | 4
T3          | 0.3215      | 0.1140      | 0.2618                | 2
T4          | 0.3749      | 0.0552      | 0.1284                | 6
T5          | 0.3698      | 0.0642      | 0.1479                | 3
T6          | 0.0988      | 0.3721      | 0.7902                | 1
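The closeness values Ci of Table 6 follow from the vector normalization, weighting and distances described in Sect. 2.6. A minimal sketch, reusing the same illustrative decision matrix and weights as in the VIKOR example (so the numbers are indicative only):

import numpy as np

X = np.array([[22, 3.64, 13, 16, 205, 8000], [18, 2.93, 14, 18, 205, 8060],
              [40, 0.78, 50, 13, 255, 7900], [22.6, 4.07, 13, 18, 205, 8070],
              [17, 2.91, 16, 17.3, 192, 8090], [40, 6.82, 290, 17, 55, 8950]],
             dtype=float)
w = np.array([0.25, 0.15, 0.40, 0.09, 0.07, 0.05])
beneficial = np.array([True, False, True, False, True, False])

# Vector normalization and weighting.
V = w * X / np.sqrt((X ** 2).sum(axis=0))
# Ideal (positive) and anti-ideal (negative) solutions.
v_pos = np.where(beneficial, V.max(axis=0), V.min(axis=0))
v_neg = np.where(beneficial, V.min(axis=0), V.max(axis=0))
# Euclidean distances and relative closeness Ci (1 = ideal solution).
d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))
d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
Ci = d_neg / (d_pos + d_neg)
print(Ci.round(4), np.argsort(-Ci) + 1)   # closeness and best-to-worst order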

Furthermore, the TOPSIS method proved useful and effective in other research where the selection of a material for thermal energy exploitation was needed [36].
3.4 COPRAS Method
The normalized COPRAS matrix takes every value and divides it by the sum of each criterion, followed by the weighted matrix that multiplies each criterion by its weight. Finally, Table 7 presents the sums of the weighted values for the beneficial and non-beneficial criteria, using the same principle as VIKOR and TOPSIS; the table also shows the comparative significance that leads to the utility percentage that determines the ranking.
Table 7. COPRAS ranking matrix

Material | S+      | S−      | Qi         | Ni   | Rank
T1       | 0.06968 | 0.04771 | 0.11243326 | 31%  | 3
T2       | 0.06273 | 0.04450 | 0.10857201 | 30%  | 5
T3       | 0.11104 | 0.02455 | 0.1941228  | 53%  | 2
T4       | 0.07087 | 0.05263 | 0.10962904 | 30%  | 4
T5       | 0.06192 | 0.04376 | 0.10853975 | 30%  | 6
T6       | 0.33843 | 0.07216 | 0.36670314 | 100% | 1
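A compact sketch of the standard COPRAS computation of S+, S− and the utility degree Ni reported in Table 7, again with the same illustrative decision matrix and AHP weights (placeholder corrosion values, so results are indicative only):

import numpy as np

X = np.array([[22, 3.64, 13, 16, 205, 8000], [18, 2.93, 14, 18, 205, 8060],
              [40, 0.78, 50, 13, 255, 7900], [22.6, 4.07, 13, 18, 205, 8070],
              [17, 2.91, 16, 17.3, 192, 8090], [40, 6.82, 290, 17, 55, 8950]],
             dtype=float)
w = np.array([0.25, 0.15, 0.40, 0.09, 0.07, 0.05])
beneficial = np.array([True, False, True, False, True, False])

# Sum-normalized, weighted matrix.
D = w * X / X.sum(axis=0)
# Sums over beneficial (S+) and non-beneficial (S-) criteria.
S_plus = D[:, beneficial].sum(axis=1)
S_minus = D[:, ~beneficial].sum(axis=1)
# Relative significance Q and utility degree N (best alternative = 100 %).
Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
N = 100 * Q / Q.max()
print(N.round(1))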

Evaluating the benefits of taking advantage of the energy produced by the biomass, it is observed again that first position is obtained by the copper C12200 (T6) and second place by the steel AISI 1015 (T3). The reason for these results is that the COPRAS method uses ideal and non-ideal thresholds to obtain an optimal solution, strongly influenced by the weighted values, where the highest weight corresponds to the thermal conductivity criterion (w3), which is the outstanding characteristic of this material. Furthermore, the COPRAS method has been used as the methodology for selecting a material for heat dissipation in brake discs, delivering an optimal design [37].


3.5 Spearman Correlation
Observing the results obtained by applying the VIKOR, TOPSIS and COPRAS multicriteria methods, it is determined that the winning material is the copper C12200, being the best material in all the applied methods. The data obtained from the Spearman correlation compares the MCDM results in a paired way. The relationship between VIKOR and TOPSIS has a perfect correlation of 1, meaning that the methods gave the same ranking. The correlations with the COPRAS method show a lower value of 0.486, which indicates a moderate correlation, meaning that the COPRAS ranking differs from the others; however, this difference still corresponds to a positive correlation, demonstrating that the win of the copper C12200 is consistent.
3.6 Simulation Results
The simulated scenario sets up the event where the biomass is burned and CO2 represents the flue gases; these go to the heat exchanger at a maximum temperature of 500 °C with a flow rate of 0.1 kg*s−1. On the other end, the water that will take advantage of this heat comes from the city piping and enters the heat exchanger at a volume flow rate of 111 GPM. The heat transfer produced when the tubes are made of copper C12200 is displayed in Fig. 2a, and the case where the tubes are made of steel AISI 1015 in Fig. 2b.

Fig. 2. CFD model: (a) copper C12200 tubes; (b) steel AISI 1015 tubes.


In this way, the simulation shows a clear difference in the heat transfer that is reflected in the output of the heated water: in the case of the copper tubes the heated fluid comes out at a temperature of 215 °C, while with the steel tubes the output is lower, at 189 °C. In this sense, even though the superior thermal conductivity of copper allows the heat transfer to be managed in a quicker and more efficient way, the difference in final temperature with respect to steel is nearly 25 °C. Hence, it must be considered that in the Ecuadorian market the copper C12200 can be found as ASTM B280 and the AISI 1015 as galvanized carbon steel ASTM A53, that both materials are rated as not susceptible to corrosion, that the price difference is large, and that in both cases the heated water can be used as steam for energy generation.

4 Conclusions
Given that the present investigation aims at taking advantage of the burning gases from biomass in a heat exchanger to heat water, and considering the thermal properties of the tube material as the main criterion to obtain the best heat transfer, the multicriteria decision methods chose the copper C12200, which in the Ecuadorian market has an equivalent in the copper ASTM B280. The second-best material is the AISI 1015, equivalent to the galvanized carbon steel A53, which is the preferred option among designers, since it has good properties compared with the other options and has the best price. In this sense, the copper ASTM B280 is more expensive and does not have the best properties except for the thermal conductivity; this thermal criterion exceeds the performance of the other materials in such an important way that the methods VIKOR, TOPSIS and COPRAS agree that this characteristic makes it the best. However, the CFD simulation reveals a steam output with a temperature difference of 25 °C. Therefore, bearing in mind that a biomass burning plant must take advantage of all the heat possible to produce electric energy, it is recommended to perform an efficiency and exergy analysis to show whether the copper option for the tubes is profitable even though it is far more expensive, considering its thermal advantages over steel.

References
1. Antar, M., Lyu, D., Nazari, M., Shah, A., Zhou, X., Smith, D.L.: Biomass for a sustainable bioeconomy: an overview of world biomass production and utilization. Renew. Sustain. Energy Rev. 139, 110691 (2021). https://doi.org/10.1016/J.RSER.2020.110691
2. Datta, R.G., Sarkar, L.: Energy and exergy analyses of an externally fired gas turbine (EFGT) cycle integrated with biomass gasifier for distributed power generation. Energy 35(1), 341–350 (2010). https://doi.org/10.1016/J.ENERGY.2009.09.031
3. Segurado, R., Pereira, S., Correia, D., Costa, M.: Techno-economic analysis of a trigeneration system based on biomass gasification. Renew. Sustain. Energy Rev. 103, 501–514 (2019). https://doi.org/10.1016/J.RSER.2019.01.008
4. Kilkovsky, B., Stehlik, P., Jegla, Z., Tovazhnyansky, L.L., Arsenyeva, O., Kapustenko, P.O.: Heat exchangers for energy recovery in waste and biomass to energy technologies – I. Energy recovery from flue gas. Appl. Therm. Eng. 64(1–2), 213–223 (2014). https://doi.org/10.1016/J.APPLTHERMALENG.2013.11.041


5. Paraschiv, L.S., Serban, A., Paraschiv, S.: Calculation of combustion air required for burning solid fuels (coal/biomass/solid waste) and analysis of flue gas composition. Energy Rep. 6, 36–45 (2020). https://doi.org/10.1016/J.EGYR.2019.10.016
6. Min, J.K., Jeong, J.H., Ha, M.Y., Kim, K.S.: High temperature heat exchanger studies for applications to gas turbines. Heat Mass Transf. 46(2), 175 (2009). https://doi.org/10.1007/s00231-009-0560-3
7. Malik, U., Al-Fozan, S.A., Al-Muaili, F.: Corrosion of heat exchanger in thermal desalination plants and current trends in material selection. Desalin. Water Treat. 55(9), 2515–2525 (2015). https://doi.org/10.1080/19443994.2014.940642
8. Nwokolo, N., Mukumba, P., Obileke, K.: Thermal performance evaluation of a double pipe heat exchanger installed in a biomass gasification system. J. Eng. 2020, 6762489 (2020). https://doi.org/10.1155/2020/6762489
9. Vicuña, L.: Selección y diseño de un intercambiador de calor para la degradación de biomasa de lodos residuales de una planta piloto de gasificación en agua supercrítica. Escuela Superior Politécnica de Chimborazo (2014)
10. Paredes, E., Gallardo, P.: Diseño y construcción de un sistema de combustión con capacidad de 10KW para caracterización térmica de biomasa residual, con aplicación al laboratorio de energías renovables del DECEM. Escuela Politécnica del Ejército (2008)
11. Delgado, E., Arévalo, A., Ávila, W.: Diseño de un horno intercambiador de biomasa y gas para la generación de calor utilizada en el proceso de secado del arroz. Escuela Superior Politécnica del Litoral (2019)
12. Dipac: Tubo Cédula 40. Productos (2022). https://dipacmanta.com/producto/tuberia-sin-costura/tubo-cedula-40/tubo-cedula-40/. Accessed 02 Jun 2022
13. Montero, C., Vargas, J.: Diseño de un reactor para pirólisis de biomasa residual: raquis de banano y tallos de rosas. Universidad Central del Ecuador (2019)
14. Serrano, G., Rendón, C., Delgado, E.: Optimización de un horno de combustión de biomasa para el secado de arroz. Escuela Superior Politécnica del Litoral (2020)
15. Montesinos, J.J.: Diseño y construcción de un intercambiador de calor para el biodigestor a escala piloto y control de las condiciones de temperatura. Universidad San Francisco de Quito (2009)
16. Dismetal: Tubería de acero inoxidable cédula 10. Productos (2022). https://dismetal.ec/productos/tuberias/acero-inoxidable/cedula-10. Accessed 03 Jun 2022
17. Nicolalde, J.F., Cabrera, M., Martínez-Gómez, J., Salazar, R.B., Reyes, E.: Selection of a PCM for a vehicle's rooftop by multicriteria decision methods and simulation. Appl. Sci. 11(14) (2021). https://doi.org/10.3390/app11146359
18. Saldanha, W.H., Arrieta, F.R.P., Ekel, P.I., Machado-Coelho, T.M., Soares, G.L.: Multicriteria decision-making under uncertainty conditions of a shell-and-tube heat exchanger. Int. J. Heat Mass Transf. 155, 119716 (2020). https://doi.org/10.1016/J.IJHEATMASSTRANSFER.2020.119716
19. Garoma, T., Yazdi, R.E.: Algal biomass harvesting using low-grade waste heat: evaluation of overall heat transfer coefficient in a heat exchanger. J. Heat Transfer 143(1) (2020). https://doi.org/10.1115/1.4048473
20. de Best, C.J.J.M., van Kemenade, H.P., Brunner, T., Obernberger, I.: Particulate emission reduction in small-scale biomass combustion plants by a condensing heat exchanger. Energy Fuels 22(1), 587–597 (2008). https://doi.org/10.1021/ef060435t
21. Al-attab, K.A., Zainal, Z.A.: Performance of high-temperature heat exchangers in biomass fuel powered externally fired gas turbine systems. Renew. Energy 35(5), 913–920 (2010). https://doi.org/10.1016/J.RENENE.2009.11.038
22. Simms, N.J., Kilgallon, P.J., Oakey, J.E.: Degradation of heat exchanger materials under biomass co-firing conditions. Mater. High Temp. 24(4), 333–342 (2007). https://doi.org/10.3184/096034007X281640


23. Shen, C., Jiang, Y., Yao, Y., Wang, X.: An experimental comparison of two heat exchangers used in wastewater source heat pump: a novel dry-expansion shell-and-tube evaporator versus a conventional immersed evaporator. Energy 47(1), 600–608 (2012). https://doi.org/10.1016/J.ENERGY.2012.09.043
24. Granta-Design, L.: About Eco-Audit Tool (2019). https://support.grantadesign.com/resources/cesedupack/2019/help/topic.htm#t=html/eco/eco_about.htm%23material. Accessed 15 Feb 2022
25. Ecuador, G.M.S.: Tubería ASTM A53 GrB Cédula 40 - sin costura. Ficha técnica tubería de acero cédula 40 (2020). https://www.gmsecuador.net/tuberia/tuberia-de-acero/. Accessed 03 Jun 2022
26. Construex: Tubería de cobre para refrigeración. Producto (2022). https://construex.com.ec/exhibidores/metalfuji/producto/tuberia_de_cobre_para_refrigeracion
27. Odu, G.O.: Weighting methods for multi-criteria decision making technique. J. Appl. Sci. Environ. Manag. 23(8), 1449 (2019). https://doi.org/10.4314/jasem.v23i8.7
28. Moghtadernejad, S., Chouinard, L.E., Mirza, M.: Multi-criteria decision-making methods for preliminary design of sustainable facades. J. Build. Eng. 19, 181–190 (2018). https://doi.org/10.1016/j.jobe.2018.05.006
29. Papathanasiou, J., Ploskas, N.: VIKOR. In: Multiple Criteria Decision Aid: Methods, Examples and Python Implementations, pp. 31–55. Springer International Publishing, Cham (2018)
30. Salazar Loor, R.B., Martínez-Gómez, J., Rocha-Hoyos, J.C., LLanes Cedeño, E.A.: Selection of materials by multi-criteria methods applied to the side of a self-supporting structure for light vehicles. Int. J. Math. Oper. Res. 16(2), 139–158 (2020). https://doi.org/10.1504/IJMOR.2020.105844
31. Papathanasiou, J., Ploskas, N.: TOPSIS. In: Multiple Criteria Decision Aid: Methods, Examples and Python Implementations, pp. 1–30. Springer International Publishing, Cham (2018)
32. Emovon, I., Oghenenyerovwho, O.S.: Application of MCDM method in material selection for optimal design: a review. Results Mater. 7, 100115 (2020). https://doi.org/10.1016/j.rinma.2020.100115
33. Mousavi-Nasab, S.H., Sotoudeh-Anvari, A.: A new multi-criteria decision making approach for sustainable material selection problem: a critical study on rank reversal problem. J. Clean. Prod. 182, 466–484 (2018). https://doi.org/10.1016/j.jclepro.2018.02.062
34. Beltrán, R.D., Martínez-Gómez, J.: Analysis of phase change materials (PCM) for building wallboards based on the effect of environment. J. Build. Eng. 24, 100726 (2019). https://doi.org/10.1016/j.jobe.2019.02.018
35. Conauto: Mark Grundfos bomba centrífuga DS8 DS9 DS10. In: Productos (2022). http://www.conauto.com.ec/index.php/mark-grundfos-bomba-centrifuga-ds8-ds9-ds10/. Accessed 06 Jun 2022
36. Nicolalde, F., Cabrera, M., Martínez-Gómez, J., Salazar, R.B., Reyes, E.: Selection of a phase change material for energy storage by multi-criteria decision method regarding the thermal comfort in a vehicle. J. Energy Storage 51, 104437 (2022). https://doi.org/10.1016/J.EST.2022.104437
37. Maheshwari, N., Choudhary, J., Rath, A., Shinde, D., Kalita, K.: Finite element analysis and multi-criteria decision-making (MCDM)-based optimal design parameter selection of solid ventilated brake disc. J. Instit. Eng. (India): Series C 102(2), 349–359 (2021). https://doi.org/10.1007/s40032-020-00650-y

Design and Simulation of an Aircraft Autopilot Control System: Longitudinal Dynamics

Luis A. Coello(B), Fausto A. Jácome, Jonathan R. Zurita, Carlos W. Casa, and Jonathan S. Vélez

Universidad de las Fuerzas Armadas ESPE, Sangolquí, Ecuador
[email protected]

Abstract. This research was carried out with the purpose of designing and simulating a control system for the longitudinal dynamics of the autopilot of a general aviation aircraft, allowing an approach and landing circuit to be executed automatically and correctly, based on the requirements given by the highest category of the Instrument Landing System (CAT III C), which allows fully automatic landing. The Linear Quadratic Regulator (LQR) methodology and the Affine Parameterization methodology were used to design the control loops that perform the coupling to the glide slope. The controllers were then tested through a dynamic autopilot simulation model of the aircraft under study, where an offset was found between the pitch angle reference and the measured pitch angle of the aircraft; it would therefore be necessary in the future to implement a state observer in the design of the vertical attitude controller, so that the developed control system fully complies with the general requirements and the proposed pre-design specifications. The methodology used could serve as a basis for the compilation of new results and possible comparisons. Keywords: Autopilot · Longitudinal dynamics · LQR · Affine parameterization

1 Introduction
The advance in aircraft design from the very limited capacity of the Wright Brothers' Flyer to high-performance aircraft requires the development of various technologies such as aerodynamics, structures, materials, propulsion and flight controls [1]. Current aircraft designs rely heavily on the automatic control system to monitor and control many of the aircraft's subsystems. The development of the automatic control system has played an important role in the growth of aviation worldwide [2]. Modern aircraft include a variety of automatic control systems that assist the flight crew in navigation, flight management, and increasing the stability characteristics of the aircraft. Automatic pilots are designed to include automatic landing systems, which must be able to guide the aircraft from a certain altitude to the runway in safe conditions. An automatic landing system, better known as an ILS1, is a precision landing aid



recommended by the ICAO2 [3], which is used to provide precise signals to guide the aircraft to land on the runway in normal or adverse weather conditions. The instrument landing system is a highly accurate and reliable means of navigating to the runway under IFR3 conditions; the ILS provides the necessary lateral (localizer) and vertical (glide slope) guidance to perform a precision approach [4]. The design of an automatic landing system is complex and requires extensive experience in control systems theory and knowledge of the different aerodynamic parameters involved in the design and operation of an aircraft [5]. Many works have been carried out on control systems for automatic pilots using different methodologies. Among the most recent is the NEM4, a control design technique that exploits the energy state of a system to achieve stabilization and/or tracking, but it faces the complexity of studying nonlinear aircraft systems [6]. Another method is based on NNs5, which use the signals provided by observers that receive information about the error of the automatic control system [7], but it requires a greater number of resources. Most of the automatic landing system controllers presented in the literature are developed from linear models, which allow the longitudinal dynamics and the lateral-directional dynamics to be decoupled [8]. Thus, in the present work, an investigation is carried out on the development of control schemes for the longitudinal dynamics of a general aviation aircraft; the system involves a vertical attitude controller, a speed controller and a vertical trajectory controller [9]. The most practical and simple methodologies used in the development of the controllers are the LQR and the Affine Parameterization methodology [10].
1 ILS: Instrument Landing System.

2 Automatic Landing System
2.1 Instrument Landing System
The Instrument Landing System is a precision runway approach aid employing two radio beams to provide pilots with vertical and horizontal guidance during the landing approach [11]. The localizer (LOC) provides azimuth guidance, while the glide slope (GS) defines the correct vertical descent profile. Marker beacons and high-intensity runway lights may also be provided as aids to the use of an ILS [22]. The ILS LOC aerials are normally located at the end of the runway; they transmit two narrow intersecting beams, one slightly to the right of the runway centerline and the other slightly to the left, which, where they intersect, define the "on LOC" indication (see Fig. 1). The ILS GS aerials are normally located on the aerodrome; they transmit two narrow intersecting beams, one slightly below the required vertical profile and the other slightly above it, which, where they intersect, define the "on GS" indication (see Fig. 2) [23].
2 ICAO: International Civil Aviation Organization.
3 IFR: Instrument Flight Rules.
4 NEM: Nonlinear Energy Method.
5 NNs: Neural Networks.


Fig. 1. Aircraft on localizer optimal path.

Fig. 2. Aircraft on glide slope optimal path.

The CAT III C category is the highest category of the ILS; it allows fully automatic landing with zero ceiling and zero visibility. This category does not require minimums, and it is the autopilot that lands the aircraft and starts the deceleration run until the pilot takes control by deactivating the A/P6 to take the aircraft off the runway. To be able to land an aircraft without visual reference to the runway, an automatic landing system is required that can intercept the localizer and glide slope signals (see Fig. 3); the aircraft is then guided along the glide slope with a certain rate of descent down to a certain height, where the plane executes the flare maneuver so that it touches the runway [12].

Fig. 3. Maneuver to intercept the glide slope, typical angle between 2.5° and 3°.

6 A/P: Auto Pilot.
3 Mathematical Model of Longitudinal Dynamics
3.1 Aircraft Model
A state equation is a first-order vector differential equation and is a natural form in which to represent the equations of motion of an aircraft. Its most general expression is


indicated in Eq. (1), where x(t) is the state vector and u(t) is the control vector. The elements of the vector x(t) are termed the state variables and the elements of the vector u(t) the control input variables; A is the state coefficient matrix and B the driving matrix. The system of equations that describes the motion of the aircraft is detailed in [13].

ẋ(t) = A x(t) + B u(t)   (1)

3.2 Longitudinal Dynamics Model
For longitudinal dynamics, the axial velocity perturbation and normal velocity perturbation are u and w, respectively. The pitch rate perturbation is q, and the pitch angle perturbation is θ. The elevator angle perturbation is η (see Fig. 4), and the engine thrust perturbation is τ. The coefficients of the state matrix A are the aerodynamic stability derivatives, referred to aircraft body axes, and the coefficients of the input matrix B are the control derivatives, as indicated in Eq. (2). The definitions of the derivatives are given in [14].

\begin{bmatrix} \dot{u} \\ \dot{w} \\ \dot{q} \\ \dot{\theta} \end{bmatrix} =
\begin{bmatrix} x_u & x_w & x_q & x_\theta \\ z_u & z_w & z_q & z_\theta \\ m_u & m_w & m_q & m_\theta \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} u \\ w \\ q \\ \theta \end{bmatrix} +
\begin{bmatrix} x_\eta & x_\tau \\ z_\eta & z_\tau \\ m_\eta & m_\tau \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} \eta \\ \tau \end{bmatrix}   (2)

Fig. 4. Aerodynamic controls notation.

The longitudinal state equation may be augmented to include the engine dynamics and, after some rearrangement, may be written as indicated in Eq. (3), where the throttle lever angle is ε, the turbo-jet engine time constant is Tτ and the turbo-jet engine gain


constant is kτ. This equation is used later in the design of the controllers [15].

\begin{bmatrix} \dot{u} \\ \dot{w} \\ \dot{q} \\ \dot{\theta} \\ \dot{\tau} \end{bmatrix} =
\begin{bmatrix} x_u & x_w & x_q & x_\theta & x_\tau \\ z_u & z_w & z_q & z_\theta & z_\tau \\ m_u & m_w & m_q & m_\theta & m_\tau \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & -1/T_\tau \end{bmatrix}
\begin{bmatrix} u \\ w \\ q \\ \theta \\ \tau \end{bmatrix} +
\begin{bmatrix} x_\eta & 0 \\ z_\eta & 0 \\ m_\eta & 0 \\ 0 & 0 \\ 0 & k_\tau / T_\tau \end{bmatrix}
\begin{bmatrix} \eta \\ \varepsilon \end{bmatrix}   (3)
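For reference, the augmented model of Eq. (3) can be assembled and inspected numerically as sketched below. The stability and control derivatives are placeholder values, not the data of the aircraft studied in this paper; only the engine constants T_tau = 1 s and k_tau = 13600 quoted later in Sect. 4.2 are taken from the text.

import numpy as np

# Placeholder stability and control derivatives (Eq. (3) notation), for
# illustration only.
x_u, x_w, x_q, x_th, x_t = -0.02, 0.05, 0.0, -9.81, 1.0e-5
z_u, z_w, z_q, z_th, z_t = -0.23, -0.60, 65.0, 0.0, 0.0
m_u, m_w, m_q, m_th, m_t = 0.0, -0.02, -0.90, 0.0, 0.0
x_eta, z_eta, m_eta = 0.0, -5.0, -3.0
T_tau, k_tau = 1.0, 13600.0          # engine time constant and gain (Sect. 4.2)

A = np.array([[x_u, x_w, x_q, x_th, x_t],
              [z_u, z_w, z_q, z_th, z_t],
              [m_u, m_w, m_q, m_th, m_t],
              [0.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, -1.0 / T_tau]])
B = np.array([[x_eta, 0.0],
              [z_eta, 0.0],
              [m_eta, 0.0],
              [0.0, 0.0],
              [0.0, k_tau / T_tau]])

# Open-loop eigenvalues of A: complex pairs correspond to the phugoid and
# short-period modes, with wn = |lambda| and zeta = -Re(lambda)/|lambda|.
print(np.round(np.linalg.eigvals(A), 3))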

The dynamic stability modes. Both longitudinal dynamic stability modes are excited whenever the aircraft is disturbed from its equilibrium trim state. A disturbance may be initiated by pilot control inputs, a change in power setting, airframe configuration changes such as flap deployment and by external atmospheric influences such as gusts and turbulence [15]. The short period mode. The short period mode is typically a damped oscillation in pitch (see Fig. 5). Whenever an aircraft is disturbed from its pitch equilibrium state the mode is excited and manifests itself as a classical second order oscillation in which the principal variables are incidence, pitch rate and pitch attitude [16]. This observation is easily confirmed by reference to the eigenvectors in the solution of the equations of motion.

Fig. 5. Short period mode.

The phugoid mode. The phugoid mode is most commonly a lightly damped low frequency oscillation in speed which couples into pitch attitude and height (see Fig. 6). A significant feature of this mode is that the incidence remains substantially constant during a disturbance [16]. Again, these observations are easily confirmed by reference to the eigenvectors in the solution of the equations of motion.

Fig. 6. Phugoid mode.


4 Control System for Coupling to the Glide Slope
The control system that performs the coupling to the glide slope consists of three controllers: a vertical attitude controller, a speed controller and a vertical trajectory controller. Together they allow the aircraft to engage the glide slope with a constant speed in the approach phase up to the decision height, and then slow down as needed to the landing speed. Figure 7 shows the variables involved in the coupling of the aircraft to the glide slope: u0 is the speed of the aircraft, γ is the glide slope angle, γr is the reference angle of the glide slope (2.5° is taken as reference), and the difference between the two angles can be estimated by considering the aircraft pitch angle perturbation θ, as seen in Eq. (4).

Fig. 7. Basic glide slope terminology.

γ − γr ≈ θ − γr ≈ θ − 2.5°   (4)

The aircraft tries to follow the flight path with a deviation (d > 0); therefore, through geometry, the variation of the deviation is given by Eq. (5). Along the glide slope, the pitch attitude θ and the speed u0 must be controlled, the speed being handled by the speed controller.

ḋ = u0 sin(γ − γr) ≈ u0 (γ − γr) ≈ u0 (θ − 2.5°)   (5)

4.1 Vertical Attitude Controller
The vertical attitude controller was designed using the LQR methodology (MIMO7); the procedure is detailed in [15] and [17]. To design the pitch (vertical attitude) controller, the full state is considered to be measurable and the longitudinal state equation given in Eq. (2) is reduced, since in this case u has to be constant, which leads to Eq. (6).

\begin{bmatrix} \dot{w} \\ \dot{q} \\ \dot{\theta} \end{bmatrix} =
\begin{bmatrix} z_w & z_q & z_\theta \\ m_w & m_q & m_\theta \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} w \\ q \\ \theta \end{bmatrix} +
\begin{bmatrix} z_\eta \\ m_\eta \\ 0 \end{bmatrix} \eta   (6)

7 MIMO: Multiple-Input, Multiple-Output.


Based on the open-loop dynamics of the aircraft, the longitudinal modes given by the state matrix A can be observed: the phugoid mode is identified with a frequency of ωp = 0.185 rad/s and a damping factor of ζp = 0.048, and the short-period mode with a frequency of ωs = 1.53 rad/s and a damping factor of ζs = 0.552. The two modes are lightly damped; therefore, it is necessary to increase the damping using closed-loop control, for which the flight-quality requirements of the MIL-8785C standard are applied to the aircraft as indicated in [15] and [18]. In addition, the limit of the elevator angle perturbation η is ±16° as maximum value and ±1 as normalized maximum value. The matrices Q and R are defined by Bryson's rules as detailed in [15], and using the LQR methodology the coefficients of the matrices are varied until the appropriate closed-loop characteristics are obtained: the phugoid mode is then identified with a frequency of ωp = 1.03 rad/s and a damping factor of ζp = 1, and the short-period mode with a frequency of ωs = 2.66 rad/s and a damping factor of ζs = 0.676. Once the vertical (pitch) attitude controller was designed (see Fig. 8), it was verified through simulation that it works correctly for a step reference of the aircraft's pitch angle.

Fig. 8. Block diagram of the vertical attitude controller.
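A minimal sketch of the LQR design step with Bryson-rule weights is shown below, using placeholder numbers for the reduced pitch dynamics of Eq. (6); only the ±16° elevator limit is taken from the text, and the state limits are assumptions chosen for illustration.

import numpy as np
from scipy.linalg import solve_continuous_are

# Reduced pitch dynamics of Eq. (6) with placeholder derivatives
# (not the aircraft data used in the paper).
A = np.array([[-0.60, 65.0, 0.0],
              [-0.02, -0.90, 0.0],
              [0.0, 1.0, 0.0]])
B = np.array([[-5.0], [-3.0], [0.0]])

# Bryson's rule: weight each state and input by the inverse square of its
# maximum acceptable excursion; elevator limit +/-16 deg, assumed state limits.
x_max = np.array([5.0, 0.5, 0.2])            # w (m/s), q (rad/s), theta (rad)
u_max = np.array([np.deg2rad(16.0)])
Q = np.diag(1.0 / x_max ** 2)
R = np.diag(1.0 / u_max ** 2)

# Solve the continuous-time Riccati equation and build the state-feedback gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Closed-loop modes: wn = |lambda| and zeta = -Re(lambda)/|lambda|.
for lam in np.linalg.eigvals(A - B @ K):
    wn = abs(lam)
    print(np.round(lam, 3), round(float(wn), 3), round(float(-lam.real / wn), 3))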

4.2 Speed Controller
The speed controller was designed using the Affine Parameterization methodology (SISO8); the procedure is detailed in [15] and [19]. The speed controller is needed because the speed must be kept constant. The engine thrust perturbation τ is controlled by the throttle lever angle ε. For a turbo-jet engine such as that of the aircraft under study, the relationship between τ and ε is approximated by a first-order transfer function, as indicated in Eq. (7).

τ(s)/ε(s) = Kτ / (Tτ s + 1)   (7)

where the time constant Tτ is taken as 1 s, and the gain constant Kτ is determined from a test of a block that simulates a turbojet engine with a maximum thrust of 22240 N delivered by the two engines (Kτ = 13600), as indicated in Fig. 9.
8 SISO: Single-Input, Single-Output.


Fig. 9. Turbofan Engine System.
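The first-order thrust model of Eq. (7) with the quoted constants can be exercised directly, for example:

from scipy import signal

# First-order thrust-lag model of Eq. (7) with K_tau = 13600 and T_tau = 1 s.
engine = signal.TransferFunction([13600.0], [1.0, 1.0])

# Thrust response to a unit step in throttle lever angle; the final value
# approaches K_tau after a few time constants.
t, thrust = signal.step(engine)
print(round(float(thrust[-1]), 1))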

The longitudinal state equation augmented with the engine dynamics is the one indicated in Eq. (3); the state is then reduced to design the speed controller, since in this case θ has to be blocked, as indicated in Eq. (8).

\begin{bmatrix} \dot{u} \\ \dot{w} \\ \dot{\tau} \end{bmatrix} =
\begin{bmatrix} x_u & x_w & x_\tau \\ z_u & z_w & z_\tau \\ 0 & 0 & -1/T_\tau \end{bmatrix}
\begin{bmatrix} u \\ w \\ \tau \end{bmatrix} +
\begin{bmatrix} 0 \\ 0 \\ k_\tau / T_\tau \end{bmatrix} \varepsilon   (8)

For the speed controller a frequency of 0.5 rad/s is required, which corresponds to a settling time of 9 s. Thus, using the affine parameterization methodology, the desired complementary sensitivity function FQ(s) (the closed-loop dynamics) is defined so as to force integral action in the controller:

FQ(s) = 1 / (α2 s² + α1 s + 1)   (9)

Once the speed controller was designed (see Fig. 10), it was verified that it works correctly by means of simulation for a step reference for the speed.

Fig. 10. Block diagram of the speed controller.
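To see where the forced integral action comes from, the textbook affine-parameterization (Youla/IMC) construction can be worked out symbolically for a generic first-order plant with the target FQ(s) of Eq. (9); this is only a schematic illustration, since the paper applies the method to the full speed-loop model.

import sympy as sp

# Generic first-order plant G(s) = K/(T s + 1) and the target closed loop
# F_Q(s) of Eq. (9).
s, K, T, a1, a2 = sp.symbols('s K T alpha1 alpha2', positive=True)
G = K / (T * s + 1)
F_Q = 1 / (a2 * s**2 + a1 * s + 1)

# Q(s) = F_Q(s) / G(s) and the feedback controller C(s) = Q / (1 - G Q);
# the simplification exposes the pole at s = 0 (forced integral action).
Qp = sp.simplify(F_Q / G)
C = sp.factor(sp.simplify(Qp / (1 - G * Qp)))
print(C)

For this plant the construction yields C(s) = (T s + 1) / (K s (α2 s + α1)), whose pole at the origin provides the integral action mentioned above.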

4.3 Vertical Trajectory Controller
The vertical trajectory controller was also designed using the Affine Parameterization methodology (SISO). Considering that the vertical attitude controller is fast enough with respect to


the vertical trajectory controller, the vertical trajectory control system is designed; the model for the vertical trajectory controller is the one indicated in Eq. (10).

G011(s) = d(s)/θ(s) = u0 / s   (10)

For the vertical trajectory controller, a frequency of 0.3 rad/s is needed, which corresponds to a settling time of 14 s; there is also a lower limit for the frequency, given by the gusts to which the aircraft is exposed. Using the affine parameterization methodology, the desired complementary sensitivity function FQ(s) is defined as in Eq. (9). Once the vertical trajectory controller was designed (see Fig. 11), it was verified by simulation that it works correctly for a step reference of the deviation d.

Fig. 11. Block diagram of vertical trajectory controller.

5 Integration, Validation and Analysis of Results
The previously designed controllers were verified individually under ideal conditions; therefore, it was necessary to carry out an experimental validation under certain real operating conditions and verify that they are capable of interacting and achieving the desired control action in each situation. This was carried out using a dynamic autopilot simulation model (see Fig. 12) for the aircraft under study [20].

Fig. 12. Dynamic autopilot simulation model.

Each controller designed, with its respective architecture, was integrated into the FCC9 (see Fig. 12), where the controllers were implemented as indicated in Fig. 13 and Fig. 14.
9 FCC: Flight Control Computer.


Fig. 13. Integration of vertical attitude controller and speed controller.

Fig. 14. Integration of vertical trajectory controller.

Once the controllers were integrated into the real model of the autopilot of the aircraft [21], the vertical trajectory controller was tested in straight and level flight. For an initial deviation of d = 10 m, Fig. 15 shows a settling time of 23 s, whereas the design settling time was 15 s; this is due to the restrictions imposed by the vertical trajectory model,


but the response of the aircraft is acceptable since the separation between the vertical attitude controller and the vertical trajectory controller must be greater than a decade (see Fig. 16).

Fig. 15. Test of vertical trajectory controller, d = 10 m (range R (m) and deviation d (m) versus t (sec)).

Fig. 16. Test of vertical attitude controller, d = 10 m (speed u0 (m/s), pitch angle θ (°) with its reference, and elevator angle η versus t (sec)).

The vertical trajectory controller was then tested to perform glide slope coupling with ideal conditions (see Fig. 17 and Fig. 18). Figure 17 shows the range of the aircraft until reaching the runway and the error for the deviation while the aircraft engages the glide slope. Figure 18 shows how the speed remains constant until 28 s, when it is assumed that the aircraft reached the decision height and the flare control system starts to work


for the transition from the decision height to the runway. Thus, it is observed how the speed controller reduces the speed and the pitch angle increases as the aircraft touches down on the runway at 57 s.

Fig. 17. Test of vertical trajectory controller, glide slope coupling with ideal conditions (range R (m) and deviation d (m) versus t (sec)).

Fig. 18. Test of vertical attitude controller, glide slope coupling with ideal conditions (speed u0 (m/s), pitch angle θ (°) with its reference, and elevator angle η versus t (sec)).

Finally, the vertical trajectory controller was tested to perform the glide slope coupling with real conditions (see Fig. 19 and Fig. 20). Figure 19 shows the range of the aircraft until reaching the runway and the tracking error of the deviation; in this case there are more fluctuations and the response is not as clean as in the ideal case, due to the disturbances. Similarly, in Fig. 20, it can be seen how the speed remains almost constant


until second 20, when it is assumed that the aircraft has reached the decision height and the flare control system starts to work for the transition from the decision height to the runway. Thus, it is observed how the speed control reduces the speed and the pitch angle increases as the aircraft touches down on the runway at second 52; again, fluctuations due to the disturbances are observed.

Fig. 19. Test of vertical trajectory controller, glide slope coupling with real conditions (range R (m) and deviation d (m) versus t (sec)).

Fig. 20. Test of vertical attitude controller, glide slope coupling with real conditions (speed u0 (m/s), pitch angle θ (°) with its reference, and elevator angle η versus t (sec)).


6 Conclusions
Verifying the results obtained, the behavior of the autopilot complies with the general requirements and the proposed pre-design specifications; in addition, the closed loop in each case has an acceptable and robust performance. In the analysis of the results there is an offset between the pitch angle reference and the measured pitch angle; therefore, it would be necessary in the future to implement a state observer for the design of the vertical attitude (pitch) controller. In a pre-design, this offset does not have much relevance in the response of the aircraft. The design specifications do not contemplate the restrictions imposed by the models; therefore, it is necessary to adjust different parameters in the controllers once they are installed in the real model of the aircraft, so that the restrictions are absorbed in the robustness of the problem and the controllers respond better to the imposed requirements. Having forced integral action (poles at the origin) in the trajectory controllers guarantees that at low frequencies the gains tend to infinity and, therefore, that there is no steady-state error. In addition, in the design of the discrete-time controllers it is observed that terms that should cancel are not completely cancelled; this is due to the variation of the parameters, mainly the sampling time and the design frequency for each case, so a priori it would be a numerical problem rather than a design one.

References
1. Petrescu, R.V., Aversa, R., Akash, B.: History of aviation - a short review. J. Aircr. Spacecraft Technol. 1(1), 43–46 (2017)
2. Nelson, R.: Flight Stability and Automatic Control, 2nd edn. McGraw Hill, United States (1998)
3. Kügler, M.E., Heller, M., Holzapfel, F.: Automatic take-off and landing on the maiden flight of a novel fixed-wing UAV. In: 2018 Flight Testing Conference, p. 4275 (2018)
4. Ifqir, S., Combastel, C.: Multi-sensor data fusion for civil aircraft IRS/GPS/ILS integrated navigation system. In: 2021 European Control Conference (ECC), pp. 10–16 (2021)
5. Gonzalez, P., Boschetti, P., Cárdenas, E.: Design of a landing control system which considers dynamic ground effect for an unmanned airplane. In: 1st WSEAS International Conference on Aeronautical and Mechanical Engineering, pp. 143–148 (2013)
6. Akmeliawati, R., Mareels, I.: Nonlinear energy-based control method for aircraft automatic landing systems. IEEE Trans. Control Syst. Technol. 18(4), 871–884 (2009)
7. Lungu, R., Lungu, M.: Automatic landing system using neural networks and radio-technical subsystems. Chin. J. Aeronaut. 30(1), 399–411 (2017)
8. Rao, D., Go, T.: Automatic landing system design using sliding mode control. Aerosp. Sci. Technol. 32(1), 180–187 (2014)
9. Wahid, N., Rahmat, M.: Pitch control system using LQR and Fuzzy Logic Controller. In: 2010 IEEE Symposium on Industrial Electronics and Applications (ISIEA), pp. 389–394 (2010)
10. Vlk, J., Chudy, P., Prustomersky, M.: Light sport aircraft auto-land system. In: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC), pp. 1–10 (2019)
11. Aishwarya, C.: The instrument landing system (ILS) - a review. Int. J. Progressive Res. Sci. Eng. 3(03), 1–6 (2022)


12. Dudek, E., Kozłowski, M.: The concept of the instrument landing system – ILS continuity risk analysis method. In: Mikulski, J. (ed.) TST 2018. CCIS, vol. 897, pp. 305–319. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-97955-7_21
13. McLean, D.: Automatic Flight Control Systems. Prentice Hall International, UK (1990)
14. Nair, M.P., Harikumar, R.: Longitudinal dynamics control of UAV. In: 2015 International Conference on Control Communication & Computing India (ICCC), pp. 30–35 (2015)
15. Cook, M.: Flight Dynamics Principles, 2nd edn. Elsevier Ltd., UK (2007)
16. Blakelock, J.: Automatic Control of Aircraft and Missiles, 2nd edn. John Wiley & Sons Inc., Canada (1991)
17. Guardeño, R., López, M., Sánchez, V.: MIMO PID controller tuning method for quadrotor based on LQR/LQG theory. Robotics 8(2), 36 (2019)
18. Department of the Air Force: MIL-8785C. Air Force, United States (1980)
19. Bakolas, E.: Dynamic output feedback control of the Liouville equation for discrete-time SISO linear systems. IEEE Trans. Autom. Control 64(10), 4268–4275 (2019)
20. Ashish, T.: Modern Control Design with MATLAB and SIMULINK. John Wiley & Sons Ltd., India (2002)
21. Petkov, P., Slavov, T., Kralev, J.: Design of Embedded Robust Control Systems Using MATLAB®/Simulink®. Institution of Engineering and Technology, United Kingdom (2018)
22. SKYbrary (2021). https://skybrary.aero. Accessed 26 May 2022
23. IVAO (2022). https://mediawiki.ivao.aero. Accessed 30 May 2022

Management Innovation and Competitive Success in Peruvian Companies of the Manufacturing Sector

Rina A. Valencia-Durand1, Aleixandre Brian Duche-Pérez2(B), Cintya Yadira Vera-Revilla3, Olger Albino Gutiérrez-Aguilar1, Milena Ketty Jaime-Zavala3, and Anthony Medina Rivas Plata3

1 Universidad Nacional de San Agustín, 04001 Arequipa, Peru
2 Universidad Privada Norbert Wiener, 15046 Lima, Peru
[email protected]
3 Universidad Católica de Santa María, 04001 Arequipa, Peru

Abstract. Innovation seeks to establish organizational strategies to maximize the use of resources that contribute to the proper development of a company. This study identifies the relationship between management innovation and the competitive success of companies in the manufacturing sector in southern Peru. From the quantitative paradigm, of correlative type, a sample was taken, by means of a survey, of managers of 37 companies located in the manufacturing sector. Positive correlations were identified between business competitiveness with product innovation (r = .776), process innovation (r = .779), marketing innovation (r = .556) and organizational innovation with business competitive success (r = .605). It is concluded that the study variables achieve a positive and moderate correlation index (r = .783). Keywords: Business · Competitiveness · Success · Management · Manufacturing sector

1 Introduction

Innovation processes were usually considered a primary part of the industrial and technological sector, but in recent years their use has increased considerably in the commercial sector in order to achieve greater benefits and business development [1, 2]. Innovation presents significant advantages in the processes of design, manufacture, promotion and sale of products [3]. Therefore, companies have been incorporating innovation management processes in their different functional areas related to the design and development of products and services [4], advertising and business marketing, personnel management, and even export and international expansion [5]. The significant improvement in the use of technologies to achieve higher levels of competitiveness in the market has been emphasized as a business development objective [6, 7].

404

R. A. Valencia-Durand et al.

In the Argentine and Chilean cases, the identification of innovation inputs is one of the national priorities, derived from the formulation of public policies that strengthen business innovation efforts and seek to respond to the investment allocated to these activities. In fact, the effort of manufacturing companies in these countries to promote innovation is determined by internal and external aspects of each company, and it is relevant to establish what these aspects are in these economies. In Mexico, it has been observed that innovation depends on the entrepreneur's level of education, the size of the organization and the proactivity of the company. In the Peruvian case, it has been determined that the competitiveness of a country must consider factors such as strategic planning, production and operations, quality, marketing, accounting and finance, human resources, environmental management and information systems, complemented with external indicators based on the systemic competitiveness approach. According to the OSLO manual, innovation is essential for the growth of both production and productivity, and as the world economy develops, so does the innovation process [8]. The phenomenon of globalization has led to a significant increase in the need for access to more in-depth information on national and international markets, but also to greater competence and knowledge in the management of supply chains [9, 10]. Due to advances in technology and in the flow of information, knowledge is increasingly seen as a main determinant of economic growth and innovation [11, 12]. Innovation in the organization can be a prior and necessary condition for technological innovation: organizational innovations are not only a support factor for product and process innovation [13, 14]; they can considerably influence the results of companies. They can improve the quality and efficiency of work, favor the exchange of information and provide companies with a greater capacity to learn and use new knowledge and technologies [15], in addition to covering methods of commercialization or marketing. In short, the OSLO manual takes into account four relevant aspects of business innovation: product, process, marketing and organizational innovations [8]. In this context, the companies of the Arequipa Region, in Peru, are no strangers to the search for actions that improve their productivity and competitiveness, a concern that leads to verifying the degree of relationship that exists between business management innovation and competitive business success in medium and large companies of the manufacturing sector in Metropolitan Arequipa.

2 Materials and Methods

The study corresponds to a non-experimental, prospective, cross-sectional design, since the data were generated and examined at a given moment in order to establish the level of association between the variables. It is also analytical, since a process of analysis is carried out on the variables. Likewise, the level of research is relational, given the meaning and structure of the hypothesis and because two variables are handled in order to establish the degree of relationship or association between them.

Management Innovation and Competitive Success

405

Considering that a population is the set of all cases that meet certain specifications, the universe of our study was represented by companies of the manufacturing sector of the city of Arequipa. The study unit was made up of officials and/or executive-level personnel who met inclusion criteria such as: having worked in the company for at least one year, working directly in the institution, and being able to answer questions related to business management innovation and competitive business success. Considering relevant aspects that characterize medium and large companies, such as legal and fiscal formalization (up to date and registered with the Public Registries, Ministry of Production, Ministry of Labor, etc.), a filter was applied that yielded 37 companies, which in this case represent the universe or population. It was therefore not appropriate to work with a sample; instead, all of them were surveyed, which amounts to carrying out a census (see Table 1).

Table 1. Census structure.

Companies | Economic activity
1 | Production, processing and preservation of meat
1 | Forging, pressing, stamping and rolling of metals; powder metallurgy
1 | Garment manufacturing; except fur garments
3 | Leather tanning and dressing
1 | Manufacture of bakery products
1 | Manufacture of basic chemicals
1 | Manufacture of basic iron and steel
1 | Manufacture of fabrics and knitted and crocheted articles
1 | Manufacture of grain mill products
1 | Manufacture of machine tools
1 | Manufacture of metal products for structural use
2 | Manufacture of non-alcoholic beverages; mineral waters
1 | Manufacture of non-refractory ceramic products for non-structural use
1 | Manufacture of non-refractory clay and ceramic products for use
3 | Manufacture of other food products nec
1 | Manufacture of other non-metallic mineral products nec
1 | Manufacture of other rubber products
1 | Manufacture of other textile products nec
1 | Manufacture of other types of machinery for typical use
2 | Manufacture of pharmaceutical, medicinal and botanical products
1 | Manufacture of soaps and toilet detergents
1 | Manufacture of wood pulp, paper and cardboard
2 | Preparation and spinning of textile fibers; textile weaves
2 | Printing activities
2 | Processing and preservation of fruit, legumes and vegetables
1 | Production of animal feed
1 | Production of cocoa and chocolate and confectionery
1 | Other manufacturing industries

Given the type of research, the structured survey technique was chosen for the collection of field data on the relationship variables. The instrument was the questionnaire proposed by the authors, composed of five dimensions, seven indicators and thirty-four sub-indicators distributed across thirty-seven questions. The measurement scale of the variables in the data collection instrument was a Likert-type scale ranging from 1 to 20, where 1 represents "totally disagree" and 20 represents "totally agree". The characteristics to be evaluated and measured are the following: for the variable "business management innovation", the indicators were product innovation, process innovation, marketing innovation and organizational innovation; for the second variable, business competitive success, the indicators were technological position, product quality and worker training. The instrument passed the validity and reliability tests through the Cronbach's alpha coefficient (see Table 2).

Table 2. Reliability statistics.

Dimension | α | Number of elements
Product innovation | .756 | 4
Process innovation | .918 | 6
Marketing innovation | .903 | 9
Organization innovation | .935 | 5
Technological position | .814 | 3
Product quality | .660 | 4
Worker training | .959 | 3
Competitive success | .878 | 11
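As a reading aid only, the following minimal Python sketch shows how a Cronbach's alpha of the kind reported in Table 2 can be computed from item scores. The item responses in the example are hypothetical and are not the study data; the paper itself used SPSS for this calculation.

```python
# Minimal sketch: Cronbach's alpha for one questionnaire dimension.
# The scores below are invented Likert-type responses (scale 1-20), not the study data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = respondents, columns = items of one dimension."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical answers of six respondents to four "product innovation" items.
sample = np.array([
    [18, 17, 19, 18],
    [15, 14, 16, 15],
    [20, 19, 20, 19],
    [12, 13, 12, 14],
    [17, 18, 17, 16],
    [16, 15, 17, 16],
])
print(round(cronbach_alpha(sample), 3))
```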


The field work was carried out with the support of previously trained assistants and under the supervision of the authors. The surveys were applied in situ, at the premises of each of the companies, during the working day, seeking to be attended by the civil servant or executive-level personnel of the visited company. The collected information was subjected to a data coding process for the control variables and the relationship variables. The duly tabulated information allowed the formulation of frequency tables, descriptive statistical indicators and statistical correlation indicators. The statistical procedures were applied according to the type, design and level of research; information processing and tabulation were carried out in Excel and in the statistical package SPSS version 23. The results were handled at the descriptive level, taking into account the mean, median, standard deviation, variance, centiles, maximum and minimum values, and relative and accumulated frequencies. At the relational level, the degrees of association between the variables were determined using Kendall's Tau-b and Tau-c correlation indices, in order to establish the intensity or degree of relationship between the variables and dimensions. Regarding the statistical tests, the Cronbach's alpha coefficient was used to measure the reliability of the instrument; likewise, in the non-parametric testing of the relationship hypothesis, the Pearson chi-square statistic and the Kendall Tau-b correlation coefficient were used with a significance level of 5%. For this purpose, contingency (cross) tables were built in the statistical package SPSS version 23.0.
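As an illustration only, the sketch below reproduces these two tests (Kendall's tau-b and the Pearson chi-square on a cross table) with SciPy on hypothetical ordinal ratings; the study itself used SPSS, and the arrays below are invented for the example.

```python
# Minimal sketch: Kendall's tau-b and Pearson chi-square on hypothetical ordinal data.
import numpy as np
from scipy.stats import kendalltau, chi2_contingency

# Hypothetical ratings (1 = good, 2 = outstanding, 3 = excellent), not the study data.
innovation = np.array([3, 3, 2, 3, 1, 2, 3, 3, 2, 3, 3, 1])
success    = np.array([3, 3, 2, 3, 2, 2, 3, 3, 3, 3, 3, 1])

tau_b, p_tau = kendalltau(innovation, success)   # SciPy computes tau-b by default
print(f"tau-b = {tau_b:.3f}, p = {p_tau:.3f}")

# Build the 3x3 cross table and run the Pearson chi-square test on it.
table = np.zeros((3, 3), dtype=int)
for i, s in zip(innovation, success):
    table[i - 1, s - 1] += 1
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p_chi:.3f}")
```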

3 Results and Discussion

The results are summarized in Tables 3, 4, 5, 6 and 7.

Table 3. Sociodemographic variables of the population and study units.

Detail | Frequency | Percentage | Accumulated percentage

Study population
Years of operation
From 2 to 4 years | 2 | 5.4 | 5.4
From 4 to 6 years | 1 | 2.7 | 8.1
From 6 to 10 years | 3 | 8.1 | 16.2
More than 10 years | 31 | 83.8 | 100.0
Total | 37 | 100.0 |
Business sector
Textile | 5 | 13.5 | 13.5
Foods | 10 | 27.0 | 40.5
Metal mechanics | 2 | 5.4 | 45.9
Water | 1 | 2.7 | 48.6
Printing | 1 | 2.7 | 51.4
Pharmacy | 3 | 8.1 | 59.5
Building | 2 | 5.4 | 64.9
Others | 13 | 35.1 | 100.0
Total | 37 | 100.0 |

Study units
Respondent's age
25 to 35 years old | 1 | 2.7 | 2.7
From 35 to 50 years | 22 | 59.5 | 62.2
More than 50 years | 14 | 37.8 | 100.0
Total | 37 | 100.0 |
Gender of respondent
Feminine | 11 | 29.7 | 29.7
Male | 26 | 70.3 | 100.0
Academic degree of respondent
Secondary | 2 | 5.4 | 5.4
Higher | 7 | 18.9 | 24.3
Graduate | 28 | 75.7 | 100.0
Total | 37 | 100.0 |
Position held by the respondent
Manager | 21 | 56.8 | 56.8
Administrator | 16 | 43.2 | 100.0
Total | 37 | 100.0 |
Seniority in the respondent's position
From 2 to 4 years | 2 | 5.4 | 5.4
From 4 to 6 years | 6 | 16.2 | 21.6
From 6 to 10 years | 5 | 13.5 | 35.1
More than 10 years | 24 | 64.9 | 100.0
Total | 37 | 100.0 |

According to Table 3, 83.8% of the companies surveyed stated that they had been in the market for more than ten years, which suggests that the majority have a clear understanding of the dimensions of business management innovation and the factors of competitive business success evaluated in this research; that is, the answers to the questionnaire come from reliable primary sources, at least in terms of the respondents' experience.


Table 4. Relevant statistics of the dimensions and variables.

Percentile | IPd | IPc | IM | IO | IG | PT | CP | FT | ECE
25 | 15.88 | 16.50 | 15.11 | 14.90 | 15.39 | 15.50 | 16.00 | 14.67 | 15.74
50 | 18.25 | 17.50 | 16.22 | 16.00 | 16.84 | 16.67 | 17.00 | 16.67 | 17.03
75 | 18.75 | 19.08 | 17.78 | 17.40 | 18.14 | 18.00 | 18.25 | 18.00 | 17.75

Table 5. Contingency between BMI and BCS (cells show count and column percentage within each business competitive success group).

Product innovation | Good | Outstanding | Excellent | Total
Good | 0 (0.0%) | 1 (11.1%) | 0 (0.0%) | 1 (2.7%)
Outstanding | 2 (100.0%) | 6 (66.7%) | 1 (3.8%) | 9 (24.3%)
Excellent | 0 (0.0%) | 2 (22.2%) | 25 (96.2%) | 27 (73.0%)
Total | 2 (100.0%) | 9 (100.0%) | 26 (100.0%) | 37 (100.0%)

Process innovation | Good | Outstanding | Excellent | Total
Good | 1 (50.0%) | 3 (33.3%) | 0 (0.0%) | 4 (10.8%)
Outstanding | 1 (50.0%) | 3 (33.3%) | 0 (0.0%) | 4 (10.8%)
Excellent | 0 (0.0%) | 3 (33.3%) | 26 (100.0%) | 29 (78.4%)
Total | 2 (100.0%) | 9 (100.0%) | 26 (100.0%) | 37 (100.0%)

Marketing innovation | Good | Outstanding | Excellent | Total
Good | 2 (100.0%) | 2 (22.2%) | 0 (0.0%) | 4 (10.8%)
Outstanding | 0 (0.0%) | 5 (55.6%) | 8 (30.8%) | 13 (35.1%)
Excellent | 0 (0.0%) | 2 (22.2%) | 18 (69.2%) | 20 (54.1%)
Total | 2 (100.0%) | 9 (100.0%) | 26 (100.0%) | 37 (100.0%)

Organization innovation | Good | Outstanding | Excellent | Total
Insufficient | 1 (50.0%) | 3 (33.3%) | 0 (0.0%) | 4 (10.8%)
Good | 1 (50.0%) | 0 (0.0%) | 0 (0.0%) | 1 (2.7%)
Outstanding | 0 (0.0%) | 5 (55.6%) | 9 (34.6%) | 14 (37.8%)
Excellent | 0 (0.0%) | 1 (11.1%) | 17 (65.4%) | 18 (48.6%)
Total | 2 (100.0%) | 9 (100.0%) | 26 (100.0%) | 37 (100.0%)
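For readers who want to reproduce this kind of cross tabulation outside SPSS, the following minimal pandas sketch shows how a contingency table with column percentages, like Table 5, can be produced; the ratings in the example are invented and are not the study data.

```python
# Minimal sketch: counts and column percentages for a contingency table.
import pandas as pd

# Hypothetical ratings for a handful of companies (illustration only).
df = pd.DataFrame({
    "product_innovation": ["Excellent", "Outstanding", "Excellent", "Good",
                           "Excellent", "Outstanding", "Excellent", "Excellent"],
    "competitive_success": ["Excellent", "Outstanding", "Excellent", "Outstanding",
                            "Excellent", "Outstanding", "Excellent", "Good"],
})

# Raw counts with row/column totals.
counts = pd.crosstab(df["product_innovation"], df["competitive_success"], margins=True)

# Column percentages: each competitive-success group sums to 100%.
col_pct = pd.crosstab(df["product_innovation"], df["competitive_success"],
                      normalize="columns") * 100

print(counts)
print(col_pct.round(1))
```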

Table 6. Chi-square tests.

Test | Value | df | Asymptotic sig. (2-sided)

Product innovation and business competitive success
Pearson chi-square | 25.487 | 4 | .000
Likelihood ratio | 25.929 | 4 | .000
Linear-by-linear association | 19.696 | 1 | .000
N of valid cases | 37 | |

Process innovation and business competitive success
Pearson chi-square | 25.198 | 4 | .000
Likelihood ratio | 27.176 | 4 | .000
Linear-by-linear association | 20.905 | 1 | .000
N of valid cases | 37 | |

Marketing innovation and business competitive success
Pearson chi-square | 24.399 | 4 | .000
Likelihood ratio | 19.593 | 4 | .000
Linear-by-linear association | 15.694 | 1 | .000
N of valid cases | 37 | |

Organizational innovation and business competitive success
Pearson chi-square | 34.026 | 6 | .000
Likelihood ratio | 24.992 | 6 | .000
Linear-by-linear association | 17.701 | 1 | .000
N of valid cases | 37 | |

Table 7. Symmetric measures of business management innovation and business competitive success.

Measure | Value | Asymptotic std. error (a) | Approximate T (b) | Approximate sig.

Product innovation and business competitive success
Ordinal by ordinal: Kendall's Tau-b | .776 | .092 | 4.836 | .000
Ordinal by ordinal: Spearman correlation | .807 | .098 | 8.084 | .000 (c)
Interval by interval: Pearson's R | .740 | .075 | 6.502 | .000 (c)
N of valid cases: 37

Process innovation and business competitive success
Ordinal by ordinal: Kendall's Tau-b | .779 | .078 | 4.291 | .000
Ordinal by ordinal: Spearman correlation | .817 | .084 | 8.384 | .000 (c)
Interval by interval: Pearson's R | .762 | .073 | 6.962 | .000 (c)
N of valid cases: 37

Marketing innovation and business competitive success
Ordinal by ordinal: Kendall's Tau-b | .556 | .125 | 3.445 | .001
Ordinal by ordinal: Spearman correlation | .577 | .133 | 4.184 | .000 (c)
Interval by interval: Pearson's R | .660 | .109 | 5.201 | .000 (c)
N of valid cases: 37

Organizational innovation and business competitive success
Ordinal by ordinal: Kendall's Tau-b | .605 | .096 | 4.352 | .000
Ordinal by ordinal: Kendall's Tau-c | .471 | .108 | 4.352 | .000
Ordinal by ordinal: Spearman correlation | .641 | .108 | 4.946 | .000 (c)
Interval by interval: Pearson's R | .701 | .076 | 5.818 | .000 (c)
N of valid cases: 37

Another important aspect that serves the analysis of the business competitive success ratings is the set of control variables, such as the category to which the surveyed companies belong, since it allows comparisons of the results according to the operating category. It can be seen that the companies that participated the most in the survey belong to the food and textile sectors; due to the diversity of items in the market, the remainder had to be grouped in the category "others". The control variable referring to the age of the person who answered the survey is also important, together with their level of education. As can be seen, 59.5% belong to the age range of 35 to 50 years and 37.8% are over 50, which for the most part means that they have sufficient experience in business management and, therefore, that their responses are more consistent. Given that the surveys were applied to owners and/or top-level executives in each of the companies considered in the census, the tabulation by gender shows that most respondents are men, although these important positions are also held by women, who represent 29.7% of those surveyed, a not insignificant percentage.


Undoubtedly, when the source of information corresponds to people with higher education and/or postgraduate studies, the answers can be considered from a broader business and academic perspective, which contributes to the consistency of the work [3, 8, 12]. The vast majority of the people surveyed have higher education or postgraduate studies (18.9% and 75.7%, respectively), a control variable that is useful in the analysis of cross results. It can also be seen that the positions held are manager and administrator, with a participation of 56.8% and 43.2% respectively, which means that the answers come from workers of the highest rank and that their appreciations therefore rest on business rigor [1, 7, 9, 11]. It is important to obtain information preferably from people with greater knowledge and experience in the business field, given the level of veracity and seriousness of the answers; in the present research, the surveys were mostly answered by workers or executives with more than 10 years in the companies surveyed (64.9%).

In general terms, according to the dimensions and factors of the business management innovation and business competitive success variables shown in Table 4, some characteristics can be extracted: all the dimensions and factors show a positive assessment, since the mean is located very close to the median, and the 25th, 50th and 75th percentiles denote an almost uniform score, with small differences in the maximum range between them [4, 14]. These results served to determine the rating of each component, shown below. It is also worth noting that in none of the dimensions and factors, including the results variable, did the respondents express extreme positive or negative evaluations, except for the product quality factor, whose range is lower than that of the other dimensions or factors [5, 6, 9, 15].

From the results for the Product Innovation dimension, it can be seen that in general the companies are significantly concerned about the innovation of their products and/or services; as can be seen in the table, the most frequent rating is "excellent" with 73%, followed by "outstanding" with 24.3%.

Process innovation occurs with the introduction of a new, or significantly improved, production or distribution process [13]. From the results for the Process Innovation dimension, the companies again show significant concern about the innovation of their products and/or services; the most frequent rating is "excellent" with 78.4%, followed by "outstanding" and "good" with 10.8% each.

Marketing innovation is the application of a new marketing method that involves significant changes in the design or packaging of a product, its positioning, its promotion, or its pricing [2, 7, 15]. From the results for the Marketing Innovation dimension, the most frequent rating is "excellent" with 54.1%, followed by "outstanding" with 35.1%.


Organizational innovation occurs with the introduction of a new organizational method in the company's practices, workplace organization or external relations [4, 7, 14, 15]. From the results for the Organizational Innovation dimension, the most frequent rating is "excellent" with 48.6%, followed by "outstanding" with 37.8%.

As is evident, the dimensions that make up the business management innovation variable obtained ratings ranging from outstanding to excellent, and consequently the respondents' rating of the variable itself has the same connotation [7, 9, 12]: 73% of the respondents rated it as excellent and 18.9% as outstanding, results that indicate that the companies are concerned with carrying out actions related to business management innovation.

Regarding business competitive success, most of the 37 companies showed that their concerns center on the technological aspect, with "excellent" standing out at 59.5%, followed by "outstanding" with 32.4%, an indicator that companies in this sector really consider the technological aspect vital to being competitive [1, 8, 11, 12]. Regarding product quality in the medium and large manufacturing companies of Metropolitan Arequipa, a large part of the companies surveyed consider it a fundamental element for differentiation and competitive business success [6]; as can be seen in the table above, the responses concentrate on the "excellent" rating, with 73%. In verifying the behavior of companies with respect to another component of business competitive success, personnel training, and according to the preceding rating table, more than 90% of the companies surveyed agree that they care about the human factor in terms of training policies and budget allocation for education and training, as well as the promotion of teamwork and incentives for proactivity.

Business competitive success has a changing and dynamic behavior over time and is sensitive to changes in the environment, in the habits and preferences of customers, and in the behavior of competitors, among other aspects. Achieving it involves factors related to innovation, product and/or service quality, technological innovation and permanent staff training, among others [3, 7, 8]; competitive success is not an end but a means to consolidate the company in the market [3, 6, 9, 12], and it does not occur by itself but depends on the fulfillment of other factors such as those that were evaluated and that make up the variable [14].

In this framework, the results of the preceding table referring to business competitive success confirm the results and ratings of its determining factors [4, 7, 9]. We can therefore state that, insofar as the companies studied are concerned about the factors related to innovation, staff training, technology and product quality, they would have the conditions to achieve business competitive success [6, 7, 13], at least as of the date of the cross-sectional study.


The table of results according to control variables provides important information for the use of this report by companies, since the perception of the competitive success of the companies tends toward the "outstanding" and "excellent" ratings, with a preference for "outstanding", from which we can deduce that, in general, the staff appreciates that the company cares about aspects related to the permanent search for competitiveness.

4 Conclusion

The results shown in the preceding tables, whose ordinal measurement scale goes from poor to excellent, show that both the business management innovation variable and the business competitive success variable fall in the excellent rating according to the control variables considered for this case, namely: years of operation, category of the company, age of the respondent, gender, academic degree, position held and seniority. That is, in their search for business management innovation and business competitive success, the companies deploy different actions, as evidenced in the cross-rating tables.

Finally, having carried out the hypothesis tests, we conclude that there is a high correlation between business management innovation and business competitive success in the companies of the manufacturing sector, with a significant correlation coefficient (r = 0.783). Likewise, there is a significant correlation between the product innovation dimension and business competitive success (r = 0.776), a significant correlation between the process innovation dimension and business competitive success (r = 0.779), a moderate correlation between the marketing innovation dimension and business competitive success (r = 0.556), and a significant correlation between the organizational innovation dimension and business competitive success (r = 0.605).

Likewise, the rating profiles for business management innovation fall between "outstanding" and "excellent", consistent with the ratings that the respondents gave to each of its dimensions: product innovation, process innovation, marketing innovation and organization innovation. The rating profiles for business competitive success show similar results, with most of the ratings in the excellent range.

References

1. Lee, S.-B.: An analysis on success factors and importance of six sigma innovation in small and medium venture companies. J. Korea Acad.-Ind. Cooperation Soc. 19(5), 527–536 (2018)
2. Law, K., Lau, M.Y., Ip, W.H.: What drives success in product innovation? Empirical evidence in high-tech and low-tech manufacturers in China. Int. J. Technol. Manage. 79(2), 165–198 (2019)
3. Ritter, T., Gemunden, H.G.: The impact of a company's business strategy on its technological competence, network competence and innovation success. J. Bus. Res. 57(5), 548–556 (2004)
4. Skvirskaja, V.: 'Russian merchant' legacies in post-Soviet trade with China: moral economy, economic success and business innovation in Yiwu. Hist. Anthropol. 29, S48–S66 (2018)
5. Schuh, G., Arnoscht, J., Rudolf, S., Riesener, M., Wissel, S.: Lean innovation – critical success factors for medium-sized pharmaceutical companies. Pharmazeutische Industrie 75(1), 131-+ (2013)
6. Livingstone, P.: A strategic balance: commercial success for high-technology companies never occurs without a strong commitment to research and innovation. R&D Mag. 55(6), 12–16 (2013)
7. Marullo, C., Casprini, E., Di Minin, A., Piccaluga, A.: 'Ready for Take-off': how open innovation influences startup success. Creativity Innov. Manage. 27(4), 476–488 (2018)
8. OSLO: Manual de OSLO. Directrices para la recogida e interpretación de información relativa a innovación. OECD y Eurostat, Madrid, España (2005)
9. Kant, A.: Innovation as a success factor for the companies in the paper industry. Wochenblatt Fur Papierfabrikation 145(12), 790-+ (2017)
10. Braga, R.M.: The success of the innovation process is in the integration of academy, company and government. Humanidades Inovacao 5(2), 9–21 (2018)
11. Demirdogen, G., Isik, Z.: Effect of internal capabilities on success of construction company innovation and technology transfer. Tehnicki Vjesnik-Technical Gazette 23(6), 1763–1770 (2016)
12. Hsiao, Y.-C., Hsu, Z.-X.: Firm-specific advantages-product innovation capability complementarities and innovation success: a core competency approach. Technol. Soc. 55, 78–84 (2018)
13. Dickinson, S.L.J.: Fear of de-facto price controls forcing cuts in biotech innovation, officials say – with health-care reform proposals threatening financial prospects, firms are shelving projects, laying off researchers. Scientist 8(7), 1–000 (1994)
14. Garcia-Muina, F.E., Pelechano-Barahona, E., Navas-Lopez, J.E.: Knowledge codification and technological innovation success: empirical evidence from Spanish biotech companies. Technol. Forecast. Soc. Chang. 76(1), 141–153 (2009)
15. Zhou, M., Leenders, M.A.A.M., Cong, L.M.: Ownership in the virtual world and the implications for long-term user innovation success. Technovation 78, 56–65 (2018)

Comparative Study of Accounting and Management Perceptions of the Usefulness of Financial Information in Small and Medium-Sized Timber Companies in Colombia

María del Pilar Corredor García(B), Natalia Murillo Gallego, and Jasleidy Astrid Prada Segura

Corporación Universitaria Minuto de Dios, Bogota D.C., Colombia
[email protected]

Abstract. This article establishes the managerial and accounting perceptions regarding the implementation of International Financial Reporting Standards (IFRS) in lumber SMEs in the municipality of Madrid, Cundinamarca, in order to determine the benefits in organizational and market competitiveness offered by the implementation of international standards, as well as their contribution to the usefulness of accounting information. Two SMEs of the timber sector are taken as objects of study, which are in different phases of IFRS implementation: one has implemented the standards in full and the second is in an initial evaluation phase prior to implementing IFRS. This objective is developed through mixed research with a qualitative approach, in which instruments such as case studies and semi-structured interviews are applied to identify the benefits in organizational and market competitiveness provided by the adoption of IFRS in wood SMEs.

Keywords: Competitiveness · Perceptions · Usefulness of accounting information

1 Introduction

One of the effects of globalization has been the need to achieve uniformity in the accounting and financial system worldwide; this resulted in the emergence of the International Financial Reporting Standards in the mid-eighties, all with the aim of achieving a single business language that would show the true economic situation of each organization and each country, establishing the requirements for recognition, measurement and presentation of financial information and achieving homologation and consistency in the issues of interest to accounting professionals. The adoption of IFRS offers an opportunity to improve the financial function of organizations through greater consistency in accounting policies, obtaining potential benefits of greater transparency, increased comparability and improved efficiency.


Through the documentary review process, it is established that companies that achieve the complete implementation of international accounting standards obtain greater competitiveness in important markets compared with those that do not yet apply such standards. Law 1314 of 2009 regulates the accounting and financial information principles and standards in Colombia. The purpose of this law is to enforce the process of convergence of local standards to international standards by organizations with respect to their accounting information systems, for which the National Government entrusted the Technical Council of Public Accounting (CTCP) as the main body of this process.

Latin America and other Caribbean countries stand out because their economies are sustained by small and medium-sized organizations, so the adoption of IFRS in Colombia has not been an easy process for many: according to Confecámaras (2019) [1], 99.58% of the Colombian economy is made up of SMEs, which are mostly family owned. This hinders the transformation of the accounting system due to conflicts of interest, insufficient capital, differences in decision making and limited forms of financing, among other problem factors, which make this process be perceived as a real challenge. It has also been complex for many accounting professionals, since, as Castro (2016) [2] notes, accountants mostly cling to previous practices and regulations, which further delays the transition, even though the accountant, together with senior management, is the protagonist of this process of standardization of accounting and financial information in organizations.

The study of management perceptions regarding the implementation of IFRS has been analyzed in recent years by accountants such as Correa (2019) [3], who, based on a case study, states that senior management perceives the adoption of IFRS as a responsibility of the public accountant, aimed at complying with legal requirements, and not as an opportunity for organizational growth. The purpose of this research is to compare two wood SMEs with respect to the accounting and management perceptions of the usefulness of the information before and after the implementation of IFRS, through interviews with the personnel of the two SMEs [4], supported by a documentary review of similar cases that allow the topic to be deepened and supported.

1.1 Problem Definition

IFRS are a set of international technical standards that determine the pattern that companies must follow when preparing and presenting their financial information; their objective is that each report describes the financial performance of the company in a standardized manner and can be understood by any professional in the world involved in economics and finance. With Law 1314 of 2009, Colombia gave way to the adoption of the International Financial Reporting Standards in response to globalization and the standardization of accounting language. IFRS have been perceived as a challenge for SMEs, since most of them are family owned and their only financing options are banking and private capital, which limits the transition to and complete adoption of these standards, since this implies a greater investment to comply with the requirements imposed by the norms.


According to data from Confecámaras (2019) [5] and ECLAC (2018) [6], the Colombian economy is composed of 99.58% MSMEs, which means that most companies in the country perceive standardization as a challenge, as it brings great responsibility and a series of important changes in the structure of the accounting and financial system. Therefore, it is important to establish: what are the differences in the perceptions of accountants and businessmen regarding the implementation of IFRS in two small timber organizations in Madrid, Cundinamarca? And what benefits does the implementation of IFRS bring to small timber organizations in terms of the usefulness of information, competitiveness and the opening of new markets, identifying the main factors why one of these SMEs has not yet implemented these international standards?

2 Methodology

2.1 Type of Research and Approach

The comparative research method is used, which consists of observing two or more objects and addressing each of the factors that provide important information for the study, seeking to estimate the relationships and differences between the objects of study; it allows obtaining either qualitative or quantitative information for the analysis of each case (Piovani 2017) [7]. This frames the problem, since it allows observing the perception of the usefulness of the information when implementing IFRS from the accounting and administrative perspectives of the selected timber SMEs, identifying the causes and effects brought about by the standardization of the accounting system. Comparative research is often used as an antecedent to research designs based on mixed (qualitative and quantitative) studies, which are characterized by giving depth to the analysis and, through this combination, allow a better understanding in an interactive and theoretical way [8]. The research approach is qualitative; data are obtained through documentary review and interviews as collection instruments [9]. The scope is descriptive: it specifies an organizational process resulting from the globalization and standardization of accounting information, associated with the perception of each actor involved in the process.

2.2 Collection Instruments

The personal interview is generally a conversation between two people (the interviewer and the interviewee). The questions can be recorded on a form called a questionnaire, or a recorder can be used to capture the data obtained; when the interview and the questionnaire are applied in person, the technique is called face to face [10]. The information will be collected through semi-structured interviews, constructed according to the profile of the interviewee and the knowledge they may have regarding International Financial Reporting Standards, considering criteria such as the position they hold in the organization and their training, thus ensuring the objectivity, accuracy and quality of the information collected.


In this comparative case, the target population consists of two woodworking SMEs in the municipality of Madrid, Cundinamarca. Madrid has three organizations dedicated to woodworking, but on this occasion the criterion for choosing the two SMEs is that one has fully implemented the International Financial Reporting Standards and the other is currently in the initial evaluation phase prior to implementing them. This aspect is very important for a comparative case, since the companies are in different stages of the IFRS adoption process [11], which makes it easy to identify the difference in accounting and management perceptions of the usefulness of the information before and after standardization.

3 Background

The transition of SMEs in Colombia to the International Financial Reporting Standards began in 2009, when a series of actions was launched to comply with the provisions of Law 1314 of that year, which proposes international standards for accounting and financial information and defines three groups of organizations (Botero et al. (2018) [12]), with Groups II and III being the most representative, since they are composed of the MSMEs that represent 99% of Colombian companies and for which implementation could be more traumatic, mainly due to their own characteristics and the scope of the standards defined for them. This, added to the knowledge derived from other research, shows that the implementation process in these companies has not been the most favorable, which responds to diverse causes such as lack of infrastructure, resources or interest on the part of the administration, as well as the incomplete response that the Superintendencies and the Central Board of Accountants (JCC), the oversight authorities defined in Law 1314, have given to their functions; if these functions had been strictly observed, compliance and the quality of the process established by law for the companies could be better guaranteed (pp. 133–134) [13].

The implementation of IFRS has brought a series of effects on organizations in Colombia, and for this reason there are currently small and medium-sized companies that do not have a financial and accounting system standardized according to what is established in the international standards. One of the main problems faced by SMEs in Colombia is a tax and fiscal vacuum, and in most cases a financial one, which does not allow the complete migration to the International Financial Reporting Standards and makes it complex to comply with all the requirements of the standard. Rodríguez (2017) [14] states that during this migratory process, which SMEs must undertake to comply with legal requirements, the successes and failures arising from the gap between the fiscal and tax area, which has not yet adopted IFRS, and the financial branch, which already has, must be analyzed, taking into account that Colombia has handled the accounting system of Decree 2649 of 1993, which responds more to tax and fiscal requirements than to those properly related to the financial situation [15] and handles different processes and functionalities. Companies must therefore be especially careful, since the IFRS for SMEs in Colombia must work hand in hand with the accounting system of Decree 2649 of 1993 for due compliance with the different tax and fiscal regulations that small and medium-sized companies must observe (p. 8) [16].


The effects brought by the adoption of International Financial Reporting Standards have appeared both in the private and the public sector, "in the financial magnitudes and procedural changes derived from the implementation of this type of accounting innovations". León (2018) [17] mentions that the pressure for the implementation of international accounting models in Colombian organizations in the last two decades has been significant. In that framework, different actors have played a relevant role in making such models penetrate both the private and public sectors. National states have gradually ceded their capacity to regulate some elements of business life, so that the regulation issued by organizations with international links is becoming more evident (p. 91).

4 Theoretical Framework

4.1 Competitiveness vis-à-vis IFRS

The IFRS implementation process worldwide is seen as a door for organizations to become competitive abroad, as stated in the finance section of EL TIEMPO, where it is noted that companies at the international level need "to manage loans with foreign banks and be more attractive to foreign investors"; that is, becoming more competitive is a great advantage that companies can achieve by following the International Financial Reporting Standards [18], a favorable aspect for entering the most important markets in the world and thus enriching the economy of each country. All of this is linked to the possibility of providing a single financial language at the international level, which allows all accounting professionals to access and understand the information presented by each organization in the financial statements, so that decisions can be made more easily, based on their own analysis, as stated by [19]. On the other hand, when making the decision to implement the International Financial Reporting Standards, businessmen must be clear that this will bring great changes, not only at the accounting level, but also in the technological platforms and management systems implemented in the organization [20]. In order to carry out an optimal IFRS implementation process, the people in charge of this change must have the capacity, knowledge and experience in international regulations and methodology, as well as broad knowledge of the different areas of the company, so that they can easily identify the possible effects that this process will bring to each of them and plan strategies to mitigate the impact and risks that standardization may entail.

4.2 Implementation of IFRS in Latin America

The adoption of IFRS has advanced steadily since the creation of the IASCF (International Accounting Standards Committee Foundation), as globalization and investment in the international market have made it necessary to standardize financial and accounting information so that it is understandable to foreign investors, making negotiations between countries easier, making it possible to compare information and ensuring its transparency. Countries worldwide have seen the need to implement IFRS.


Latin American countries started their adoption process in 2000 in a favorable way. During those years, the need arose to generate standards that consider the needs of SMEs and emerging economies, which represent more than 90% of the organizations in developing countries according to data from [21]. This points to the need to generate educational programs to facilitate the study and application of IFRS in small and medium-sized companies and, in this way, it was determined that it is essential to build strong relationships with national bodies to promote the convergence of national standards with IFRS. However, in some countries these processes were quite complex, especially due to the deep conceptual and structural change of the organization caused by the adoption of IFRS, as presented by [22], and the high investment that this change consequently brings, where the focus is on the usefulness of information, international understanding and the standardization of the financial language in order to be more competitive in world markets.

4.3 Causes and Effects of IFRS Implementation in Small and Medium-Sized Timber Companies in Colombia

During the process of adopting the International Financial Reporting Standards in Colombia, different situations have arisen for companies, since the change from Colombian regulations to international regulations has been an extensive process: more than 90% of the Colombian economy is made up of SMEs, many of which lack the capital, structure, willingness and capacity to adapt to the changes that this standardization requires. According to [23], the adaptation to IFRS in SMEs has brought more disadvantages than advantages. The disadvantages include the following:

a. It requires a great deal of work to bring all the concepts and contents of IFRS into a national format.
b. Financial statements prepared under national standards adapted to IFRS can never carry an explicit and unreserved statement that IFRS have been applied in full.
c. If the company must present its financial statements under an international approach, it must prepare two sets of statements or make a reconciliation to IFRS.
d. It is very difficult to keep up with the pace at which the IASB generates standards so as not to fall behind in the adaptation process (p. 20).

But as [24] states, the adoption of IFRS does not only have negative effects on Colombian companies, since this transition also has advantages:

a. Becoming more attractive to large foreign investors, allowing national products to reach other countries.
b. Training can be provided to all people related to financial information, so that IFRS are properly applied.
c. Practical exercises, answers to frequently asked questions and other elements can be developed to support entities and auditors.
d. The financial statements prepared in accordance with these national standards (which adopted IFRS) can be sent directly abroad to any user who requires them.


e. Competitiveness with other industries in the same sector increases since, sharing the same accounting and financial language, companies can position themselves in important international markets.

For [25], in the convergence from local to international standards, the main aspects to consider in SMEs, and in this case in those of the timber sector, are:

1. Consider whether the company has the capital to cover the investment required for this change, since the changes are not only in accounting but also structural, technological and training related, and require a substantial investment.
2. Redefine accounting policies, a task that senior management should oversee.
3. Define a strategic IFRS implementation plan that mitigates risks in each area of the company.
4. Evaluate the technological platforms used, in order to establish accounting and financial information systems that allow a flow of reports according to the needs of the company.
5. Appoint leaders with ample capacity and experience in local and international regulations to manage the IFRS project.
6. Follow up on the reports and commitments of each leader, seeking the commitment of all collaborators.
7. Train and coach all personnel involved in the implementation of the international standard, so that a joint commitment is acquired that allows the process to be carried out as established in the standard.

5 Results

With the application of the collection instruments, it is evident that, according to the accounting and management perceptions, the adoption of the International Financial Reporting Standards has made the role of the public accountant more relevant in organizations, imposing new challenges not only in personal but also in professional matters: accountants must stay updated in accounting and tax matters, which allows them to adapt to changes more easily, apply the standards optimally and completely, and participate as fundamental members in organizational decision making. With the information collected through the interviews, a certain degree of apathy has been observed on the part of top management regarding different aspects brought about by the implementation of IFRS, which they perceive as an unfounded obligation that entails unnecessary costs rather than as an opportunity for improvement. According to the points of view of the personnel interviewed, not only top management but also accountants, assistants and tax auditors, there is a high coincidence in the negative opinion regarding the effects that IFRS have brought to the organization and the profession; that is, they consider on different occasions that Decree 2649 of 1993 provided better conditions for the company and for professionals, because with the entry into force of the international regulations they have been forced to make constant updates in order to stay at the forefront of the standard and remain competent when rendering their services (Table 1 and Table 2).

Table 1. Management perceptions – Own source.

Criteria | SME 1 (IFRS implemented) | SME 2 (IFRS not implemented)
IFRS have already been implemented | Yes | No
IFRS contribute to the competitive development of the organization | Yes | No
Brings benefits or unnecessary costs | Benefits | Unnecessary costs
Prior to implementation, they had quality standards in place | Yes | Partially
With the implementation of IFRS, further accounting advice is required | Yes | Yes
The implementation of IFRS brings additional costs and expenses | Yes | Yes
Advantages of IFRS implementation | Increased competitiveness, opening of new markets and business, financing and growth opportunities | Increased competitiveness, opening of new markets and financing and growth opportunities
The role of the public accountant with the implementation of IFRS | It has always been very important for the organization | It has become increasingly important for the organization
Facilitating access to financing through transparent and reliable information | Partially | No
IFRS are necessary in the opening of international businesses | Yes | Yes


Table 2. Perceptions of accountants – Own source.

Criteria | SME 1 (IFRS implemented) | SME 2 (IFRS not implemented)
Have they required further updating? | Yes | Yes
Do you think that with the implementation of the International Financial Reporting Standards further accounting advice is required? | Yes | Yes
Do you believe that the implementation of International Financial Reporting Standards has an impact on the competitive development of the organization? | No | Yes
What is your perception of the organization's financial performance before and after the implementation of the International Financial Reporting Standards? | Profitability has improved in the organization | They have not been implemented, but I believe it would decrease the profitability of the organization
Do you think that the implementation of IFRS generates additional costs and expenses for the organization? | Yes | Yes
Do you consider that the accounting profession has become more important since the implementation of International Financial Reporting Standards? | Yes | Yes
What advantages do you think the implementation of IFRS brings to the profession? | More comprehensive and analytical training; greater capacity for interpretation of the information | More comprehensive and analytical training; greater ability to interpret information
Does it facilitate access to credit? | No | No
Do you consider that in Colombia the tax bases differ from the accounting bases and lead organizations to keep double accounting, as there is no adequate reconciliation? | Yes | Yes
How do you identify the role of the accountant within the organization before and after the implementation of International Financial Reporting Standards? | It has become more relevant in the organization's decision making process with the entry of IFRS | The role of the public accountant has not become more relevant
Do you believe that the International Financial Reporting Standards were necessary in Colombia, or should Decree 2649 have been continued? | They were needed in Colombia | They were not necessary in Colombia

6 Conclusions and Discussion

The results collected largely confirm the theories initially presented regarding the implementation of the International Financial Reporting Standards. Since the entry into force of the international standards, organizations have had to invest more heavily in training accounting personnel and in modifying their processes in order to meet the objective of implementing IFRS. The implementation of this regulation has not been an easy process, mainly for SMEs, a point on which the interviewees agree. As stated by [26], the process has not been favorable for organizations, especially small companies, because they lack resources, infrastructure, trained personnel and willingness, among other things; those who most point to the lack of resources and sustainability are the personnel of the organization that has not implemented IFRS, and in particular its manager.

SMEs in the timber sector of the municipality of Madrid have not experienced gradual progress in implementing the International Financial Reporting Standards, owing to their dependence on knowledge, limited resources and the unwillingness of management to implement systems and processes that would facilitate the change, which generates delays and non-compliance in the presentation of information. This is evidenced in the report presented in July 2017 by the Superintendence of Corporations, with a cut-off for 2016, which shows that more


than 50% of SMEs in Colombia comply with the requirement to send the information, although there are representative percentages of both extraordinary submissions (19%) and non-compliance with this obligation (16%), thus confirming the conclusions presented by [27]. This study highlights important aspects of the IFRS implementation process for SMEs in the timber sector of the municipality of Madrid. First, the convergence generates several effects: one is the lack of commitment and responsibility of the administration in this process, which can be read as institutional resistance; another is the lack of training of those responsible for generating and presenting the information. The lack of awareness of SME management, not only in the timber sector but in general, hinders the convergence process in every sense, because managers do not consider important the investment in personnel training, expert support, use of technology and other resources necessary to achieve adequate standardization, even though this investment could minimize the difficulty SMEs experience in implementing IFRS. Some SME entrepreneurs treat this investment as an expense, weigh the amounts to be invested, oppose the change, and ignore the added value that the adoption may generate for the organization.

References 1. Albarracín Muñóz, M.E.: Financial risk: a qualitative approach to the interior of MSMEs in Colombia. Aglala 139–160 (2017) 2. Arrubla, M.: Finance and financial education in family businesses SMEs. Revista de Investigación de la escuela de administración y mercadotecnia del Quindío, pp. 99–118 (2016) 3. Berrío, G.: IFRS, a door to be competitive abroad. Portafolio (2013) 4. Bonilla, J.R.: Theoretical Trends that have been developed in the Literature on “Creative Accounting.” Corporación Universitaria Minuto de Dios, Bogotá (2018) 5. Botero, A., Marulanda, C., Álvarez, M., Muñoz, L.: IFRS implementation process in Colombia: an approach to the oversight authorities defined in Law 1314 of 2009. Contaduría Universidad de Antioquia; Medellín, pp. 131–162 (2018) 6. Cannon, O.A.: Analysis of the Perception on Economic Variations in the Adoption of IFRS in Residential use Coproperties of the horizontal property regime in sector 2, 3and 4 in Bogotá, pp. 3–28. University of Bogotá Jorge Tadeo Lozano (2018) 7. Castillo, C.: IFRS, a door to be competitive abroad. PORTAFOLIO (2013) 8. Castro. J.: The Perception of Colombian Medium-Sized Entrepreneurs in the City of Bogota vis-à-vis the Adaptation of International Financial Reporting Standards (IFRS), pp. 10–76. Universidad Militar Nueva Granada (2016) 9. Correa. A.: IFRS: An Investment or an ExpensE? Universidad Militar Nueva Granada, pp. 3– 15 (2019) 10. Flores, S.O.: Competitiveness of family firms. Interscience 236–241 (2018) 11. Fuster Antolín, Á.: Creative accounting: theoretical aspects. MADRID: TFG Area: Analysis of Financial Information (2015) 12. Gamboa, C.A: Apuntes Sobre Investigación Formativa. S.D., Ibagué (2013) 13. García Carvajal, S.: Qualitative aspects in SMEs and the new managerial challenges facing IFRS. Academia y Virtualidad Magazine; Bogotá, pp. 108–120(2016) 14. GBP GROUP: The adoption of IFRS in Colombia. GLOBAL BUSINESS PARTNER (2018)


15. Gil, J.J.: Adoption of international financial reporting standards (IFRS) in latin America. Legis Int. J. Account. Auditing 38, 13–66 (2009) 16. Henao, D.J.: Convergence to IFRS in Colombia, Regulation, and Perspectives, pp. 272–283. Universidad Externado de Colombia (2014) 17. Hernández, S.R., Fernández, C.C., Baptista, L.M.: Metodología De La Investigación, 5th edn. Mc Graw-Hill, Mexico (2012) 18. Torres, I.K.: Data collection methods for research (2019) 19. León, E.F.: Effects of the Implementation of International Accounting Standards in Public Companies of the Colombian Electric Sector, pp. 87–120. Criterio Libre, Bogotá (2018) 20. Llopis, R.M.: IFRS for SMEs: The Solution to the Problem for the Application of International Standards? Fondo Editorial PUCP, Lima (2013) 21. Luna Restrepo, J.M.: Colombia: Towards the Adoption and Application of IFRS and its Importance, pp. 26–43. Adversia – Universidad de Antioquia (2011) 22. Macias, H.: Introduction to Critical Accounting Research (ICC) in its Original Context, pp. 103–127. Contaduría Universidad de Antioquia; Medellin (2017) 23. Monterrey, M.J.: Between creative accounting and accounting crime. Legis Int. J. Account. Auditing 117–138 (2002) 24. Morales, F.: Know 3 Types of Research: Descriptive, Exploratory and Explanatory (2012) 25. Olave, J.C.: IFRS, a door to be competitive abroad. PORTAFOLIO (2013) 26. Parra, D.: Analysis of the implementation of IFRS in SMEs in the city of Cuenca: perception, causes and impact, pp. 6–44. Universidad Politécnica Salesiana Sede Cuenca (2016) 27. Pérez y Soto Domínguez, A., Flórez, K., Giraldo, F.: Regulación De La Salud En Colombia: Un problema de información secuestrada. Bogotá: Papel Político (2017)

Influence of Aqueous Phase of Hydrothermal Carbonization Feeding on Carbon Fixation by Microalgae

Mayra S. Andrade Guerrero1, Daysi N. Bayas Moposita1, Cristhian M. Velalcázar Rhea2, P. Cuji2, Danny F. Sinche Arias2, Carlos A. Méndez Durazno2, and Javier Martínez-Gómez2,3,4(B)

1 Escuela Superior Politécnica de Chimborazo ESPOCH, Facultad de Ciencias, Riobamba, Ecuador
2 Instituto de Investigación Geológico Energético IIGE, Quito EC17051, Ecuador
[email protected]
3 Facultad de Ingeniería y Ciencias Aplicadas, Universidad Internacional SEK, Quito 170302, Ecuador
4 Departamento de Teoría de la Señal y Comunicación (Área de Ingeniería Mecánica), Escuela Politécnica, Universidad de Alcalá, 28805 Alcalá de Henares, Madrid, Spain

Abstract. CO2 biofixation is one of the most promising alternatives for CO2 capture and storage. In this study, the ability to cultivate microalgae and the influence of using the aqueous phase (AP) from hydrothermal carbonization (HTC) of coffee husk on the biofixation of CO2 were investigated. The influence of nutrient addition on the growth rate of Chlorella sp. was evaluated through the response surface methodology. The results indicate that the optimum nutrient levels were 0.20 g L−1 of sodium acetate and 1.32% (v/v) of AP. The effect of CO2 concentration on growth and biofixation kinetics was determined using 0.04, 5, 10, 15, and 30% (v/v) CO2. The maximum CO2 biofixation (71.00 mg L−1 d−1) and the highest biomass concentration (0.40 g L−1) were obtained at 15% (v/v) CO2.

Keywords: Hydrothermal carbonization · Microalgae · Biofixation · Chlorella sp

1 Introduction

Climate change, driven by the continuous increase in anthropogenic emissions, has contributed to severe temperature changes, and CO2 is one of the main factors contributing to global warming. NOAA's Mauna Loa Atmospheric Baseline Observatory reported that in 2021 atmospheric carbon dioxide reached its highest level since measurements began 63 years ago [1]. CO2 mitigation strategies depend on the design of energy systems, natural resources and access to mitigation technologies. One alternative is the use of renewable materials to improve energy efficiency with negative CO2 emissions. In bioenergy


with carbon capture and storage (BECCS), CO2 is absorbed during the photosynthesis of biomass and subsequently stored through the conversion of biomass into biofuels [2]. CO2 biosequestration with promising agents such as microalgae is a sustainable alternative. The capability of microalgae to fix CO2 has been used to capture flue gas from power plants and reduce GHG emissions [3–5]. Microalgae have high photosynthetic efficiency, are capable of using CO2 as a carbon source in their metabolism, and have a high growth rate. Microalgae also have carbon concentration mechanisms (CCMs) that allow them to fix more CO2 than terrestrial plants. The tolerable CO2 concentration is specific to each species of microalgae; they can typically tolerate from 5 to 20%. Power plants generally produce up to 15% CO2, therefore microalgae are good candidates for CO2 biofixation [6]. The supply of CO2 to microalgae promotes their photosynthesis and improves their growth rate. The efficiency of biosequestration depends on extrinsic factors such as photobioreactor design, pH, CO2 supply, temperature and culture medium nutrients. Several studies have demonstrated the growth capacity of microalgae exposed to different concentrations of CO2 [7]. Effective biofixation of CO2 depends on the selection and separation of strains with the best CCMs. Native strains are potential candidates for CO2 biofixation; however, the culture conditions of native strains for optimal growth and maximum biofixation are unknown [8].

Mixotrophically cultivated microalgae have used simple carboxylic acids and amino acids as carbon and nitrogen sources. An alternative is the use of the aqueous phase (AP) from hydrothermal carbonization (HTC) treatments of lignocellulosic biomass. HTC can be carried out under subcritical conditions (~200 °C) with residence times from seconds to hours [9]. The aqueous phase is a seldom used residue containing high amounts of carbon, phosphorus, nitrogen and trace elements, and it may contain toxic phenolic derivatives [10]. Therefore, the AP obtained from the HTC process must be remediated before being discharged. Using AP as a nutrient source in algal culture could reduce biomass production costs and mitigate the AP itself [11]. In general, maximum microalgal growth is obtained in diluted AP solutions, due to the lower concentration of inhibitors [12]. Tarhan et al. [13] report that the microalgae Chlorella minutissima and Botryococcus braunii were grown in AP from HTC of olive pomace, with high growth rates at low dilution rates. Belete et al. [10] report that the growth rates of Chlorella sp. and Coleastrella sp. cultivated in HTC process water from activated sludge were similar to those in the synthetic growth medium BG 11. In another study, the growth rate of Chlorella vulgaris cultivated in AP from HTC of Nannochloropsis oculata was higher than in BG 11 medium [9].

The present study optimizes the beneficial use of the AP obtained from the HTC of coffee husk as a source of nutrients for the cultivation of Chlorella sp., aiming at higher biomass productivity and CO2 biofixation. First, the effect of AP and activator concentration on the productivity of Chlorella sp. was studied. Second, the growth performance and biofixation capability of Chlorella sp. in a medium supplied with elevated CO2 were evaluated.


2 Method

2.1 Materials

Wild-type Chlorella sp. obtained from the culture collection of algae at IIGE was used as the strain in this study. HTC experiments with coffee husks (Quito, Ecuador) were carried out in a 200 mL hydrothermal autoclave reactor (Techinstro). For each experiment, 10 g of dried coffee husk and 100 g of distilled water were used, i.e. a 1/10 biomass-to-water weight ratio. HTC was conducted at 180 °C for 1 h. Vacuum filtration was used to separate the solid and liquid products. The hydrochar was dried in an oven at 50 °C for 24 h and stored for other studies. The liquid phase was stored and protected from light in a refrigerator at 4 °C until its use in the present study.

2.2 Characterization Analysis

After the eleven-day cultivation period, the algae were subjected to total elemental analysis (PerkinElmer 2400 CHNS/O Organic Elemental Analyzer). The functional groups of microalgae cultivated in the different media were analyzed by Fourier transform infrared spectroscopy (FTIR) using an FT-IR spectrometer (JASCO FT/IR-4100). Each spectrum was recorded in the wavenumber range from 4000 cm−1 to 520 cm−1. For the analysis of the aqueous phase, pH and electrical conductivity were measured with multiparameter pH and conductivity meters (OAKTON/PC-2700). The aqueous phase was analyzed for chemical oxygen demand (COD) using the dichromate colorimetric method (VELP Scientifica). The biochemical oxygen demand (BOD) of the AP was analyzed using the respirometric method (HACH BOD Trak II digester). Lactic acid, acetic acid, cellobiose, glucose and xylose analyses were conducted with HPLC (Agilent Infinity 1260). Total nitrogen (TN) was determined by the Kjeldahl method (4500-N), and total phosphorus (TP) through a colorimetric method (Thermo Scientific Evolution 220). Lastly, trace metals were detected by atomic absorption spectrophotometry (PerkinElmer PinAAcle 900T) following the AOAC regulations.

2.3 Time Course Growth Studies

The cultures were carried out in 250 mL Erlenmeyer flasks with an initial strain inoculum of 10%. The cultures were mixed through air injection using diaphragm pumps; the air was previously filtered and drops removed. Two glass tubes passed through the covering cap of the Erlenmeyer flasks, one for sample collection and the other for continuous aeration. The cultures were illuminated with fluorescent lamps (Philips, TL 20W) providing an illuminance of 1000 lx and were maintained in a greenhouse at room temperature. Optical density (OD 680) was measured according to Griffiths et al. [14] using a spectrophotometer (Hach 6000). Cultures were diluted to an optical density of less than one, to fall within the linear range of measurement; the actual OD was calculated by multiplying the measured value by the dilution factor. Dry weight (DW) was measured by filtering 20 mL culture samples through a preweighed 0.45 μm filter (Millipore), rinsing with dH2O and drying at 80 °C overnight. Dry weight measurements were carried out in triplicate.
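As a minimal illustration of the optical density and dry weight bookkeeping described above, the sketch below shows how the dilution correction and the gravimetric dry weight could be computed. The function and variable names are hypothetical and are not taken from the paper.

```python
def actual_od(measured_od: float, dilution_factor: float) -> float:
    """OD680 of the undiluted culture: the reading of the diluted sample
    (kept below 1.0 to stay in the linear range) times the dilution factor."""
    return measured_od * dilution_factor


def dry_weight_g_per_l(filter_mass_before_g: float,
                       filter_mass_after_g: float,
                       sample_volume_ml: float = 20.0) -> float:
    """Dry weight from the mass gained by a pre-weighed 0.45 um filter
    after filtering, rinsing and drying a culture sample overnight."""
    return (filter_mass_after_g - filter_mass_before_g) / (sample_volume_ml / 1000.0)


# Example: a 1:5 diluted sample reading OD 0.42, and a filter that gained 7.4 mg
# from a 20 mL sample (illustrative numbers only).
print(actual_od(0.42, 5))                    # OD680 = 2.1
print(dry_weight_g_per_l(0.11200, 0.11940))  # 0.37 g/L
```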


2.4 Productivity Optimization

Response Surface Methodology (RSM) was implemented to optimize the biomass concentration of Chlorella sp. while studying the effect of adding nutrients in a batch culture. AP and sodium acetate were selected as experimental factors. A full factorial two-factor, two-level design with nine design points (nine combinations, with two replicates at the center point) was used. The factorial and Central Composite Designs (CCDs) are presented in Table 1. The experimental data were analyzed with the response surface regression procedure, which is based on the polynomial model in Eq. (4). The results summarize nine independent experiments repeated in duplicate.

Table 1. Factorial and CCD with the coded and real variables used for cultivation of Chlorella sp. (X1 – initial sodium acetate concentration, X2 – AP concentration)

# Assay | Run code | Treatment | Factor A | Factor B | X1 (g/L) | X2 (% v/v)
1 | RK1 | 1 | (−)1 | (−)1 | 0.30 | 0.90
2 | RK2 | a | (+)1 | (−)1 | 0.50 | 0.90
3 | RK3 | b | (−)1 | (+)1 | 0.30 | 2.60
4 | RK4 | ab | (+)1 | (+)1 | 0.50 | 2.60
5 | RK5 | (−)αa | (−)1.414 | 0 | 0.20 | 1.75
6 | RK6 | (+)αa | (+)1.414 | 0 | 0.60 | 1.75
7 | RK7 | Center | 0 | 0 | 0.40 | 1.75
8 | RK8 | (−)αb | 0 | (−)1.414 | 0.40 | 0.50
9 | RK9 | (+)αb | 0 | (+)1.414 | 0.40 | 3.00
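To make the regression step concrete, the sketch below builds the design matrix of the second-order model from the real-variable levels in Table 1 and fits it by ordinary least squares with NumPy. Because the measured productivities are not tabulated in the text, the response vector is synthesized here from the coefficients later reported in Eq. (4), so the fit simply recovers them; replacing `y_resp` with the measured productivities would reproduce the actual regression. The pairing of x with sodium acetate and y with AP follows the reading of Eq. (4) used in this section and is an assumption.

```python
import numpy as np

# Real-variable levels from Table 1: X1 = sodium acetate (g/L), X2 = AP (% v/v)
x1 = np.array([0.30, 0.50, 0.30, 0.50, 0.20, 0.60, 0.40, 0.40, 0.40])
x2 = np.array([0.90, 0.90, 2.60, 2.60, 1.75, 1.75, 1.75, 0.50, 3.00])

def design_matrix(x, y):
    # Columns: 1, x, y, x^2, x*y, y^2 (second-order response surface model)
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

X = design_matrix(x1, x2)

# Coefficients as reported in Eq. (4); used only to synthesize a response so the
# example is self-contained. Replace y_resp with the measured productivities.
b_eq4 = np.array([0.128228, -0.227258, 0.00399375, 0.105151, 0.0294118, -0.00374793])
y_resp = X @ b_eq4

# Ordinary least-squares fit of the quadratic model
b_fit, *_ = np.linalg.lstsq(X, y_resp, rcond=None)
print(np.round(b_fit, 6))  # recovers b_eq4 for this synthetic response
```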

2.5 Growth Kinetics and CO2 Biofixation

The microalgae biomass productivity (P), specific growth rate (μ) and CO2 biofixation rate (PCO2) were calculated in order to evaluate microalgal growth, according to the equations described by Kassim et al. [15]. After analyzing the results of the first experiment using the response surface methodology, the biofixation of CO2 was evaluated. Each Erlenmeyer flask contained 250 mL of medium with a supply of CO2 (0.04–30%). The culture medium consisted of 0.2 g L−1 sodium acetate and 1.32% (v/v) of AP. The closed photobioreactor was kept at 1000 lx.
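As a hedged sketch of these kinetic quantities, the snippet below uses the commonly applied forms of the equations cited above: μ from the logarithm of the biomass ratio, P from the linear biomass increase, and PCO2 from P scaled by the biomass carbon fraction and the CO2/C molar mass ratio (44/12). The exact expressions used by the authors follow Kassim et al. [15]; the function names, the illustrative call and the default carbon fraction are assumptions for illustration only.

```python
import math

M_CO2_OVER_M_C = 44.0 / 12.0  # molar mass ratio of CO2 to carbon

def specific_growth_rate(x1_g_per_l, x2_g_per_l, t1_d, t2_d):
    """mu (1/d) from biomass concentrations at times t1 and t2 (days)."""
    return math.log(x2_g_per_l / x1_g_per_l) / (t2_d - t1_d)

def biomass_productivity(x1_g_per_l, x2_g_per_l, t1_d, t2_d):
    """P (g/L/d): linear biomass increase over the time interval."""
    return (x2_g_per_l - x1_g_per_l) / (t2_d - t1_d)

def co2_biofixation(productivity_g_per_l_d, carbon_fraction=0.434):
    """PCO2 (mg/L/d): productivity times biomass carbon content times 44/12.
    The default carbon fraction (43.4%) is the MSR value reported in Table 4."""
    return productivity_g_per_l_d * carbon_fraction * M_CO2_OVER_M_C * 1000.0

# Illustrative call (not the study's data): growth from 0.04 to 0.40 g/L in 11 days
P = biomass_productivity(0.04, 0.40, 0, 11)
print(specific_growth_rate(0.04, 0.40, 0, 11))  # ~0.21 1/d
print(co2_biofixation(P))                       # ~52 mg CO2 per L per day
```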

3 Results and Discussion

3.1 Characterization of Hydrothermal Process Water

Hydrothermal carbonization of coffee husks was performed at 180 °C for one hour, and the resulting AP was used as a nutrient for algal cultivation. Mild temperature conditions and a short treatment time were applied to avoid the production of toxic


compounds that inhibit algae growth. To determine the AP components, its physicochemical properties were analyzed, as shown in Table 2. The pH of the AP was measured after each experiment and was found to be 4.7. The hydrothermal treatment conditions allow the structural components (polysaccharides and lignin) of the lignocellulosic material to be partially separated; the system is naturally acidified by the generation of organic acids, mainly acetic acid [16]. In general, the AP of HTC contains a high content of organic material and a relative content of nutrients (N, P, K) [17]. Chemical oxygen demand (COD) and biochemical oxygen demand (BOD) are both indicators of carbon bioavailability; the COD value was determined as 4553.30 ± 41.63 mg L−1 and the BOD value as 1300.00 ± 11.50 mg L−1. In our study, the lactic acid and xylose contents of the AP were analyzed by HPLC and determined as 66.22 ± 0.00 and 219.64 ± 0.01 ppm, respectively; acetic acid, glucose and cellobiose were not detected. The TP concentration of the AP was 15.20 ± 0.06 ppm. TN was not detected, due to poor deamination of amino acids in the organic material at the evaluated temperature, whereas the content of P, Ca, Mg and Na in the AP was low; most of the minerals accumulate in the hydrochar. The mineral content of the AP depends on the HTC conditions, the feedstock type and the content of inorganic matter in the feedstock [18].

Table 2. Physicochemical characteristics of AP from HTC of coffee husk (*n.d.: not detected)

Physicochemical characteristic | Value
pH | 4.70 ± 0.00
Conductivity (μS/cm) | 459.00 ± 0.00
COD (mg/L O2) | 4553.30 ± 41.63
BOD (mg/L O2) | 1300.00 ± 11.50
COD/BOD | 3.50 ± 0.06
BOD/COD | 0.28 ± 0.00
Acetic acid (ppm) | n.d.
Lactic acid (ppm) | 66.22 ± 0.00
Glucose* (ppm) | n.d.
Xylose (ppm) | 219.64 ± 0.00
Cellobiose* (ppm) | n.d.
TN* (mg/L) | n.d.
TP (mg/L) | 15.20 ± 0.06
Ca (ppm) | 5.92 ± 0.03
K (ppm) | 76.52 ± 0.67
Cu* (ppm) | n.d.
Fe (ppm) | n.d.
Mg (ppm) | 3.61 ± 0.04
Na (ppm) | 3.69 ± 8.53
Zn* (ppm) | n.d.

3.2 Dry Biomass

The biomass concentration increased in all of the proposed culture conditions for Chlorella sp. Figure 1 shows that, among runs RK1–RK9, the combinations with the AP concentration at the central point (as in run RK7) and a high sodium acetate concentration had a negative influence on the growth of Chlorella sp.; it is possible that the maximum additions of sodium acetate and AP in runs RK8 and RK9 led to cell death. Conversely, a low sodium acetate concentration with the AP concentration at the central point (run RK5) gave the best specific growth rate for Chlorella sp. The maximum specific growth rate of RK5 (μmax) was 1.330 ± 0.08 day−1 over 11 days of cultivation, as shown in Fig. 1. The maximum biomass productivity of run RK5 (0.09 g L−1 d−1) was higher than that of the synthetic medium NPK (0.10%). The most plausible reason for this may be the high COD and BOD values of the AP.

Fig. 1. A Time course of Chlorella sp. biomass concentration for runs RK1 to RK9. B Maximum specific growth rate (μmax)


Response surface and contour plots (Fig. 2) show the effect of sodium acetate (X1) and AP (X2) on cell productivity after eleven days of culture. The results presented above confirm that the best cell productivity values were found when the AP was at the central point and the acetate at its minimum concentration. The trials presented a similar growth pattern. The experiments were performed with a central-point experimental design, where the two nutrients yield different values of μmax. Increasing both AP and sodium acetate to their highest levels lowers the biomass productivity response. The p-value analysis shows that there is no significant interaction between the parameters X1 and X2 on biomass productivity; the coefficients of the proposed model are presented in Eq. (4):

z = 0.128228 − 0.227258x + 0.00399375y + 0.105151x² + 0.0294118xy − 0.00374793y²    (4)
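To connect Eq. (4) with the reported optimum (0.20 g L−1 sodium acetate and 1.32% v/v AP), the sketch below evaluates the fitted polynomial over a grid spanning the experimental ranges of Table 1 and reports the grid point with the highest predicted productivity. It assumes x is the sodium acetate concentration (g/L) and y the AP concentration (% v/v) in real units; the text does not state whether the coefficients are in coded or real variables, so the located optimum should be read as illustrative.

```python
import numpy as np

def z_eq4(x, y):
    """Second-order response surface of Eq. (4): predicted cell productivity."""
    return (0.128228 - 0.227258 * x + 0.00399375 * y
            + 0.105151 * x**2 + 0.0294118 * x * y - 0.00374793 * y**2)

# Grid over the experimental region of Table 1
acetate = np.linspace(0.20, 0.60, 81)   # g/L
ap = np.linspace(0.50, 3.00, 251)       # % v/v
X, Y = np.meshgrid(acetate, ap)
Z = z_eq4(X, Y)

i, j = np.unravel_index(np.argmax(Z), Z.shape)
print(f"max predicted productivity {Z[i, j]:.4f} at "
      f"acetate={X[i, j]:.2f} g/L, AP={Y[i, j]:.2f} %v/v")
```

Under this reading, the grid optimum lands at roughly 0.20 g L−1 acetate and about 1.3% AP, with a predicted productivity close to 0.10 g L−1 d−1, which is consistent with the optimum levels and the RK5 productivity reported above.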

The results indicate that nutrients of AP from HTC support the growth of Chlorella sp. According to Abreu et al. [19], one of the components that contribute to the cost of microalgae cultivation is the carbon source. In this way, microalgae could grow within a recycling process and reduce the production costs of biomass [9].

Fig. 2. Response surface of cellular productivity of Chlorella sp at 11 days of culture (X1 – sodium acetate; X2 – initial AP concentration)

3.3 Characterization of Algae

FTIR and CHNS Analysis. In this study, the algal samples were analyzed by FTIR to determine their biochemical composition. Identification was based on comparison of the bands of the recorded FTIR spectra with those of the reference literature [1]. FTIR spectra of the microalgae cultivated in AP and acetate, for runs RK1 to RK9 and RL1 to RLair, are shown in Fig. 3. Six explicit bands were identified within the wavenumber range from 4000 to 600 cm−1. The microalgae samples exhibited peaks at similar wavenumbers, but the peak areas differed for each culture condition [20]. The bands found in the samples are reported in Table 3.

Table 3. Characteristic bands identified in Fig. 3 (A) and (B)

Wavenumber range (cm−1) | Identification of the bands | References
3282–3374 (A), 3278–3440 (B) | ν(O-H) stretching; water, carbohydrates, proteins and lipids | [21–23]
2854–2927 (A)(B) | νas(CH2) stretching, C-H stretching; lipids, proteins, carbohydrates, alkanes | [21, 24–26]
1744–1746 (A), 1743 (B) | C=O stretching; lipids (esters and fatty acids) | [21, 24, 27]
1639–1643 (A), 1631–1546 (B) | N-H; proteins, amide I | [21]
1404–1456 (A), 1411–1454 (B) | -CH2; proteins, methyl lipid | [21, 26]
1038–1046 (A), 1037–1049 (B) | ν(C-O-C); polysaccharides | [24, 25, 28, 29]

It was found that all samples, RK1 to RK9 and RL1 to RLair, consisted of proteins, lipids and carbohydrates [30]. The run RK4 showed the highest peak areas at 3282–3374 cm−1, 1639–1643 cm−1, 1404–1456 cm−1, and 1038–1046 cm−1. On the other hand, the RK9 spectrum had the highest intensity at 1744–1746 cm−1, corresponding to triglycerides (lipids) and fatty acids [23]. From Fig. 3(B), the RL1 spectrum (5% CO2) presented a clear difference with respect to the samples RL2 to RLair in the peak areas of the bands 3278–3440 cm−1, 2854–2927 cm−1, 1631–1546 cm−1, 1411–1454 cm−1, and 1037–1049 cm−1. The RL2 sample (10% CO2), in turn, had the highest intensity at the 1743 cm−1 band identified for lipids [27]. The FTIR spectral results for the microalgae samples RK1 to RK9 and RL1 to RLair are similar to those obtained in other studies [1, 22–24].

Fig. 3. FTIR spectra A runs RK1–RK9, B biofixation assay


After the FTIR characterization, elemental analysis of the dry biomass of Chlorella sp. from the response surface methodology (MSR) assays recorded 43.40% C, 6.24% H and 2.43% N, with no detection of sulfur. For the condition with the highest biofixation (15% CO2), the composition was 41.8 ± 1.36% C, 6.00 ± 0.72% H and 3.7 ± 0.04% N, also without sulfur detection (see Table 4). These results show a greater proportion of carbon and hydrogen, which would represent the carbohydrates and lipids of the microalgal biomass, and a lower concentration of nitrogen compounds or proteins.

Table 4. Elemental composition of Chlorella sp. Element (% wt)

Microalgae | C | H | N | S | Reference
Chlorella sp. | 44.00 | 7.09 | 8.53 | 0.84 | [31]
Chlorella sp. (MSR) | 43.40 ± 0.00 | 6.24 ± 0.00 | 2.43 ± 0.00 | 0.0 ± 0.00 | This study
Chlorella sp. (5% CO2) | 44.10 ± 2.04 | 6.30 ± 1.20 | 2.50 ± 0.19 | 0.0 ± 0.00 | This study
Chlorella sp. (10% CO2) | 44.90 ± 3.53 | 6.70 ± 1.38 | 3.00 ± 0.55 | 0.0 ± 0.00 | This study
Chlorella sp. (15% CO2) | 41.80 ± 1.36 | 6.00 ± 0.72 | 3.70 ± 0.04 | 0.0 ± 0.00 | This study
Chlorella sp. (30% CO2) | 36.80 ± 1.80 | 5.50 ± 0.81 | 4.10 ± 0.29 | 0.0 ± 0.00 | This study

Microalgal Biomass and CO2 Biofixation. The effect of the CO2 concentration (0.04, 5, 10, 15 and 30% v/v) on the growth of the microalgae, cultivated in a medium with AP (1.37% v/v) from HTC of coffee husks and sodium acetate (0.20 g L−1), is shown in Fig. 4A. The cell concentrations with CO2 at 0.04, 5, 10, 15 and 30% were 0.19, 0.18, 0.11, 0.40 and 0.36 g L−1, respectively. The maximum biomass concentration, close to 0.40 g L−1, was produced after eleven days of incubation using 15% (v/v) CO2. The maximum growth rate was 0.98 d−1 with 10% (v/v) CO2, as shown in Fig. 4B. This is in line with Clément-Larosière et al. [32], who reported a specific growth rate of 1 d−1 for a Chlorella strain at high CO2 concentrations in the feed gas. Recent studies have shown that high concentrations of CO2 stimulate the growth of microalgae, although the mechanism by which this occurs is still unknown [33, 34]. The gradual acclimatization of microalgae to extreme levels of CO2 is an alternative for improving CO2 tolerance [35].


Fig. 4. Effect of various CO2 concentrations on Chlorella sp. A Growth. B Growth rate. C CO2 biofixation

The biofixation of CO2 depends on factors such as the species of microalgae and the concentration of CO2 in the culture [36]. Figure 4C shows that biofixation did not increase linearly with the CO2 concentration supplied to the medium. In this study, the maximum CO2 fixation rate, obtained under 15% (v/v) CO2, was 71.00 mg L−1 d−1, as shown in Fig. 4C. Microalgae can tolerate high concentrations of CO2, and under optimal conditions a maximization of the biomass concentration is observed. The decline in the CO2 biofixation rate at concentrations of 5 and 10% (v/v) may be associated with deactivation of the carbonic anhydrase (CA) and Rubisco enzymes, with a direct effect on microalgal growth and on the CCMs [37, 38]; it has also been reported that CO2 biofixation is not strictly related to cell growth [39]. The 30% CO2 supply was higher than the saturation level and was probably released into the atmosphere before being consumed.

4 Conclusions

This study demonstrated that the growth performance and biofixation capability of Chlorella sp. are influenced by the media composition and the CO2 concentration during cultivation. The AP from the HTC process of coffee husk was suitable for Chlorella sp. growth, and low levels of AP and sodium acetate in the culture medium improved the biomass productivity of Chlorella sp. The results show that Chlorella sp. has a growth profile that depends on the CO2 supplied to the medium, according to its tolerance, and that it tolerates high CO2 concentrations. Such a platform could be applied in a circular economy approach for CO2 mitigation and a sustainable bioproducts industry.


References 1. Choi, Y.Y., Patel, A.K., Hong, M.E., Chang, W.S., Sim, S.J.: Microalgae bioenergy with carbon capture and storage (BECCS): an emerging sustainable bioprocess for reduced CO2 emission and biofuel production. Bioresour. Technol. Reports 7, 100270 (2019). https://doi. org/10.1016/j.biteb.2019.100270 2. Fridahl, M., Lehtveer, M.: Bioenergy with carbon capture and storage (BECCS): global potential, investment preferences, and deployment barriers. Energy Res. Soc. Sci. 42, 155–165 (2018). https://doi.org/10.1016/j.erss.2018.03.019 3. Pires, J.C.M., Alvim-Ferraz, M.C.M., Martins, F.G., Simões, M.: Carbon dioxide capture from flue gases using microalgae: engineering aspects and biorefinery concept. Renew. Sustain. Energy Rev. 16(5), 3043–3053 (2012). https://doi.org/10.1016/j.rser.2012.02.055 4. Van Den Hende, S., Vervaeren, H., Boon, N.: Flue gas compounds and microalgae: (Bio)chemical interactions leading to biotechnological opportunities. Biotechnol. Adv. 30(6), 1405–1424 (2012). https://doi.org/10.1016/j.biotechadv.2012.02.015 5. Martunus, Helwani, Z., Wiheeb, A.D., Kim, J., Othman, M.R.: In situ carbon dioxide capture and fixation from a hot flue gas. Int. J. Greenh. Gas Control 6, 179–188 (2012). https://doi. org/10.1016/j.ijggc.2011.11.012 6. Kassim, M.A., Meng, T.K.: Carbon dioxide (CO2) biofixation by microalgae and its potential for biorefinery and biofuel production. Sci. Total Environ. 584–585, 1121–1129 (2017). https://doi.org/10.1016/j.scitotenv.2017.01.172 7. Yahya, L., Harun, R., Abdullah, L.C.: Screening of native microalgae species for carbon fixation at the vicinity of Malaysian coal-fired power plant. Sci. Rep. 10(1), 1–14 (2020). https://doi.org/10.1038/s41598-020-79316-9 8. Martínez, L., Otero, M., Morán, A., García, A.I.: Selection of native freshwater microalgae and cyanobacteria for CO2 biofixation. Environ. Technol. (United Kingdom) 34(24), 3137–3143 (2013). https://doi.org/10.1080/09593330.2013.808238 9. Du, Z., et al.: Cultivation of a microalga Chlorella vulgaris using recycled aqueous phase nutrients from hydrothermal carbonization process. Bioresour. Technol. 126, 354–357 (2012). https://doi.org/10.1016/j.biortech.2012.09.062 10. Belete, Y.Z., et al.: Characterization and utilization of hydrothermal carbonization aqueous phase as nutrient source for microalgal growth. Bioresour. Technol. 290, 121758 (2019). https://doi.org/10.1016/j.biortech.2019.121758 11. Tsarpali, M., Arora, N., Kuhn, J.N., Philippidis, G.P.: Beneficial use of the aqueous phase generated during hydrothermal carbonization of algae as nutrient source for algae cultivation. Algal Res. 60, 102485 (2021). https://doi.org/10.1016/j.algal.2021.102485 12. Levine, R.B., Sambolin Sierra, C.O., Hockstad, R., Obeid, W.: The use of hydrothermal carbonization to recycle nutrients in algal biofuel production. Environ. Prog. Sustain. Energy 32(4), 962–975 (2014). https://doi.org/10.1002/ep.11812 13. Tarhan, S.Z., Koçer, A.T., Özçimen, D., Gökalp, ˙I: Cultivation of green microalgae by recovering aqueous nutrients in hydrothermal carbonization process water of biomass wastes. J. Water Process Eng. 40, 101783 (2021). https://doi.org/10.1016/j.jwpe.2020.101783 14. Griffiths, M.J., Garcin, C., van Hille, R.P., Harrison, S.T.L.: Interference by pigment in the estimation of microalgal biomass concentration by optical density. J. Microbiol. Methods 85(2), 119–123 (2011). https://doi.org/10.1016/j.mimet.2011.02.005 15. 
Kassim, M.A., Meng, T.K.: Carbon dioxide (CO2) biofixation by microalgae and its potential for biorefinery and biofuel production. Sci. Total Environ. 584–585, 1121–1129 (2017). https://doi.org/10.1016/j.scitotenv.2017.01.172
16. da Silva, C.M.S., Vital, B.R., de Ávila Rodrigues, F., de Almeida, Ê.W., de Carneiro, A.C.O., Cândido, W.L.: Hydrothermal and organic-chemical treatments of eucalyptus biomass for industrial purposes. Bioresour. Technol. 289, 121731 (2019). https://doi.org/10.1016/j.biortech.2019.121731
17. Langone, M., Basso, D.: Process waters from hydrothermal carbonization of sludge: characteristics and possible valorization pathways. Int. J. Environ. Res. Public Health 17(18), 1–31 (2020). https://doi.org/10.3390/ijerph17186618
18. Ekpo, U., Ross, A.B., Camargo-Valero, M.A., Williams, P.T.: A comparison of product yields and inorganic content in process streams following thermal hydrolysis and hydrothermal processing of microalgae, manure and digestate. Bioresour. Technol. 200, 951–960 (2016). https://doi.org/10.1016/j.biortech.2015.11.018
19. Abreu, A.P., Fernandes, B., Vicente, A.A., Teixeira, J., Dragone, G.: Mixotrophic cultivation of Chlorella vulgaris using industrial dairy waste as organic carbon source. Bioresour. Technol. 118, 61–66 (2012). https://doi.org/10.1016/j.biortech.2012.05.055
20. Bei, X., et al.: Semi-continuous cultivation of Chlorella vulgaris using chicken compost as nutrients source: growth optimization study and fatty acid composition analysis. Energy Convers. Manag. 164, 363–373 (2018). https://doi.org/10.1016/j.enconman.2018.03.020
21. Kim, M.-K., Jeune, K.-H.: Use of FT-IR to identify enhanced biomass production (2009)
22. de Souza, M.P., et al.: Screening of fungal strains with potentiality to hydrolyze microalgal biomass by Fourier transform infrared spectroscopy (FTIR). Acta Sci. Technol. 41(1), 39693 (2019). https://doi.org/10.4025/actascitechnol.v41i1.39693
23. Muhammad, A., Yuxi, L., Marwa, M.E.-D., Chunjiang, Z., Xiangkai, L., El-Sayed, S.: A complete characterization of microalgal biomass through FTIR/TGA/CHNS analysis: an approach for biofuel generation and nutrients removal. Renew. Energy 163, 1973–1982 (2020). https://doi.org/10.1016/j.renene.2020.10.066
24. Duygu, D.Y., et al.: Fourier transform infrared (FTIR) spectroscopy for identification of Chlorella vulgaris Beijerinck 1890 and Scenedesmus obliquus (Turpin) Kützing 1833. Afr. J. Biotechnol. 11(16), 3817–3824 (2012). https://doi.org/10.5897/AJB11.1863
25. Song, H., et al.: Extraction optimization, purification, antioxidant activity, and preliminary structural characterization of crude polysaccharide from an arctic Chlorella sp. Polymers 10(3), 292 (2018). https://doi.org/10.3390/polym10030292
26. Narayanan, M., et al.: Phycoremediation potential of Chlorella sp. on the polluted Thirumanimutharu river water. Chemosphere 277, 130246 (2021). https://doi.org/10.1016/j.chemosphere.2021.130246
27. Rizwan, M., Mujtaba, G., Memon, S.A., Lee, K., Rashid, N.: Exploring the potential of microalgae for new biotechnology applications and beyond: a review. Renew. Sustain. Energy Rev. 92, 394–404 (2018). https://doi.org/10.1016/j.rser.2018.04.034
28. Saka, C., Kaya, M., Bekiroğullari, M.: Chlorella vulgaris microalgae strain modified with zinc chloride as a new support material for hydrogen production from NaBH4 methanolysis using CuB, NiB, and FeB metal catalysts. Int. J. Hydrogen Energy 45(3), 1959–1968 (2020). https://doi.org/10.1016/j.ijhydene.2019.11.106
29. Hazeem, L.J., et al.: Investigation of the toxic effects of different polystyrene micro- and nanoplastics on microalgae Chlorella vulgaris by analysis of cell viability, pigment content, oxidative stress and ultrastructural changes. Mar. Pollut. Bull. 156, 111278 (2020). https://doi.org/10.1016/j.marpolbul.2020.111278
30. Kose, A., Oncel, S.S.: Properties of microalgal enzymatic protein hydrolysates: biochemical composition, protein distribution and FTIR characteristics. Biotechnol. Rep. 6, 137–143 (2015). https://doi.org/10.1016/j.btre.2015.02.005
31. Rendón-Castrillón, L., Ramírez-Carmona, M., Ocampo-López, C., Giraldo-Aristizabal, R.: Evaluation of the operational conditions in the production and morphology of Chlorella sp. Braz. J. Biol. 81(1), 202–209 (2021). https://doi.org/10.1590/1519-6984.228874


32. Clément-Larosière, B., Lopes, F., Gonçalves, A., Taidi, B., Benedetti, M., Minier, M., Pareau, D.: Carbon dioxide biofixation by Chlorella vulgaris at different CO2 concentrations and light intensities. Eng. Life Sci. 14, 509–519 (2014). https://doi.org/10.1002/elsc.201200212 33. Lu, S., Wang, J., Niu, Y., Yang, J., Zhou, J., Yuan, Y.: Metabolic profiling reveals growth related FAME productivity and quality of Chlorella sorokiniana with different inoculum sizes. Biotechnol. Bioeng. 109(7), 1651–1662 (2012). https://doi.org/10.1002/bit.24447 34. Molazadeh, M., Danesh, S., Ahmadzadeh, H., Pourianfar, H.R.: Influence of CO2 concentration and N:P ratio on Chlorella vulgaris-assisted nutrient bioremediation, CO2 biofixation and biomass production in a lagoon treatment plant. J. Taiwan Inst. Chem. Eng. 96, 114–120 (2019). https://doi.org/10.1016/j.jtice.2019.01.005 35. Rahaman, M.S.A., Cheng, L.H., Xu, X.H., Zhang, L., Chen, H.L.: A review of carbon dioxide capture and utilization by membrane integrated microalgal cultivation processes. Renew. Sustain. Energy Rev. 15(8), 4002–4012 (2011). https://doi.org/10.1016/j.rser.2011.07.031 36. Francisco, É.C., Neves, D.B., Jacob-Lopes, E., Franco, T.T.: Microalgae as feedstock for biodiesel production: carbon dioxide sequestration, lipid production and biofuel quality. J. Chem. Technol. Biotechnol. 85(3), 395–403 (2010). https://doi.org/10.1002/jctb.2338 37. Tebbani, S., Filali, R., Lopes, F., Dumur, D., Pareau, D.: Microalgae. In: Tebbani, S., Lopes, F., Filali, R., Dumur, D., Pareau, D (eds.) CO2 Biofixation by Microalgae Model. Estim. Control, pp. 1–22. John Wiley & Sons, Inc., Hoboken, NJ, USA (2014) 38. Yadav, G., Sen, R.: Microalgal green refinery concept for biosequestration of carbon-dioxide vis-à-vis wastewater remediation and bioenergy production: Recent technological advances in climate research. J. CO2 Utilization 17, 188–206 (2017). https://doi.org/10.1016/j.jcou. 2016.12.006 39. Mousavi, S., Najafpour, G.D., Mohammadi, M.: CO2 bio-fixation and biofuel production in an airlift photobioreactor by an isolated strain of microalgae Coelastrum sp. SM under high CO2 concentrations. Environ. Sci. Pollut. Res. 25(30), 30139–30150 (2018). https://doi.org/ 10.1007/s11356-018-3037-4

Assessment of the Thermal Behavior in Social Housing in Hot Humid Climate in Ecuador

E. Catalina Vallejo-Coral1(B), Francis Vásquez-Aza1, Luis Godoy-Vaca1, Marco Orozco Salcedo1, and Javier Martínez-Gómez1,2,3

1 Instituto de Investigación Geológico y Energético, Quito 170518, Ecuador
[email protected]
2 Departamento de Teoría de la Señal y Comunicación (Área de Ingeniería Mecánica), Escuela Politécnica, Universidad de Alcalá, 28805 Alcalá de Henares, Madrid, Spain
3 Facultad de Ingeniería y Ciencias Aplicadas, Universidad Internacional SEK, Quito 170302, Ecuador

Abstract. Climate change has created the need to design efficient and sustainable housing with acceptable thermal environments for occupants. Consequently, policies are required that guarantee compliance with these standards, especially for the most vulnerable populations living in hot climates. In this sense, this document aims to evaluate whether compliance with NEC-HS-EE is sufficient to provide acceptable thermal environments inside naturally ventilated dwellings in very hot humid climates. For this purpose, an experimental measurement of the internal conditions, the surface temperatures of the envelope and the meteorological variables was performed in a social housing unit located in Guayaquil, Ecuador, which complies with the regulation. Results indicate that the roof has the largest range between external and internal surface temperatures compared to the other envelope elements, 78 and 85 °C respectively. The internal air temperature varies between 25 °C and 36 °C and is higher than the outside temperature 99% of the time. In addition, the quality of the thermal environment was evaluated using the adaptive models of the ANSI/ASHRAE 55 and EN 15251 standards. The dwelling had acceptable thermal conditions according to EN 15251 for category III buildings; however, 13.9% of occupancy hours exceeded the limit defined by ASHRAE 55. It was also shown that the envelope does not adapt to the climatic conditions despite complying with local regulations, therefore it is necessary to define comfort criteria adapted to local conditions.

Keywords: Social housing · Thermal behavior · Thermal comfort

1 Introduction

Worldwide, global warming has caused an increase in the internal temperature of buildings, causing thermal discomfort for occupants and, thus, an increase in the demand for cooling energy [1]. Nearly 4 billion people in India, South East Asia and Sub-Saharan Africa require cooling systems to reduce heat stress [2], while the electricity used by air conditioning systems and fans accounts for about 20% of the total electricity used by


buildings, or 10% of the total electricity consumed worldwide [3]. This reveals a thermal deficiency of the building envelope in maintaining spaces under thermally acceptable conditions, which is crucial to prevent health risks.

In recent years, several regional studies have evaluated comfort in different types of dwellings and climatic zones. Valderrama et al. [4] reported that in 67.5% of the cases studied, buildings presented inadequate conditions for occupants. In Colombia, Giraldo et al. [5] evaluated the thermal comfort of social housing (VIS); using PMV-PPD and adaptive models, they determined that thermal conditions outside the acceptable range were due to the high thermal transmittance of the walls and insufficient insulation of opaque vertical elements. In Ecuador, Castillo et al. [6] surveyed the inhabitants of the "Mucho Lote II Alameda del Río" program in order to evaluate the thermal perception in VIS in Guayaquil; 38% of the occupants rated the outside temperature and the thermal sensation produced by the main facade of their dwelling as "bad". Also considering the climatic conditions of Ecuador, Gallardo et al. [7] analyzed, through energy simulation, the thermal comfort in VIS using the adaptive comfort model of the ANSI/ASHRAE 55 standard [8]; the results showed that using suitable materials in Guayaquil can reduce discomfort hours by up to 50% with respect to a reference case defined with the materials typically used. Espinosa et al. [9] studied, through thermal simulation, the comfort in a house built with a concrete roof and block walls in three climates of Ecuador: thermal discomfort due to high temperatures reaches 88% in Guayaquil, while in Quito it decreases to 47%. Delgado et al. [10] evaluated, using energy simulation, the thermal comfort in VIS of the "Casa Para Todos" project in all climate zones of the country and under climate change (CC) scenarios; in the climate zone where Guayaquil is located, the upper floor of the dwelling has acceptable thermal conditions only 30% of the time, a percentage that decreases to 18% under all climate change scenarios. In Chile, Espinosa and Cortés [11] evaluated the impact of housing thermal regulations on the hygrothermal comfort of VIS occupants through surveys of 320 houses built in Santiago; during winter, the occupants of the first and second floors perceived cold and warm sensations, respectively, while in summer they perceived them as cool and warm. In Brazil, Mazzone [2] determined that local policies for VIS are not adequate to counteract thermal discomfort in the Brazilian Amazon, a problem that causes a significant increase in internal temperature and electricity consumption. Similarly, Medina et al. [12], through a review of the state of the art on thermal comfort for buildings in Colombia, identified that the lack of local regulation and control of existing buildings leads to inadequate building design and, consequently, inefficient thermal environments.

All the authors of the reviewed studies establish the urgent need for regional policies and standards related to thermal comfort in buildings. In this regard, Ecuador has developed several policy instruments aimed at integrating CC management criteria in different sectors. Since 2018, the Ecuadorian Building Construction Standard has been in force; within its Habitability and Health axis it includes the chapter "Energy Efficiency in Residential Buildings" (NEC-HS-EE) [13].
This chapter establishes minimum energy efficiency criteria and requirements for the design and construction of new buildings and residential remodeling in Ecuador’s climate zones. The present study proposes to experimentally evaluate the thermal behavior of the envelope


and the quality of the indoor thermal environment in a naturally ventilated VIS, located in Guayaquil – Ecuador, which complies with the NEC-HS-EE [13]. The objective is to assess whether compliance is sufficient to provide acceptable thermal environments in naturally ventilated dwellings in very hot humid climates. To achieve this objective, a dwelling was monitored under real conditions of use, considering the protocol established by Escandón, et al. [14]. The collected data corresponds to air temperature and relative humidity measurements using internal sensors and an on-site meteorological station. In addition, surface temperature sensors were installed on envelope elements to determine their dynamic behavior. Finally, the quality of the thermal environment was evaluated using the adaptive model of international standards ANSI/ASHRAE 55 [8] and EN 15251 [15]. This model considers the ability of the occupants to freely operate the envelope elements, as well as to decide the type of clothing to wear in order to achieve comfort conditions [16].

2 Methodology

2.1 Case Study

The dwelling selected for this study is part of the urbanization project "Socio Vivienda 1", built by MIDUVI in 2013. The household is occupied by three members (two men and one woman) between 18 and 50 years old. The project is located in Guayaquil, Ecuador, where more than 2,200 mixed-use and residential housing units were built, with a total of 22,330 m2 of constructed area [17]. All dwellings were built with the same architectural design. In recent years, the occupants added two bedrooms and a laundry area, as shown in Fig. 1; the expanded area has an uninsulated metal roof, a different material from the original roof. Godoy et al. [18] determined the overall heat transfer coefficients (U-value) of the envelope elements (see Table 1). The dwelling is built with concrete block walls with white plaster, single-glazed transparent windows, a metal exterior door, and a concrete floor with ceramic tile covering; it does not have air conditioning equipment. Considering the requirements established in NEC-HS-EE [13] for the envelope of non-heated living spaces in climate zone 1, the U-values of the roof assembly of the extended zone and of the windows are above the maximum allowed values (Umax = 3.5 W/m2 K and Umax = 3.84 W/m2 K, respectively). However, the overall transmission loss coefficient (G) of the original dwelling (Gcal = 5.28 W/m3 K) is lower than the G coefficient of the base building required by the standard (Gbase = 5.69 W/m3 K), which means that the dwelling fulfills the standard.


Table 1. Features of envelope elements [18]

Element | Material | Thickness (mm) | Area (m2) | U-value (W/m2 K)
Walls | Concrete block + 2 layers of white filling | 70 + 10 | 93.77 | 3.694
Windows | Single glazing | 3 | 4.32 | 5.894 (SHGC = 0.861)
Original roof | Metal + internal insulation | 3 + 8 | 30.04 | 2.96
Roof (extended area) | Metal | 3 | 39.79 | 7.25
Floor | Ceramic/concrete | 74 | 69.18 | 2.22
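As a rough illustration of how an overall transmission loss coefficient of the kind compared against the NEC-HS-EE base value can be assembled from Table 1, the sketch below sums U·A over the envelope elements and divides by the heated volume. This is only the common volumetric form G ≈ Σ(U_i·A_i)/V; the exact formula used by the standard, any ventilation term, and the dwelling volume are not given in the text, so the volume below is a placeholder and the result is not the reported Gcal.

```python
# U-values (W/m2.K) and areas (m2) taken from Table 1
elements = {
    "walls":              (3.694, 93.77),
    "windows":            (5.894, 4.32),
    "original roof":      (2.96, 30.04),
    "roof extended area": (7.25, 39.79),
    "floor":              (2.22, 69.18),
}

heated_volume_m3 = 170.0  # placeholder volume, not stated in the paper

ua_total = sum(u * a for u, a in elements.values())  # W/K
g_approx = ua_total / heated_volume_m3               # W/(m3.K)

print(f"Sum of U*A = {ua_total:.1f} W/K")
print(f"Approximate G = {g_approx:.2f} W/m3K (illustrative only)")
```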

Fig. 1. Architectural drawing and location of sensors

2.2 Climate and Study Period

According to the Köppen-Geiger climate classification, Guayaquil belongs to the Aw group (tropical savannah climate) [19]. Historical monthly meteorological data (1992–2015) [20] show two well-defined seasons: wet (January–April) and dry (May–December). The highest mean monthly air temperature is 27.6 °C in April and the lowest is 25 °C in August (see Fig. 2). This study analyzes data collected from February 11 to June 17, 2019 (127 days), so that long-term representative information is available for the most critical season of the year (wet and hot), as established by UNE-EN 15251 [15]. In order to characterize the weather conditions, a VAISALA weather station was installed and collected information during 2019; air temperature, relative humidity, wind speed, precipitation, barometric pressure and global solar radiation were measured. Finally, dwelling occupancy was as follows: Monday through Saturday, 100% occupied from 0:00 to 7:00 and from 19:00 to 0:00, 33% from 7:00 to 11:00, and no occupancy from 11:00 to 19:00; on Sundays the dwelling is fully occupied (100%).


Fig. 2. Precipitation and mean monthly temperature (1992–2015) of station M1096 – INAMHI in Guayaquil.

2.3 Thermal Behavior of the Envelope and Indoor Air

A monitoring system was installed in the dwelling to measure the surface temperatures of walls, windows and roof, as well as the relative humidity and indoor air temperature. Type K Omega-SA1 thermocouples were installed on internal and external surfaces following ASTM C1046:2013 guidelines [21] (see Fig. 1 and Fig. 3). A Waspmote Plug & Sense BME280 thermo-hygrometer located in the living room (T/HR_air) was used to measure the internal air temperature, see Fig. 3d. The data were recorded every minute using a National Instruments CompactRIO 9040 data acquisition system, in accordance with ISO 9869-1 [22]. Finally, the behavior and thermal inertia of the envelope were analyzed according to the methodology defined in [23].

Fig. 3. Temperature sensors a) window, b) wall, c) ceiling and d) internal air.


2.4 Thermal Environment Evaluation

The thermal comfort was analyzed with the adaptive models of ANSI/ASHRAE 55 [8] and EN 15251 [15]. These standards set comfort limits for the internal operating temperature. Equation (1) corresponds to the upper limit with 80% acceptability as defined by ASHRAE 55 [8], and Eq. (2) to the upper limit for existing buildings (Category III) according to EN 15251 [15]. Trm is the exponentially weighted running mean outdoor temperature for (1) and the running mean outdoor temperature for (2); it is calculated with (3), where Ted is the mean daily outdoor temperature and (n−1) refers to the day before the day in question. The constant α "controls the speed at which the running mean responds to changes in outdoor temperature" [8]; ASHRAE 55 suggests a value of 0.9 for the humid tropics, while EN 15251 suggests 0.8.

Tmax_AS (80% acceptability) = 0.31 · Trm (°C) + 21.3 °C    (1)

Tmax_EN (Category III) = 0.33 · Trm (°C) + 22.8 °C    (2)

Trm = (1 − α) · Ted(n−1) + α · Trm−1    (3)

The mean radiant temperature was not measured during the monitoring period. Therefore, internal air temperature is used in place of operating temperature. This estimate is also used in several studies [24, 24] when mean radiant temperature data are not available. However, due to the high solar exposure of the dwelling, this approach may underestimate the effect of mean radiant temperature on occupant heat stress [26].
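The following sketch shows how Eqs. (1)–(3) could be applied to a daily series of mean outdoor temperatures to obtain the running mean and the two upper comfort limits. The function names, the example temperatures and the seed value used for the first day's running mean are assumptions for illustration; they are not taken from the paper.

```python
from typing import List, Tuple

def running_mean(t_ed_daily: List[float], alpha: float) -> List[float]:
    """Eq. (3): Trm(n) = (1 - alpha) * Ted(n-1) + alpha * Trm(n-1).
    The first day's running mean is seeded with the first daily mean."""
    trm = [t_ed_daily[0]]
    for n in range(1, len(t_ed_daily)):
        trm.append((1 - alpha) * t_ed_daily[n - 1] + alpha * trm[n - 1])
    return trm

def upper_limits(trm: float) -> Tuple[float, float]:
    """Eq. (1) and Eq. (2): ASHRAE 55 80% limit and EN 15251 Category III limit."""
    t_max_ashrae = 0.31 * trm + 21.3
    t_max_en = 0.33 * trm + 22.8
    return t_max_ashrae, t_max_en

# Example with a short, illustrative series of daily mean outdoor temperatures (deg C)
t_ed = [27.2, 27.5, 27.8, 27.4, 27.0]
for trm in running_mean(t_ed, alpha=0.9):   # ASHRAE 55 suggests 0.9 for the humid tropics
    lim_ashrae, lim_en = upper_limits(trm)
    print(f"Trm={trm:.2f} -> upper limits ASHRAE/EN = {lim_ashrae:.1f} / {lim_en:.1f} degC")
```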

3 Results

3.1 Thermal Behavior of the Envelope and Indoor Air

To analyze the thermal behavior of the envelope, fifteen thermocouples were installed on the roof, walls and windows. Each measurement point has two sensors (exterior and interior), except for the roof of the extended area (Ttin_3), where only the interior surface was measured. The highest average temperatures were recorded on the external surfaces of the roof; the maximum and average values were 84.3 °C and 32.2 °C at Ttext_1. Wall surface temperatures show an average range of variation between 27.9 °C and 29.7 °C; Tpint_1 has the highest average value (29.9 °C) and Tpext_2 the maximum value of 62.0 °C. The average surface temperature of the windows is around 28.5 °C, with the maximum value (53.5 °C) recorded at Tvext_1. Box plots with the range of data for each variable are shown in Fig. 4. Troof_out1 and Troof_out2 have the largest range of variation, between 21 °C and 80 °C. Internal roof temperatures are lower, with a range of variation between 23 °C and 60 °C, and Troof_in3 behaves similarly to the exterior surfaces. The ambient temperature has a smaller range of variation than the external surfaces, similar to that of the internal wall surfaces. The wider ranges on surfaces such as the roof and external walls are due to the influence of additional external variables such as global solar radiation and wind speed. Finally, the maximum values vary considerably among the envelope elements; however, the median values are around 26 °C, indicating that half of the time, the roughly twelve hours without solar radiation, the envelope surface temperature does not exceed 26 °C.


Fig. 4. Surface temperature variation ranges.

Fig. 5. Roof surface temperatures, from 2019/02/26 to 2019/03/02.

Fig. 6. Wall surface temperatures, from 2019/02/26 to 2019/03/02.

The transient behavior of the surface temperatures is shown in Fig. 5 and Fig. 6. These figures show the surface temperatures of the extended-zone ceiling (Troof_in1 and Troof_out1) and the north-facing wall (Twall_in1 and Twall_out1). External surface temperatures are higher than internal temperatures around midday, and lower at night (18:00–06:00).


Fig. 7. a) Thermal inertia of the dwelling, b) Indoor and outdoor air temperature.

There is no delay in the occurrence of the peak values due to the high thermal conductivity of the elements. In addition, there is a maximum difference between peak values of 16 °C for the roof and 5 °C for the wall. This behavior is due to the greater heat storage capacity of the concrete block compared to the metal roofing material, which decreases the amount of heat transferred to the interior. On the other hand, Fig. 7a shows the temperature difference over a 24-h period, which corresponds to the thermal curve of the dwelling as a function of its inertia [23]. During the hours with solar radiation (06:00–18:00), the internal temperature does not remain constant and is always higher than the external temperature, demonstrating an inefficient thermal behavior of the envelope [23]. In addition, the temperature differences during the night (18:00–06:00) do not vary considerably with respect to the day, even though the ambient temperature is lower, as shown in Fig. 7. This is observed at night, when the interior space heats up due to the thermal inertia of the walls, internal loads, appliances and lighting, among others, which causes the fans to be turned on.

3.2 Thermal Environment Evaluation

Figure 8 shows the hourly fluctuations of the operating temperature inside the dwelling during the studied period. It reaches a maximum value of 35 °C and exceeds the maximum acceptable limits of the two standards, mainly during the first four months; the limits are between 29.5 °C and 31.6 °C for ASHRAE 55 and EN 15251, respectively. Furthermore, the operating temperature is never below the lower limit.



In order to identify the time period in which the operating temperature exceeds the limit, the cumulative frequency was calculated according to the occupancy schedules described in Sect. 2.2. Figure 9 shows that in 80% of the hours the operating temperature reaches its highest values after midday, with a variation of 6 °C and a minimum temperature of 29 °C, whereas during the morning and evening the variation is only 3 °C, with a maximum temperature of 30.5 °C. This shows that the critical thermal conditions occur during unoccupied hours.
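A sketch of how a cumulative frequency like the one in Fig. 9 can be built is given below; the hour ranges used to define the periods of the day are assumptions for illustration and may differ from the occupancy schedule of Sect. 2.2.

```python
# Sketch: cumulative frequency of the operating temperature by period of day
# (the kind of curve in Fig. 9). The period boundaries are assumed for
# illustration and may not match the occupancy schedule of Sect. 2.2.
import numpy as np
import pandas as pd

def cumulative_frequency(t_op: pd.Series, step: float = 0.5) -> pd.DataFrame:
    """Percent of hours of each period at or below each temperature level."""
    period = pd.cut(t_op.index.hour, bins=[0, 6, 12, 18, 24], right=False,
                    labels=["night", "morning", "afternoon", "evening"])
    levels = np.arange(np.floor(t_op.min()), np.ceil(t_op.max()) + step, step)
    table = {}
    for name, grp in t_op.groupby(period, observed=True):
        table[name] = [100 * (grp <= lv).mean() for lv in levels]
    return pd.DataFrame(table, index=levels)

idx = pd.date_range("2019-02-11", periods=24 * 30, freq="h")
rng = np.random.default_rng(1)
t_op = pd.Series(30 + 3 * np.sin((idx.hour - 9) / 24 * 2 * np.pi)
                 + rng.normal(0, 0.5, len(idx)), index=idx)
print(cumulative_frequency(t_op).tail())
```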

Fig. 8. Monitored indoor air temperature of the dwelling, from February 11th to June 17th

Figure 10 shows that, during the occupancy period, 20% of the hours are outside the limits accepted by ASHRAE 55 and 4% are above the limit of EN 15251. The ASHRAE 55 acceptability criterion states that a thermal environment is suitable when the operating temperature is within the 80% acceptability range, whereas EN 15251 allows a deviation of up to 5% of the hours. This means that the occupants are subjected to thermal stress according to ASHRAE 55, while for EN 15251 the thermal environment of the dwelling is acceptable. If a full-occupancy scenario is considered, due to particular conditions such as the confinement caused by the COVID-19 health emergency, the operating temperature is above the maximum value defined by ASHRAE 55 for 36% of the time and above that established in EN 15251 for 14% of the time. These conditions are not acceptable under either standard. According to the evaluation methodologies for NEC compliance [13], the dwelling complies with current energy efficiency standards despite the built extension. However, data collected in the hottest months showed that the air temperature inside the dwelling varies between 25 °C and 36 °C and is always higher than the outside air temperature, even at night. This behavior was also observed by Forero and Hechavarría [27], who monitored three VIS with similar characteristics in "Socio Vivienda 2" (Guayaquil); the maximum and minimum temperatures registered were 39.6 °C and 24.8 °C, respectively. This shows that the envelope of this type of dwelling has an inefficient thermal behavior, which is observed in its main components.
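The exceedance percentages above follow from comparing the operating temperature with the adaptive upper limit of each standard. The sketch below uses the commonly published adaptive expressions (ASHRAE 55: 0.31·Tout + 17.8 with a +3.5 K band for 80% acceptability; EN 15251 Category II: 0.33·Trm + 18.8 with a +3 K band); whether these exact category bands were used in this study is an assumption.

```python
# Sketch: adaptive comfort upper limits and exceedance statistics. The limit
# expressions are the commonly published adaptive-model forms; the category
# bands chosen here (80% / Cat. II) are an assumption, not taken from the paper.
import numpy as np

def ashrae55_upper_limit(t_out_monthly_mean: float) -> float:
    """ASHRAE 55 adaptive comfort, 80% acceptability upper limit (degC)."""
    return 0.31 * t_out_monthly_mean + 17.8 + 3.5

def en15251_upper_limit(t_running_mean: float) -> float:
    """EN 15251 adaptive comfort, Category II upper limit (degC)."""
    return 0.33 * t_running_mean + 18.8 + 3.0

def exceedance_percent(t_op, limit: float) -> float:
    """Percent of hours with operating temperature above the limit."""
    t_op = np.asarray(t_op, dtype=float)
    return 100.0 * np.mean(t_op > limit)

hourly_t_op = [28.5, 29.0, 30.5, 32.0, 31.0, 29.5, 27.0, 26.5]   # example hours
limit_ashrae = ashrae55_upper_limit(27.5)    # about 29.8 degC for Tout = 27.5
limit_en = en15251_upper_limit(27.5)         # about 30.9 degC
print(round(limit_ashrae, 1), round(limit_en, 1))
print(exceedance_percent(hourly_t_op, limit_ashrae), "% above ASHRAE 55 limit")
```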



Fig. 9. Cumulative frequency of the operating temperature (Top) during the morning, afternoon and night periods

Fig. 10. Discomfort hours, from February 11th to June 17th

The original internal surface of the roof reached values of up to 60 °C, while in the extended zone the maximum value was 78 °C. In addition, the wall temperature fluctuated between 23 °C and 46 °C. These values are similar to those recorded by Forero and Hechavarría, who, using thermographic images, recorded roof internal surface temperatures higher than 60 °C and above 35 °C on the main façade walls. This confirms that dwellings with similar features have the same thermal behavior as the VIS studied. Such behavior generates a high heat exchange between the dwelling and the environment [28] and, consequently, a negative impact on the thermal environment of the occupants.

The lack of criteria to evaluate the quality of thermal environments in current standards means that acceptability is freely defined. Gaudry et al. [29] defined a comfort range between 22 °C and 26 °C throughout the year. In another study, Forero and Hechavarría used Auliciems' theory and Fanger's adaptive model to define a comfort range between 24 °C and 29 °C. However, in naturally ventilated buildings the comfort temperature fluctuates depending on the outside temperature [24], so it is not ideal to define permanent comfort ranges for all seasons of the year. In this context, in this study the maximum limit of internal temperature defined by ASHRAE 55 and EN 15251 is around 29.5 °C and 31.6 °C, respectively, as shown in Fig. 8. During the study period, 20% of the occupancy hours exceeded the comfort limit defined by ASHRAE 55, which establishes that the thermal environment is not acceptable. On the contrary, based on the EN 15251 standard, the dwelling has an acceptable thermal environment, since the operative temperature exceeded the established limit in only 4% of the hours. This discrepancy is due to the fact that the European standard defines building categories for which the comfort limits are less stringent; in this way, it avoids unnecessarily penalizing buildings with natural ventilation or low energy consumption [16]. Although the comfort limits of the EN 15251 standard are broader, when full occupancy of the dwelling is considered the operating temperature exceeds the established limit in 14% of the hours. Considering that the maximum percentage allowed is 5%, the dwelling has an unacceptable thermal environment that generates high thermal stress for the occupants, especially after midday. This scenario occurs in circumstances such as confinement due to the COVID-19 health emergency, holidays on the Ecuadorian coast, or the permanent presence of at least one occupant, as reported in [10] for this type of dwelling.

4 Conclusions

The analyzed dwelling, despite complying with current energy efficiency standards, does not guarantee an adequate thermal environment for the occupants according to the ASHRAE 55 and EN 15251 standards. In addition, surface temperature monitoring showed that the envelope does not adapt to the local climate conditions, due to the type of materials used in its construction. This makes evident the importance of having standards adapted to local conditions to guide the construction of dwellings, considering the indoor thermal environment and the limited access to air-conditioning technologies, especially for the vulnerable population in very hot and humid climates. On the other hand, due to the high percentage of time in which the air temperature exceeds the maximum comfort limits, it is necessary to evaluate the severity and occurrence of such events in order to identify the risk of overheating in dwellings. It is strongly recommended that future studies include a larger sample size, as well as radiant temperature and air velocity measurements to obtain the operative temperature, and assess the thermal perception of the occupants through surveys. This will allow the identification of comfort models adapted to local conditions.

References

1. Silvero, F., Lops, C., Montelpare, S., Rodrigues, F.: Impact assessment of climate change on buildings in Paraguay—overheating risk under different future climate scenarios. Build. Simul. 12(6), 943–960 (2019). https://doi.org/10.1007/s12273-019-0532-6
2. Mazzone, A.: Thermal comfort and cooling strategies in the Brazilian Amazon. An assessment of the concept of fuel poverty in tropical climates. Energy Policy 139, 111256 (2020). https://doi.org/10.1016/j.enpol.2020.111256



3. IEA: The Future of Cooling – Opportunities for energy-efficient air conditioning. https://www.iea.org/reports/the-future-of-cooling (2018)
4. Valderrama-Ulloa, C., Silva-Castillo, L., Sandoval-Grandi, C., Robles-Calderon, C., Rouault, F.: Indoor environmental quality in Latin American buildings: a systematic literature review. Sustainability 12(2), 643 (2020). https://doi.org/10.3390/su12020643
5. Giraldo-Castañeda, W., Czajkowski, J.D., Gómez, A.F.: Confort térmico en vivienda social multifamiliar de clima cálido en Colombia. Rev. Arquit. 23(1), 115–124 (2021). https://doi.org/10.14718/RevArq.2021.2938
6. Castillo, E., Mite, J.: Influencia de los materiales de la envolvente en el confort térmico de las viviendas. Programa Mucho Lote II, Guayaquil. Univ. y Soc. 11(4), 303–309 (2019). http://scielo.sld.cu/pdf/rus/v11n4/2218-3620-rus-11-04-303.pdf
7. Gallardo, A., Palme, M., Beltrán, D., Lobato, A., Villacreses, G.: Analysis and optimization of the thermal performance of social housing construction materials in Ecuador. In: 32nd International Conference on Passive and Low Energy Architecture. Cities, Buildings, People: Towards Regenerative Environments; Passive and Low Energy Architecture (PLEA), pp. 360–366 (2016)
8. ANSI/ASHRAE: Standard 55. Thermal Environmental Conditions for Human Occupancy (2017)
9. Romero Espinosa, H.S., Vallejo-Coral, E.C., Ortega López, M.D., Martínez-Gómez, J.: Thermal comfort evaluation in a building with phase change materials in different Ecuadorian climatic zones. In: Botto-Tobar, M., Zambrano Vizuete, M., Díaz Cadena, A. (eds.) CI3 2020. AISC, vol. 1277, pp. 390–402. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-60467-7_32
10. Delgado-Gutierrez, E., Canivell, J., Bienvenido-Huertas, D., Rubio-Bellido, C., Delgado-Gutierrez, D.: Ecuadorian social housing: energetic analysis based on thermal comfort to reduce energy poverty. In: Rubio-Bellido, C., Solis-Guzman, J. (eds.) Energy Poverty Alleviation, pp. 209–224. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-91084-6_9
11. Espinosa, C., Cortés, A.: Confort higro-térmico en vivienda social y la percepción del habitante. INVI 30(85), 227–242 (2015). https://doi.org/10.4067/s0718-83582015000300008
12. Medina, J.M., Rodriguez, C.M., Coronado, M.C., Garcia, L.M.: Scoping review of thermal comfort research in Colombia. Buildings 11(6), 1–27 (2021). https://doi.org/10.3390/buildings11060232
13. Ministerio de Desarrollo Urbano y Vivienda (MIDUVI): Eficiencia energética en edificaciones residenciales (NEC-HS-EE). https://www.habitatyvivienda.gob.ec/documentos-normativosnec-norma-ecuatoriana-de-la-construccion/ (2018)
14. Escandón, R., Suárez, R., José, J.: Protocol for the energy behaviour assessment of social housing stock: the case of southern Europe. Energy Procedia 96, 907–915 (2016). https://doi.org/10.1016/j.egypro.2016.09.164
15. Asociación Española de Normalización y Certificación (AENOR): UNE-EN 15251. Parámetros del ambiente interior a considerar para el diseño y la evaluación de la eficiencia energética de edificios incluyendo la calidad de aire interior, condiciones térmicas y ruido. Madrid (2008)
16. Chartered Institution of Building Services Engineers (CIBSE): The limits of thermal comfort: avoiding overheating in European buildings. CIBSE Technical Memorandum 52 (TM52:2013). Great Britain (2013)
17. MIDUVI: Actualización de prioridad del proyecto: 'Socio vivienda'. https://www.habitatyvivienda.gob.ec/wp-content/uploads/downloads/2015/06/PROYECTO-SOCIO-VIVIENDA.pdf (2014)
18. Godoy-Vaca, L., Vallejo-Coral, E.C., Martínez-Gómez, J., Orozco, M., Villacreses, G.: Predicted medium vote thermal comfort analysis applying energy simulations with phase change materials for very hot-humid climates in social housing in Ecuador. Sustainability 13(3), 1257 (2021). https://doi.org/10.3390/su13031257
19. Litardo, J., Hidalgo-León, R., Coronel, P., Damian, A., Macías, J., Soriano, G.: Dehumidification Strategies to Improve Energy Use at Retailers: A Case Study of a Supermarket Located in Guayaquil, Ecuador. In: Proceedings of the ASME 2020 International Mechanical Engineering Congress and Exposition, vol. 8: Energy, V008T08A031. ASME, Virtual, 16–19 Nov 2020. https://doi.org/10.1115/IMECE2020-23930
20. Instituto Nacional de Meteorología e Hidrología (INAMHI): Catalogo de datos abiertos. https://www.datosabiertos.gob.ec/dataset/?organization=instituto-nacional-de-meteorologia-e-hidrologia-inamhi (2021)
21. ASTM C1046: Standard Practice for In-Situ Measurement of Heat Flux and Temperature on Building Envelope Components (2013)
22. ISO 9869-1: Thermal insulation-Building elements-In situ measurement of thermal resistance and thermal transmittance (2014)
23. Giancola, E., Soutullo, S., Olmedo, R., Heras, M.R.: Evaluating rehabilitation of the social housing envelope: experimental assessment of thermal indoor improvements during actual operating conditions in dry hot climate, a case study. Energy Build. 75, 264–271 (2014). https://doi.org/10.1016/j.enbuild.2014.02.010
24. Gamero-Salinas, J.C., Monge-Barrio, A., Sánchez-Ostiz, A.: Overheating risk assessment of different dwellings during the hottest season of a warm tropical climate. Build. Environ. 171, 106664 (2020). https://doi.org/10.1016/j.buildenv.2020.106664
25. de Dear, R., Brager, G.: Developing an adaptive model of thermal comfort and preference. ASHRAE Trans. 104(1), 145–167 (1998). https://escholarship.org/uc/item/4qq2p9c6
26. Vellei, M., Herrera, M., Fosas, D., Natarajan, S.: The influence of relative humidity on adaptive thermal comfort. Build. Environ. 124, 171–185 (2017). https://doi.org/10.1016/j.buildenv.2017.08.005
27. Forero, B., Hechavarría, J.: Análisis de las condiciones de confort térmico en el interior de las viviendas del complejo habitacional Socio Vivienda 2, etapa 1, en la ciudad de Guayaquil, Ecuador. Guayaquil, Ecuador (2015)
28. Bhikhoo, N., Hashemi, A., Cruickshank, H.: Improving thermal comfort of low-income housing in Thailand through passive design strategies. Sustainability 9(8), 1–23 (2017). https://doi.org/10.3390/su9081440
29. Gaudry, K.-H., Godoy-Vaca, L., Espinoza, S., Fernández, G., Lobato-Cordero, A.: Normativas de energía en edificaciones ante el cambio climático. ACI Av. en Cienc. e Ing. 11(18), 154–171 (2019). https://doi.org/10.18272/aci.v11i2.1285

Implications of Spraying Powder Paint

Paúl Caza(B), Díaz Rodrigo, Víctor López, Cruz Patricio, and Villarreal Pamela

Instituto Superior Tecnológico Vida Nueva, Quito, Ecuador
[email protected]

Abstract. Following the ISO 14001 approach to continuous improvement of industrial safety processes, the implementation of a particle collection system makes it possible to expand production activities while reducing costs and production times, thereby increasing productivity. This requires a diagnosis of the paint shop, in which the environmental pollution parameters can be identified and controlled through the analysis of airborne particles in the electrostatic painting production area. The objective of this research is to design a cyclone collection system to reduce health effects on the operators and, consequently, the pollution in the workshop and the environment.

Keywords: Industrial safety · Cyclone · Quality control · Environment · Temperature · Pressure

1 Introduction

In the spraying method, the powder paint passes through pressurized pipes until it reaches the electrostatic gun; once the powder is fluidized, the pressure generated by pumps or compressors is transferred by regulating the concentration of the air and dust mixture. This parameter is essential for controlling the performance, the application and the correct thickness on the metal surface to be coated [1]. Spraying is an electrical charging process, since without it the dust does not adhere to the metal surface; the dust-laden cloud generates an electromagnetic field between the element and the paint, closing a circuit between the base material and the electrostatic gun [2]. The spraying of powder paint has various implications for operators, such as electrostatic discharges, clogging of skin pores and respiratory problems. Therefore, the objective of this research is to design a cyclone collection system to reduce health effects on operators and, consequently, the pollution in the workshop and the environment.

2 Pollution

2.1 Air Pollution

Under European air quality legislation, an air pollutant is defined as any substance present in ambient air which may cause harmful effects on human health and the environment [3].



Pollutants can be varied in nature; they come from different emitting sources and produce a negative environmental impact, affecting the atmosphere to a greater or lesser degree, either directly or by transforming into various chemical species through reactions with sunlight and other secondary components. According to María Dolores Cima, air pollutants can therefore be classified taking into account different aspects [4]. In general terms, the main pollutants found in large quantities in the environment can be characterized by their main emission sources and their effects on the environment and human health. Saturated hydrocarbon derivatives are anthropogenic pollutants generated by the manufacture and use of refrigerants, solvents, sterilizers, etc., and cause destruction of the ozone layer. Carbon dioxide and carbon monoxide are produced from natural sources (respiration of living beings, degradation of organic matter and ocean emissions) and anthropogenic sources (burning of fuels and high-temperature reactions, such as in blast furnaces); they can cause death by replacing oxygen in the blood and act as greenhouse gases. Sedimentable particles such as dust (diameter > 10 µm), together with aerosols and fogs (diameter < 10 µm) [5], are pollutants generated by natural sources (soil dust and gaseous emissions) and anthropogenic sources (fossil fuel burning, industrial processes, traffic, incineration, waste); they cause respiratory diseases and interfere with photosynthesis [6]. The negative effects that air pollutants can produce depend, among other aspects, on the region of the atmosphere reached by these emissions.

Air Quality. Air quality is an expression related to pure air, which is considered to contain only natural gases in its constitution [7]. In the field of environmental management, air quality is affected by the presence of pollutants in the atmosphere, either in the form of gases or particles. Current regulations establish, within the air quality parameters, the maximum concentration ranges allowed in the atmosphere [8], which must not be exceeded if the health of operators and the environment is to be protected. The following controls may be carried out:

• Decreasing the percentage of pollutants from the beginning of the process.
• Using appropriate technologies to retain and remove contaminants according to particle size.

Extraction towers alone do not solve the presence of air pollutants, because the pollutants can be dispersed and displaced to other places. Air monitoring is a basic tool in control programs to determine which pollutants are present in the air, their origin and in what concentration.

2.2 Treatment of Particles

Particles dispersed in a gas stream can be separated by various techniques, according to their properties [9]:



• Diameter: determines the effectiveness and efficiency of particle retention.
• Density: determines, together with the diameter, the ability of the particle to move.
• Surface properties: for example, the resistivity, or ability of the particle to hold electrical charges.
• Properties of the medium: such as dispersing and wetting behavior, which affect the surface properties.

There are different types of mechanisms that carry out the filtration and retention of particles, for example through gravitational, centrifugal or electrostatic forces. The dry treatment of particles is classified into mechanical separators and filtration processes, which involve different drives and operations but share the common purpose of extracting the dust dispersed in the air.

Mechanical Processes. Mechanical processes extract air particles by dry means; their elements, drives and devices are mechanically driven and are classified into sedimentation chamber processes, cyclone processes and direct interception (fabric filters).

Sedimentation Chamber. This system serves as a preliminary step in the purification or finer classification of dust particles larger than 50 µm. Its operation lies in the entrance of the dust-laden air into the chamber, as indicated in Fig. 1; the air speed decreases and, because of the size and weight of each solid, gravity causes some particles to leave the air stream and fall to the bottom of the container, thus achieving the classification or filtering of the particles suspended in the air (Fig. 1).

Fig. 1. Sedimentation chamber system [3]

Cyclone. The cyclone is one of the most frequently used mechanical dust collection systems. Cyclones remove particles from the air based on the inertial impaction principle, generated by a centrifugal force. This mechanism is derived from the sedimentation chamber, in which the gravitational acceleration is substituted by the application of a centrifugal acceleration.



Cyclones are suitable for separating particles with diameters greater than 5 µm and, in some cases, even smaller particles. In operation, the air contaminated with solid particles enters the cyclone and follows a downward spiral path on the outer side and an upward path on the inner side.

Fig. 2. Diagram of the operation of a cyclone [10]

Direct Interception or Fabric Filters. The filtering process takes place by direct interception, impact or inertia [11], so it can separate a wide range of dust types with diameters of 10 µm or less and retain the dangerous pollutants present in the air. Typical efficiency in new equipment is between 99 and 99.9%, while non-modern equipment has an actual operating efficiency ranging from 95% to 99%, which makes this one of the most efficient air extraction and filtering mechanisms among those mentioned. In general, the efficiency of the fabric-filter direct interception system increases with higher filtering speed and particle size (Fig. 3).

Filtration Processes. Filtration processes are systems that likewise allow the separation or filtering of air contaminated with particles or solid elements by dry treatment; the main processes are the electrostatic precipitator and deep filtration.

Electrostatic Precipitator. Its filtering mechanism is generated by an electrostatic force induced by electrical charges produced through a collector electrode of opposite polarity, as can be seen in Fig. 4. It is used to capture particulate with diameters between 0.05–20 µm, reaching a high efficiency; however, for particles with high electrical resistivity the costs increase and its operation is inefficient.



Fig. 3. Operation of the fabric filter [10]

Fig. 4. Diagram of an electrostatic filter [18]

Deep Filtration. Deep filtration uses a coarse filter medium, such as a cellulose medium; this "depth" matrix retains the particles suspended in the polluted air. Depth filtration is used mainly in product polishing, clarification of oils, blood fractionation in the medical industry, filtration of insulating oil and removal of water. The mechanism retains solids throughout the depth of the medium: the solids follow a turbulent path when passing through the filtration media, and energy is lost because some particles are retained in the matrix. The adhesion of the retained particles results from the physical attraction between the molecules, the surface and the filter medium [12–15], due to the electrostatic attraction that occurs in the system (Fig. 5).



Fig. 5. Deep filtration absorption system [20]

3 Methodology

Controlling air pollution means eliminating or reducing the concentration of pollutants to an acceptable level in order to protect the health of people, ecosystems and materials [16]. Therefore, controls can be established to reduce the generation of pollutants at the source and, once they are produced, to retain and eliminate them using the most appropriate technologies for each specific case. According to Alexander Valencia, chimneys alone are not a solution to air pollution because they only disperse and move the pollution elsewhere [17]. The factors to be considered in a control strategy are:

• The type of contaminant or particulate.
• The nature of the source to be controlled.
• The conditions affecting the pollutants (concentration, temperature, pressure, humidity, etc.), the gaseous flow (continuous or discontinuous) and the type of process (open or closed).
• The legal requirements established.
• The treatments necessary for their elimination and/or reduction.

Air monitoring can be performed to control pollutants both at the emission source and in the ambient air, taking into account that pollutants are in higher concentration when measured at the source. In any case, the first stage when defining a control program is to make a diagnosis of the situation through point measurements [18]. Usually, these measurements focus on detecting and quantifying the air quality reference pollutants (see Table 1), although, depending on the type of production process, the purpose of the monitoring may be to analyze other pollutants. There are several methodologies for estimating emissions.



Source Sampling. Sampling at the source provides more reliable and representative data, although it is not always possible:

E_T = \sum_i E_i    (1)

E_i = c_i \times Q \times T    (2)

where E_T is the total emission (mass/time), E_i is the emission of a pollutant (mass/time), c_i is the concentration of the contaminant in the gas stream, Q is the volumetric flow and T is the time conversion factor.

Emission Factors. The emission factor expresses the amount of pollutant released into the atmosphere per unit of activity data (e.g. kg SO2/l fuel, kg hydrocarbons/inhabitant-year). It provides the best approximation for estimating emissions:

E = A \times F_e \times \left(1 - \dfrac{E_c}{100}\right)    (3)

where E is the emission rate (mass/time), A is the activity rate, F_e is the emission factor (mass/unit of activity) and E_c is the control efficiency (%).

Material Balance. This approach estimates emissions when no other methods are available and is useful for point and area sources. It assumes that all the contaminant is emitted (e.g. any solvent is evaporated). It is versatile, but requires a deep knowledge of the process:

E = \dfrac{M_i - M_f}{T}    (4)

where E is the emission rate (mass/time), M_i is the mass of contaminant at the inlet, M_f is the mass of contaminant at the outlet and T is the time period considered.
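The three estimation approaches can be written directly from Eqs. (1)–(4); the sketch below does so, with function arguments mirroring the symbols in the text and purely illustrative numbers (any consistent unit system works).

```python
# Sketch of Eqs. (1)-(4): total emission from source sampling, the emission-
# factor estimate, and the material balance. Units must simply be consistent
# (e.g. g/m3, m3/h and h give g/h); the example numbers are illustrative only.
def emission_from_sampling(c_i: float, q: float, t: float) -> float:
    """Eq. (2): E_i = c_i * Q * T for one pollutant."""
    return c_i * q * t

def total_emission(emissions) -> float:
    """Eq. (1): E_T = sum of the individual pollutant emissions E_i."""
    return sum(emissions)

def emission_from_factor(a: float, fe: float, ec_percent: float) -> float:
    """Eq. (3): E = A * Fe * (1 - Ec/100)."""
    return a * fe * (1.0 - ec_percent / 100.0)

def emission_material_balance(m_in: float, m_out: float, t: float) -> float:
    """Eq. (4): E = (Mi - Mf) / T, assuming all unrecovered material is emitted."""
    return (m_in - m_out) / t

e1 = emission_from_sampling(c_i=0.8, q=1200.0, t=1.0)    # g/m3 * m3/h * h -> g
e2 = emission_from_factor(a=50.0, fe=2.5, ec_percent=90.0)
print(total_emission([e1, e2]), emission_material_balance(5.0, 4.2, 1.0))
```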

3.1 Dimensional Analysis of the Cyclone

Considering the problems identified in this research, experimental and applied research is used, given the technological contribution of a design oriented to improving comfort within the industrial workshop. In this way, the research aims to respond to the problem initially raised.



Dimensional Parameters. In order to analyze the technical characteristics of the dimensions of the dust-particle collection system, the primary variables that allow an effective design are considered, taking into account the following parameters:

• The air flow with which this type of cyclone works is in the range of 0.5 to 12 m³/s.
• The cyclone diameter is another important factor, since the effectiveness of the system is related to this parameter: the smaller the diameter, the higher the pressure in the system and the greater the efficiency.
• The working temperature depends on the location; in this research an ambient temperature of 26 °C, corresponding to the city of Quito, is considered.
• The inlet velocity is related to the technical values of this type of cyclone and varies between 15 and 27 m/s.
• The concentration of particulate matter is in the range of 2 to 230 g/m³, according to the type of cyclone to be designed.

In the analysis of the efficiency of the cyclone to be implemented inside the curing furnace, two fundamental aspects must be taken into account: the cyclone inlet velocity and the jumping (saltation) speed of the particulate material. When the ratio between them is less than 1.35, there is no re-suspension of the particulate material inside the cyclone; for this it is necessary to determine the equivalent speed of the material and use this value in the constructive process.

Material Selection. To design the structure of the cyclone, some important aspects must be considered in order to obtain a design adjusted to all the requirements and dimensioning that guarantee correct operation [19]; they are described below:

• For the calculation of all structural elements, all design parameters and the forces generated by static and variable loads shall be taken into account, considering that the cyclone has ventilation-type systems for collecting dust particles.
• The structure of the particle collector must have a safe and practical design that allows the particles generated during the application of electrostatic paint on the different mechanical elements to be collected without polluting the environment or harming the health of the operator (Fig. 6).



Fig. 6. Selection and sizing of the particle collection system proposed in this research.

The materials selected for the construction and assembly of the structure must present excellent galvanic compatibility to avoid corrosion between them, because the structure will operate with chemicals. The assembly of the structure will be made with movable and welded joints to ensure a design that allows easy mobilization.

Load Analysis. For the finite element analysis, the fixed conditions of the cyclone are considered, as well as an internal load on the cyclone's casing due to the extracted dust. In addition, a triangular mesh was used in order to obtain values as close to reality as possible, considering turbulence loads and treating the air as an incompressible fluid (Figs. 7 and 8).

Fig. 7. Movement restrictions and mesh type used in the simulation for the load analysis of the collected particles.



Fig. 8. Loads applied to the cyclone in the simulation carried out in this research.

The results obtained show a minimal deformation of the sheet metal, which will not cause fractures or deformation of the material during operation due to the material hitting the internal walls of the cyclone (Fig. 9).

Fig. 9. Deformation of the particle collector under the applied loads, shown according to the color scale.

Equivalent Speed. The equivalent velocity allows the number of particles entering the cyclone to be determined; its calculation must take into account factors such as the viscosity, the density of the particulate material and the working temperature. It is determined by the following equation:

W = \sqrt[3]{\dfrac{4\, g\, \mu\, (\rho_p - \rho)}{3\, \rho^{2}}}    (5)

Jumping Speed. The inlet velocity is considered the primary factor: if it decreases too much, a high efficiency is not reached, because the centrifugal force is neutralized. For this reason the jumping (saltation) speed, which allows the particles to remain stored inside the cyclone, is used. The jumping speed of the particulate collection system is established from the inlet diameter, the initial velocity and other data, by means of the following equation:

V_s = \dfrac{4.9\, W\, K_b^{0.4}\, D_c^{0.067}\, \sqrt[3]{V_i^{2}}}{\sqrt[3]{1 - K_b}}    (6)

Cyclone Efficiency. The cyclone efficiency is determined with the Leith and Licht equations, which estimate how close the particle removal comes to the stipulated 100%; this depends on the material properties and the dimensions of the cyclone:

\eta_i = 1 - \exp\left[-2\left(\dfrac{G\, T_i\, Q\, (n+1)}{D_c^{3}}\right)^{\frac{0.5}{n+1}}\right]    (7)

Pressure Drop. One of the most relevant aspects in the design of the cyclone intended for this particulate application is the pressure drop, which can be determined by calculating the number of velocity heads with the following equation:

N_H = K\, \dfrac{a \times b}{D_s^{2}}    (8)
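Taken together, Eqs. (5)–(8) define the core sizing checks. The sketch below implements them with the symbols used in the text (W, Vs, η_i, N_H); the input values (air and powder properties, Kb, Dc, G, Ti, Q, n, K, a, b, Ds) are placeholders for illustration, not the parameters of the cyclone actually built in this work.

```python
# Sketch of Eqs. (5)-(8): equivalent velocity W, jumping (saltation) speed Vs,
# Leith-Licht fractional efficiency, and the number of velocity heads N_H used
# for the pressure drop. All numeric inputs below are illustrative placeholders.
import math

def equivalent_velocity(mu, rho_p, rho, g=9.81):
    """Eq. (5): W = ((4*g*mu*(rho_p - rho)) / (3*rho**2)) ** (1/3)."""
    return ((4.0 * g * mu * (rho_p - rho)) / (3.0 * rho ** 2)) ** (1.0 / 3.0)

def jumping_speed(w, kb, dc, vi):
    """Eq. (6): Vs = 4.9*W*Kb**0.4*Dc**0.067*Vi**(2/3) / (1 - Kb)**(1/3)."""
    return 4.9 * w * kb ** 0.4 * dc ** 0.067 * vi ** (2.0 / 3.0) \
        / (1.0 - kb) ** (1.0 / 3.0)

def leith_licht_efficiency(g_factor, t_i, q, dc, n):
    """Eq. (7): eta_i = 1 - exp(-2*(G*Ti*Q*(n+1)/Dc**3)**(0.5/(n+1)))."""
    return 1.0 - math.exp(-2.0 * (g_factor * t_i * q * (n + 1.0) / dc ** 3)
                          ** (0.5 / (n + 1.0)))

def velocity_heads(k, a, b, ds):
    """Eq. (8): N_H = K * a * b / Ds**2."""
    return k * a * b / ds ** 2

# Illustrative values (all assumed): air at about 26 degC and an epoxy powder.
w = equivalent_velocity(mu=1.85e-5, rho_p=1500.0, rho=1.16)
vs = jumping_speed(w, kb=0.25, dc=0.4, vi=20.0)
print(f"W = {w:.3f} m/s, Vs = {vs:.2f} m/s, Vi/Vs = {20.0 / vs:.2f}")
print("eta_i =", round(leith_licht_efficiency(73.0, 0.01, 0.35, 0.4, 0.6), 3))
print("N_H =", velocity_heads(16.0, 0.2, 0.1, 0.2))
```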

4 Discussion and Results

After the technical analysis carried out throughout this research, the different dust collection systems that can be applied as a preventive measure against the ergonomic risk for operators and the pollution of the environment were studied. Within the design and construction process of the dust collector, the optimal option that meets the technical operating requirements was selected, so that production reduces the damage caused to the operator's health by exposure to powder paint and the pollution of the environment, considering the calculation of the parameters that influence air quality.

From the components that establish the operation of the equipment, it was determined that the cyclone collection system allows the recovery and reuse of the powder paint, in contrast to other processes such as sleeve- and fabric-type filters which, despite having good retention of the collected material, do not allow the paint to be reused because of the contamination generated by mixing with other particles present in the air. Since the chemical composition of powder paints can be modified according to need, their behavior depends completely on their components, so the materials selected for the design of the particle collector must withstand the mechanical, chemical and physical agents involved, extending the service life of the system to be implemented in the electrostatic painting process.

In the CFD simulation performed on the cyclone system, it was determined that there is no laminar flow within the collection process, which allows a particle accumulation system to be predicted; this facilitates the recovery of the powder paint during its application on the surface by parameterizing the spraying equipment. It was also determined that the cyclone will be driven by a blower that generates a centrifugal motion through its blades; driven by the pressure of the motor, it provides a high operating range and low energy consumption, thus contributing to reducing the implications of powder paint for operators and the environment. Another primary design criterion is the easy handling and maintenance of the equipment, considering that the powder paint application process must be kept operational, with recovery in the storage tank connected to the cyclone, which, through a pneumatic system, allows the paint to be reused.

5 Conclusions

This paper presented the study and multi-criteria selection of four particle treatment systems, analyzed from the mechanical and filtration points of view and compared in terms of production, recovery, maintenance, environmental pollution and ergonomic risk as the main features. The results indicate that there are differences among the acceptance criteria; the TOPSIS method showed that the cyclone-type mechanical system satisfies the analyzed variables to the greatest extent, which is why it is determined to be the optimal alternative for the collection and recovery of powder paint, as well as for reducing health damage to operators. The design of the cyclone system as a collection alternative produced a particle separator in which the flow generated by the blower, driven by air pressure, allows all the particles in the powder paint chamber to be absorbed. It is recommended that future research analyze the air quality in the painting process using both an analytical and an experimental method.

References

1. Anónimo: Contaminación atmosférica (2017)
2. Aránguez, E., et al.: Contaminantes Atmosféricos Y Su Vigilancia. Rev. Esp. Salud Pública 73 (1999)
3. Arciniégas, S.C.: Diagnóstico Y Control De Material Particulado: Partículas Suspendidas Totales Y Fracción Respirable PM 10 (2012)
4. Karina, A., Jorge, M., Carrillo, T.: Proyecto de mejora en los procesos de curado y pintura en una empresa de productos eléctricos. Revista Aristas: Investigación Básica y Aplicada 6(12), 239–243 (2018)
5. Cabina, T.: Nuevas cabinas de aspiración de pintura en polvo evolucionadas - Tecnicabina S.L (2018). https://tecnicabina.com/nueva-cabina-aspiracion-pintura-polvo/. Accessed 20 Jan 2021
6. Company Spectris, O.: Contaminación por partículas y Seguridad Ambiental (2018). https://mx.omega.com/technical-learning/contaminacion-por-particulas-y-seguridad-ambiental.html. Accessed 20 Jan 2021



7. Dalmar Protecciones y Pinturas: Cabinas de pintura en polvo (2018). http://blog.proteccionesypinturas.com/cabinas-de-pintura-en-polvo/. Accessed 20 Jan 2021
8. Directiva 2008/50/CE del Parlamento Europeo y del Consejo, de 21 de mayo de 2008, relativa a la calidad del aire ambiente y a una atmósfera más limpia en Europa. Diario Oficial de la Unión Europea, L 152/1, 11 de junio de 2008 (2008). https://www.boe.es/doue/2008/152/L00001-00044.pdf
9. El sitio de la pintura en polvo: Recuperación de Pintura en Polvo en la Fabricación (2017). http://pinturaenpolvo.org/recuperacion-de-pintura-en-polvo. Accessed 20 Jan 2021
10. Fauchais, P.L., Heberlein, J.V.R., Boulos, M.I.: Thermal Spray Fundamentals: From Powder to Part, pp. 1–1566 (2014). https://doi.org/10.1007/978-0-387-68991-3/COVER
11. He, H., et al.: Effects of oxygen contents on sintering mechanism and sintering-neck growth behaviour of FeCr powder. Powder Technol. 329, 12–18 (2018). https://doi.org/10.1016/J.POWTEC.2018.01.036
12. Industria elaboradora de pinturas: Guía para el control y prevención de la contaminación industrial (2016). http://www.ingenieroambiental.com/4014/pinturas.pdf. Accessed 20 Jan 2021
13. Instituto Ecuatoriano de Normalización (INEN): Norma Ecuatoriana de Calidad del Aire (2011)
14. IPM Integraciones y Proyectos Metálicos S.A. (n.d.): Cabinas para aplicar la pintura electrostática - Metalmecánica, Proyectos Metálicos y Pintura Electrostática. http://ipmsadecv.com/cabinas-para-aplicar-la-pintura-electrostatica/. Accessed 20 Jan 2021
15. Khezri, S.M., Shariat, S.M., Tabibian, S.: Evaluation of extracting titanium dioxide from water-based paint sludge in auto-manufacturing industries and its application in paint production. Toxicol. Ind. Health 29(8), 697–703 (2013). https://doi.org/10.1177/0748233711430977
16. Khezri, S.M., Shariat, S.M., Tabibian, S.: Reduction of pollutants in painting operation and suggestion of an optimal technique for extracting titanium dioxide from paint sludge in car manufacturing industries-case study (SAIPA). Toxicol. Ind. Health 28(5), 463–469 (2012). https://doi.org/10.1177/0748233711414611
17. Kleine Deters, J., Zalakeviciute, R., Gonzalez, M., Rybarczyk, Y.: Modeling PM 2.5 urban pollution using machine learning and selected meteorological parameters. J. Electr. Comput. Eng. 2017, 1–14 (2017). https://doi.org/10.1155/2017/5106045
18. Koponen, I.K., Koivisto, A.J., Jensen, K.A.: Worker exposure and high time-resolution analyses of process-related submicrometre particle concentrations at mixing stations in two paint factories (2015). https://doi.org/10.1093/annhyg/mev014
19. Ministerio para la Transición Ecológica: Partículas en suspensión (2014). https://www.miteco.gob.es/es/calidad-y-evaluacion-ambiental/temas/atmosfera-y-calidad-del-aire/emisiones/prob-amb/particulas.aspx. Accessed 20 Jan 2021
20. Juan, N., Ugualpe, J.: Diseño de un sistema de extracción de polvo para la empresa INSOMET (división TELARTEC, productora de telas de poli-algodón); perteneciente al Grupo Empresarial Gerardo Ortiz Cía. Ltda (2016)

Heat Transfer Adhesion Factor on Metal Surfaces

Paúl Caza(B), Díaz Rodrigo, Víctor López, and Villarreal Pamela

Instituto Superior Tecnológico Vida Nueva, Quito, Ecuador
[email protected]

Abstract. This document contains the technical analysis and detailed engineering of the design characteristics of a curing oven for metallic materials. The curing oven has a production capacity of one thousand grams per hour, distributed at 100% capacity over the metallic surface. The design process of the resistance oven is complemented with multi-criteria selection matrices based on the TOPSIS method and with a CFD simulation of the heat transfer, which provides the meshing analysis and temperature ranges; by these means, an oven with a modern and functional design that meets the technical and environmental standards is achieved.

Keywords: Oven · Heat transfer · Electrostatic painting · Metallic materials · Resistance

1 Introduction

This work studies the application of heat transfer in the implementation of new technologies within different production lines, improving the mechanical properties of materials through the application of coatings. Heat is the energy that can be transferred from one system to another as a result of a temperature difference; it is used mainly in paint curing ovens, reducing pollution of the environment. The research is based on the study of curing ovens through the application of heat transfer by conduction to metallic materials, improving their technical characteristics with coatings that reduce the wear caused by the corrosion and oxidation generated by environmental and chemical agents.

2 Materials Analysis

Generally, metallic materials are best suited to the process; electrostatic powder coating is also widely used in the manufacture of aluminium products. Electrostatic powder does not require a carrier fluid, so little or no volatile compounds are emitted in the process.




2.1 Metals

Metals are currently obtained synthetically from minerals found in the earth's crust. It is established, then, that metals are all chemical elements whose most relevant characteristics for the electrostatic painting process are electrical conductivity and good heat conductivity (except mercury).

Ferrous Materials. Ferrous metals are considered to be all materials that contain the element iron, combined in the liquid phase with other chemical components that allow the formation of steels and castings. Four types of ferrous or ferric metals can be distinguished: irons, whose carbon content is between 0.01 and 0.03%; steels, with 0.03 to approximately 1.76% carbon; castings, with 1.86 to 6.67% carbon; and finally graphite alloys, obtained above 6.67% carbon. The high melting point of steel (approximately 1600 °C) represents a major advantage in electrolytic coating, since the curing process can be accelerated without losing the characteristics or properties of the coating.

Non-ferrous Materials. Metals such as copper, zinc, lead, tin, aluminium, nickel and manganese are the most representative of this group of materials. Of all those mentioned, the most commercially useful is aluminium: it is an excellent conductor of electricity and a good conductor of heat, and it has commercial utility especially when subjected to harsh environmental conditions.

2.2 Non-metals

The appearance of non-metals varies between solid, liquid and gaseous. They have certain characteristics that make them unique, such as being poor conductors of electric current and heat and having very low melting points; besides being fragile, they are neither ductile nor malleable, so transformation processes such as alloys and mixtures are used.

Polymers. Polymers are macromolecules, which means that their composition contains monomers (small molecules) that need covalent bonds to join together and replicate as many times as necessary. It is possible to obtain synthetic polymers, which have many industrial applications. Electrostatic paints such as epoxy, polyester-TGIC and epoxy/polyester (hybrid) paints are polymers, each with characteristics and properties that make them usable at an industrial level, whether as anticorrosive surface protection, in decoration, or as coatings resistant to moderate temperatures.

Ceramics. Ceramic materials can be considered from two perspectives: traditional ceramic materials (clay, silica and feldspar) and engineering ceramics, consisting of almost pure or pure compounds such as aluminium oxide (Al2O3) and silicon carbide (SiC). Although they resist the curing temperatures in the oven, it is not feasible to coat them by electrostatic painting, since one of their properties is low electrical conduction.



2.3 Coating Processes by Heat Transfer

Electrolytic Process. Corrosion is the process of deterioration of metallic materials, both pure metals and alloys, through chemical and electrochemical reactions. Corrosion has repercussions on the economy, on safety and on the premature wear of materials, so its analysis for mitigation is of great importance in this study [20]. Most corrosion problems are related to electrochemical reactions, which require three elements: the electrodes, a conductive medium (an electrolyte) and the electrical connection between the electrodes. The method of coating materials by means of electrolysis, or electrochemical cells, consists of two metallic conductive electrodes joined by a conductive wire, each submerged in an electrolytic solution. The two solutions are physically separated but can exchange their ions through the salt bridge (a tube filled with a gel soaked in a saturated solution); usually potassium chloride is used, since this substance allows good bidirectional conduction between the two cell solutions, because K+ and Cl- have similar electrophoretic mobility. One of the fundamental surface-coating methods based on electrolysis is galvanizing; traditionally this coating has been considered an effective and durable steel protection system that does not require additional treatment. Galvanizing is the industrial coating commonly used for the surface protection of parts and components exposed to the environment or to chemical agents.

Electrostatic Painting. Electrostatic painting works by means of a phenomenon known as static electricity, which occurs when a solid, structural element or body accumulates an excess of positive or negative charges. The paint is supplied as a dry powder that, when applied and subjected to an increase in temperature, becomes a fluid, giving a fine appearance with a resistant, uniform and excellent-quality finish [5]. The paint applied by this type of sprayer is usually in powder form, but can also be liquid, and is electrostatically charged in different ways. Generally, the system applies a negative electrical charge to the paint during the application process, while other systems charge it through a spray-type transfer generated by the spray gun: the paint is propelled by the gun, rubs against its side and gains static electricity during the movement. According to Metal Actual magazine [19], electrostatic paint is one of the most ecological and efficient alternatives for applying organic coatings to protect metallic materials; its components are one hundred percent dry and free of solvents, among which the following can be mentioned:

• Resins are the base material used to produce the paint; they harden at high temperatures and also give the product gloss and improve its chemical resistance, outdoor durability and heat resistance.



• Hardeners are compounds that, when mixed with the resins, allow them to "cure", that is, to form a coating layer on the piece through the action of temperature.
• Pigments are inert compounds that can be inorganic, organic or metallic; their function is to give the paint its colour, and they are capable of resisting the high baking temperatures to which they are subjected, as well as exposure to solar radiation.
• Fillers are substances whose purpose is to provide mechanical properties such as impact resistance and hardness, among others, improving the characteristics of the product for its application and quality.
• Additives are chemical substances that allow the properties of the binder to be modified (epoxy polyurethane and polyamine can be mentioned, among others), providing the fluidity of the paint for its application.

2.4 Curing Ovens

According to Collanqui [2], curing ovens are used to transfer heat to the products placed inside them, at a temperature higher than that of the environment; their objective is the heating of materials for heat treatments, the drying of materials and the curing of electrostatic paint.

Convection Curing Ovens. Electric-resistance convection ovens generate heat by means of electrical resistors located around the oven; this arrangement allows the temperature to be raised from the outside by means of a coil wrapped with refractory or metallic material. Generally, these ovens are found in applications where the temperature can be controlled according to the type of resistance used.

Ultraviolet Radiation Curing Ovens. Ultraviolet radiation curing ovens are highly energy efficient; however, they are rarely used because this heat transfer process cannot cure all electrostatic paints. For such an oven to cure the paint, special additives are required to generate the proper chemical reaction, but these are very expensive and are only sold on request.

Infrared Panel Curing Ovens. This oven is operated by catalytic infrared panels that work through a chemical reaction between a gas, the catalysing membrane and the oxygen in the air. Internally, the panels have an electrical resistance that heats them before the gas passes through and then exits into the oven chamber. When the gas comes into direct contact with the oxygen, the chemical reaction is generated, which in turn produces thermal radiation without the need for a flame, unlike the convection oven, and without generating environmental pollution. Currently, the oven most widely used for the powder paint curing process is the forced convection oven, due to its low operating cost; its disadvantage with respect to the others is the time it takes to reach the internal oven temperature.



2.5 Conventional Curing Room Systems

Electrical Systems. This type of oven works with electrical resistances and heat transfer by forced convection. Also referred to as electrothermal, these curing rooms conduct the electric current through a resistive element placed around the walls. According to Glick, N., and Shareef, I. [4], there are also drying ovens that allow the temperature to be raised from the outside, through a heater in the form of a coil located in the tube of refractory material. These rooms are essential since they allow the required temperature to be controlled. In addition, this type of oven requires the surrounding air to homogenize the temperature inside it, which calls for a good energy supply system.

Infrared System. This drying system represents a greater technological advance compared to the others, since it has numerous applications with different materials such as metal, wood, polymers and other substrates, achieving great advantages in terms of effectiveness and speed of the drying process. Ruffatto, D. [3] states that infrared systems, consisting of panels that generate electromagnetic radiation, allow heat transfer from these elements to the element, body or coating to be treated. The main energy sources are ultraviolet, radio frequency and infrared. The main advantage of this system is that it does not require air transport by forced convection, compared to other systems; therefore, ovens that work with infrared are more efficient and effective due to their speed and operating cost.

3 Methodology

The question posed at the beginning of the research guided the analysis of the design and modelling of the curing oven, as well as of the optimal materials to be used in its manufacture, through a correct selection considering the mechanical, chemical and thermal properties that determine the adherence of the powder paint on different metallic pieces, by means of a descriptive and experimental methodological study. For the analysis of the adherence of the electrostatic paint on the metallic elements, it is necessary to use thermocouples to determine the internal temperature of the oven, since the design works by means of resistances and heat transfer is therefore a primary factor. The adhesion test makes it possible to analyse the strength of the bond between the surface and the coating, the quality of the process from the preparation of the element, the scratch hardness, and to detect possible discontinuities in the coating, which, if they exist, should be eliminated by applying a new layer or by controlling the oven temperature.

3.1 Materials

The application of electrostatic paints to ferrous-based substrates is expected to provide the best performance per square meter, as well as a more durable finish and corrosion resistance. There is a great variety of resins for the electrostatic paint coating process, such as polyester and epoxy; other products, such as acrylic or polyurethane paints, also meet the characteristics of durability and resistance required for surface protection. However, a series of characteristics make electrostatic painting superior to other surface coating processes. The electrostatic booth has seen significant advances in industrial applications; in the automotive industry, better and more professional finishes are obtained, with greater durability and less waste of residual powder. The surface coating is durable on carbon steels and ductile irons; the powder coating is applied dry and then cured with heat input inside an oven. The electrostatic painting process can be automated in the assembly line, in the cleaning stage and in the powder application stage, and even during the subsequent storage stage; with the appropriate equipment, up to 90% of the residue can be recycled (Table 1).

3.2 Treatments

Table 1. Comparison of surface coating processes

Electrostatic painting
  Advantages: Surface coating thickness varies according to need with a single application; long coating life; suitable for saline or acidic environments; prevents chemical reaction with the oxygen in the air; recycling of a large percentage of electrostatic powder residues; low contamination rate.
  Disadvantages: Medium-complexity infrastructure; exposure to electrostatic dust requires a personal protection system against solid particles; the curing process is based on the control and application of a heat source.

Paints with chemical solvents
  Advantages: Fast application; little operator training; basic compressed-air equipment; surface coating on non-ferrous materials.
  Disadvantages: Expels a large quantity of contaminating particles; possibility of accidents due to the flammability of the solvents, and occupational diseases.

Galvanizing
  Advantages: Long coating life; prevents chemical reaction with the oxygen in the air; suitable for saline or acidic environments.
  Disadvantages: Large infrastructure for the zinc plating process; complex galvanizing process; it is not possible to apply a subsequent coating layer; difficulty in obtaining zinc ingots; contaminating residual liquids.

Nickel plating
  Advantages: Long coating life in acidic environments; highly decorative surface finish.
  Disadvantages: High cost of the electrolytic surface coating process; limited applications.

Chrome plating
  Advantages: Extended coating life in corrosive environments; decorative surface finish; suitable for use in domestic applications.
  Disadvantages: High cost of the electrolytic surface coating process; limited applications; during the chrome plating process the operator is exposed to chemicals such as chromium or acids.

3.3 Ovens

To perform the curing oven selection analysis, the TOPSIS ranking-based discrete multi-criteria decision method was used. TOPSIS selects the best decision alternative by systematically ranking and evaluating the alternatives in terms of their importance and degree of usefulness, considering both an ideal and an anti-ideal solution [11]. The method is based on the idea that the chosen alternative should lie at the shortest distance from a positive ideal alternative, which represents the best case, and at the greatest distance from an anti-ideal alternative, which represents the worst case. The ideal alternative is not necessarily observed: it is built from the best values found across the set of alternatives, and likewise the anti-ideal alternative does not necessarily belong to the set of real alternatives and is built from the worst values of the set. In this case, we have a multi-criteria problem with seven criteria and four alternatives. For the desirable (benefit) criteria, the best alternative is the one that maximizes them, while the criteria that represent something undesirable (cost criteria) should take the lowest possible value. The values assigned to each attribute for every alternative are presented in Table 2.


Table 2. Criteria and alternatives assigned for the selection of the curing oven

Criterion                      Infrared oven   Electrical resistances oven   Burners oven   Convection oven
Production capacity (%)        90              90                            85             70
Energy consumption (kWh)       10              7.26                          5.2            5.4
Environmental pollution (%)    65              10                            90             85
Production (kg)                120             120                           120            120
Investment                     17000           14000                         20000          17000
Maintenance cost ($/year)      2500            1200                          2000           1500
Production volume (g)          800             1000                          500            400

The next step is to generate the normalized matrix, in which each entry equals the value assigned to a criterion divided by the square root of the sum of the squares of that criterion over all the alternatives. Table 3 shows the normalized matrix of the multi-criteria analysis by the TOPSIS method.

Table 3. Normalized matrix obtained in the investigation

Criterion                      Infrared oven   Electrical resistances oven   Burners oven   Convection oven
Production capacity            0.534           0.5347                        0.505          0.415
Energy consumption             0.691           0.502                         0.359          0.373
Environmental pollution        0.463           0.071                         0.642          0.606
Production                     0.5             0.5                           0.5            0.5
Investment                     0.496           0.408                         0.583          0.496
Maintenance cost               0.669           0.321                         0.535          0.401
Production volume              0.558           0.698                         0.349          0.279

To build the weighted normalized decision matrix, a percentage weight is assigned to each criterion according to a scale of importance for the selection of the appropriate equipment; the weighted matrix is obtained by multiplying each column of the normalized matrix by the weight assigned to that criterion, as shown in Table 4. Finally, the Euclidean distances from each alternative to the ideal and anti-ideal solutions are determined. Table 5 shows that the second option, the electrical resistance curing oven, is determined as the best alternative.

Table 4. Weighted normalized matrix obtained in the investigation

Criterion                      Infrared oven   Electrical resistances oven   Burners oven   Convection oven   A. Ideal   A. Anti-ideal
Production capacity (%)        0.144           0.144                         0.136          0.112             0.1122     0.1443
Energy consumption (kWh)       0.187           0.136                         0.097          0.101             0.0971     0.1868
Environmental pollution (%)    0.088           0.014                         0.122          0.115             0.1219     0.0135
Production (kg)                0.055           0.055                         0.055          0.055             0.055      0.055
Investment                     0.045           0.037                         0.053          0.045             0.0525     0.0367
Maintenance cost               0.033           0.016                         0.027          0.020             0.0334     0.0160
Production volume (g)          0.017           0.021                         0.010          0.008             0.0209     0.0083

Table 5. Euclidean distances obtained in the investigation

Alternative                    Positive ideal distance Si+   Negative ideal distance Si-   Performance score Pi   Ranking Ci
Infrared oven                  0.101                         0.077                         0.433                  2
Electrical resistances oven    0.122                         0.053                         0.302                  1
Burners oven                   0.027                         0.142                         0.840                  4
Convection oven                0.021                         0.137                         0.865                  3
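To make the TOPSIS procedure behind Tables 2, 3, 4 and 5 easier to reproduce, a minimal Java sketch is given below. The class and variable names are illustrative, the weight vector is not stated in the paper and is only inferred here by dividing the entries of Table 4 by those of Table 3, and the ideal and anti-ideal points are copied directly from Table 4 so that the authors' own reference-point convention is preserved rather than re-derived.

```java
// Sketch of the TOPSIS steps behind Tables 2-5. The weights are inferred
// approximations (not published by the authors); the ideal and anti-ideal
// points are taken verbatim from Table 4.
public class CuringOvenTopsis {

    public static void main(String[] args) {
        String[] ovens = {"Infrared", "Electrical resistances", "Burners", "Convection"};

        // Decision matrix from Table 2 (rows = ovens, columns = criteria).
        double[][] x = {
            {90, 10.00, 65, 120, 17000, 2500,  800},
            {90,  7.26, 10, 120, 14000, 1200, 1000},
            {85,  5.20, 90, 120, 20000, 2000,  500},
            {70,  5.40, 85, 120, 17000, 1500,  400}
        };
        // Approximate weights inferred from Tables 3 and 4 (assumed, not reported).
        double[] w = {0.27, 0.27, 0.19, 0.11, 0.09, 0.05, 0.03};

        // Ideal and anti-ideal points as listed in Table 4.
        double[] ideal = {0.1122, 0.0971, 0.1219, 0.055, 0.0525, 0.0334, 0.0209};
        double[] anti  = {0.1443, 0.1868, 0.0135, 0.055, 0.0367, 0.0160, 0.0083};

        int m = x.length, n = w.length;
        double[][] v = new double[m][n];

        // Steps 1-2: vector normalization (Table 3) and weighting (Table 4).
        for (int j = 0; j < n; j++) {
            double norm = 0;
            for (int i = 0; i < m; i++) norm += x[i][j] * x[i][j];
            norm = Math.sqrt(norm);
            for (int i = 0; i < m; i++) v[i][j] = w[j] * x[i][j] / norm;
        }

        // Steps 3-4: Euclidean distances to both reference points and the score Pi.
        for (int i = 0; i < m; i++) {
            double sPlus = 0, sMinus = 0;
            for (int j = 0; j < n; j++) {
                sPlus  += Math.pow(v[i][j] - ideal[j], 2);
                sMinus += Math.pow(v[i][j] - anti[j], 2);
            }
            sPlus  = Math.sqrt(sPlus);
            sMinus = Math.sqrt(sMinus);
            double pi = sMinus / (sPlus + sMinus); // Table 5 ranks the lowest Pi first
            System.out.printf("%-22s S+=%.3f  S-=%.3f  Pi=%.3f%n", ovens[i], sPlus, sMinus, pi);
        }
    }
}
```

Because the reference points are copied from Table 4, the printed Si+, Si- and Pi values reproduce Table 5 to within rounding; in the convention used by the paper, the lowest Pi is read as the best alternative, which corresponds to the electrical resistance oven.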

3.4 Furnace Design Parameters

The essential parameters for the design of the curing oven are presented below. The minimum design requirements allow the most adequate of the different alternatives to be analysed; among the parameters to be considered are temperature, adhesion, mechanical strength, and weldability. The regulations and codes in force at national and international level must also be taken into account, since they guarantee the safety of the users of the booth. Within the design of the curing oven, the selection of materials is an important factor: the curing booth is made up of internal walls, roof, and floor, which have thermal insulation and coating. Figure 1 shows the consideration of the aforementioned parameters, and Table 6 shows the materials used in the construction of the oven, considering the optimum properties and the processes to be carried out. The temperature is measured every 10 min, which allows a constant heat inside the oven, on the elements to be thermally treated, to be verified; each specimen test is recorded so that all the necessary information on the behaviour of the paint as a function of the temperature supplied by the oven is collected, and the readings are taken by means of thermocouples or thermal pyrometers. The average temperature value of the charge is determined, together with the specific heat of the steel used at the operating temperature, with Eq. 1:

Tm = (Tmax − Tmin) / 2    (1)

Once the average temperature has been determined, the value of the specific heat of the steel at room temperature is analysed. The technical characteristics of the material allow the heat required by the load to be obtained according to Eq. 2:

Q = ct × Cp × (Tmax − Tmin)    (2)

The internal and external heat values allow the radiation coefficient towards the outside of the curing booth to be determined, using Eq. 3:

hr = σ × ε × (Ts^4 − Tmin^4) / (Ts − Tmin)    (3)

It is also necessary to analyse the global heat transfer coefficient, given its relation to all the thermal resistances of the system, for which Eq. 4 is used:

U = 1 / (Rt × Ap)    (4)
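As a rough numerical illustration of Eqs. 1 to 4, the sketch below evaluates them for an assumed steel charge. Only the 575 K curing temperature and 296 K ambient temperature are taken from the CFD section of the paper; the charge mass, specific heat, emissivity, surface temperature, thermal resistance, and wall area are placeholder values chosen for the example, and ct in Eq. 2 is interpreted here as the mass of the charge.

```java
// Rough numerical illustration of Eqs. 1-4 with assumed (placeholder) values.
public class CuringOvenHeatSketch {

    static final double SIGMA = 5.67e-8; // Stefan-Boltzmann constant, W/(m^2 K^4)

    public static void main(String[] args) {
        double tMax = 575.0;    // K, curing temperature used in the CFD simulation
        double tMin = 296.0;    // K, ambient temperature
        double mass = 15.0;     // kg of steel charge (assumed)
        double cp   = 490.0;    // J/(kg K), typical specific heat of carbon steel
        double eps  = 0.85;     // outer surface emissivity (assumed)
        double tS   = 320.0;    // K, outer wall surface temperature (assumed)
        double rt   = 1.2;      // K/W, total thermal resistance of the wall (assumed)
        double ap   = 6.0;      // m^2, wall area (assumed)

        // Eq. 1: mean temperature of the charge, as defined in the paper.
        double tm = (tMax - tMin) / 2.0;

        // Eq. 2: heat required by the charge, taking ct as the charge mass.
        double q = mass * cp * (tMax - tMin);

        // Eq. 3: radiation coefficient from the outer surface to the surroundings.
        double hr = SIGMA * eps * (Math.pow(tS, 4) - Math.pow(tMin, 4)) / (tS - tMin);

        // Eq. 4: global heat transfer coefficient of the wall.
        double u = 1.0 / (rt * ap);

        System.out.printf("Tm = %.1f K, Q = %.0f J, hr = %.2f W/(m^2 K), U = %.3f W/(m^2 K)%n",
                tm, q, hr, u);
    }
}
```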

3.5 Dimensioning and Geometry of the Structure

The sizing of the structure of the curing oven must consider several important aspects so that the design meets all the technical requirements that ensure the safety of the operators and the surface improvement of the elements to be treated. The parameters are described below:

• For the structural calculation of the oven, all static and variable loads must be considered, including those introduced by the thermal insulation system.
• The structure of the curing oven must have a safe and comfortable design that allows the baking of the paint on the different mechanical elements without exposing the operators to hazards from moving or fixed parts of the structure.
• The materials selected for the manufacture and assembly of the structure must have excellent galvanic compatibility to avoid corrosion between them, since the structure will operate with chemicals, and must limit the dissipation of heat to the outside.

The assembly of the structure will be made with movable and welded joints to guarantee a design that allows its easy mobilization. Table 6 shows the materials used in the construction of the furnace. Nowadays it is very important to consider the variables involved in the construction of a mechanical system or structure, since this allows post-construction decisions to be made that optimize time and resources in the manufacturing stage.


Table 6. Curing booth materials selected for the manufacture of the oven

Component: Internal walls
Material: Galvanized steel sheet
Characteristics: High resistance to wear from corrosion and oxidation; maximum operating temperature of 720 °C.

Component: Ceiling and floor
Material: Galvanized steel sheet
Characteristics: High resistance to wear from corrosion and oxidation; maximum operating temperature of 720 °C.

Component: Internal structure
Material: Galvanized structural square tube
Characteristics: Steel structure with good weldability and high corrosion and oxidation resistance.

Component: Thermal insulation
Material: Glass wool
Characteristics: Works at high temperatures; avoids heat dissipation thanks to its low thermal conductivity and easy application.

Thus, with the SolidWorks software, a structural system was developed estimating working conditions as close to the real ones as possible, such as the variable loads and the self-weight of the structure; Fig. 1 shows the structure of the furnace, taking into account its dimensions, functioning, and operation. In the same way, the conditions of the walls are analysed considering the thermal insulation between the external and internal plates of each wall in order to reduce heat losses during the operation of the curing oven. In the CAD design stage, the operability and visualization of the equipment in its manufacturing stage are determined, and decisions are made to improve the equipment.

Fig. 1. Structural design of the wall, roof and floor of the furnace erected in the investigation.

It is important to consider that heat losses during operation of the furnace can also occur through the floor and ceiling of the equipment, so the design uses a double-sided system with glass wool between the top and bottom plates, similar to the walls; the CAD modelling allows the thickness of the ceiling, floor, and walls to be characterized and visualized, taking the tube dimensions of the structure as a reference.

CFD Simulation of Temperature Transfer in the Curing Oven. For the simulation, an internal temperature of 575 K was used, at which the paint achieves greater adhesion on the metal parts to be coated; the environmental conditions outside the oven (temperature, pressure, and relative humidity under normal conditions) were taken as 296 K, 71940.75 Pa, and 74%, respectively. The type and size of the mesh are very important in the development of the simulation, since the closeness of the software results to reality depends on them; a mesh size of 10 mm was considered, which, given the size and capacity of the furnace, can be regarded as adequate. The internal working conditions of the curing oven were taken into account to determine the heat transfer generated between the electrical resistances housed in the sidewalls of the oven and the material to be painted, giving a constant operating temperature of 575 K. The results obtained in the simulation show a small decreasing temperature range with no significant heat dissipation; the variation is minimal, thanks to the insulation of the walls, floor, and ceiling of the curing oven, as shown in Fig. 2.

Fig. 2. Curing oven temperature analysis collected in the investigation

The internal operating pressure has a small decrease in relation to the external pressure, estimating the oven exit conditions, since it works with the entry and exit door of the material to be painted.

4 Discussion and Results

After the technical analysis carried out throughout this research, a study was made of the different surface treatments that can be applied to metallic materials in order to avoid the wear generated by corrosion and oxidation due to the environment and chemical agents. Within the design of the curing oven, the selection of materials is an important factor, because the curing booth consists of internal walls, roof and floor, an internal structure, thermal insulation, and coating, which together allow the adherence of the powder paint according to the temperature and the cleanliness of the mechanical elements; for this, the different parameters that influence the curing of mechanical elements must be analysed and calculated.

From the components of the powder paint, it was determined that there is only a small temperature variation in the curing from one formulation to another, which allows excellent adherence in comparison with other surface coating processes based on diluents, as well as shorter handling times and a single layer on the elements submitted to the curing process with electrostatic paint, in contrast to other similar processes. The mechanical and chemical properties obtained from the electrostatic paint coating depend entirely on its components (minerals, resins, and pigments); because of this, a wide range of combinations is possible, since powder coatings can be modified individually to create coatings with unique properties. Currently, powder coatings can be grouped into four families: epoxy, polyester, hybrid (polyester/epoxy), and polyurethane. Thanks to the characteristics of the electrostatic paint, it is feasible to cure it without major variations in the electrical resistance oven, since the exposure parameters due to thermal shock vary very little; therefore, an initial calibration of temperatures is enough to achieve excellent surface finishes.

In the multi-criteria selection analysis by the TOPSIS method, the different acceptance criteria for the curing ovens studied were determined. It was observed that the resistance oven has a greater production volume capacity, contributes less to environmental pollution, and requires a lower manufacturing investment compared to the convection, burner, and infrared ovens. Likewise, it was determined that the resistance furnace has a high energy consumption compared to the convection and burner furnaces, but lower than the infrared furnace when working with the same production value, which is a variable to be taken into account at the time of manufacturing. Another main criterion analysed was the maintenance cost of the paint curing equipment: the amount to be invested in a resistance oven is lower compared to the other ovens, because it does not have a chimney to remove polluting gases; similarly, the resistors do not require continuous maintenance and their useful life is around 500 h over 21 days of work, compared with an industrial gas supply that lasts about 40 h over 5 days, while the infrared panels have a useful life of 400 h over 17 days, must always use energy for their operation, and deteriorate when any contaminant comes into contact with them.

5 Conclusions

This work presented a study of the selection of four curing ovens using TOPSIS multi-criteria analysis, in which production, energy consumption, maintenance, environmental pollution, and production cost were compared as the main characteristics. The results indicate that there are differences among the acceptance criteria handled: with the values assigned in the Euclidean distance table, the TOPSIS method gave the electric resistance oven a lower score than the other options, which led to it being determined as the ideal alternative.

The design and construction of the powder paint curing oven considered as the best alternative uses electrical resistances that ensure an operating temperature of 150 °C inside the cabin; thanks to the configuration of the walls and the adequate selection of materials, optimal insulation is obtained that avoids heat dissipation through them, which allows a consistently high production capacity with an average energy consumption, complying with the operational and environmental requirements established by national and international standards for the powder paint curing process. Further experimentation with other multi-criteria selection methods is recommended in the future to evaluate changes in the ranking of the acceptance criteria.

References

1. Choi, J.W., Chun, W.P., Oh, S.H., Lee, K.J., Kim, S.I.: Experimental studies on a combined near infrared (NIR) curing system with a convective oven. Prog. Org. Coat. 91, 39–49 (2016). https://doi.org/10.1016/j.porgcoat.2015.11.004
2. Collanqui Yana, B.S.: Proceso de secado de pinturas en horno tipo cabina para acabado de muebles metálicos. Universidad Nacional del Altiplano (2010). http://repositorio.unap.edu.pe/handle/UNAP/10463
3. Dadkhah, M., Ruffatto, D., Zhao, Z., Spenko, M.: Increasing adhesion via a new electrode design and improved manufacturing in electrostatic/microstructured adhesives. J. Electrostat. 91, 48–55 (2018). https://doi.org/10.1016/j.elstat.2017.12.005
4. Felipe, J., Ramirez, E.: Estudio de factibilidad para renovación de tecnología en hornos de curado de pintura electrostática en la industria de elevadores. Universidad EAFIT (2009). https://repository.eafit.edu.co/handle/10784/4353
5. Glick, N., Shareef, I.: Optimization of electrostatic powder coat cure oven process: a capstone senior design research project. Procedia Manuf. 34, 1018–1029 (2019). https://doi.org/10.1016/j.promfg.2019.06.093
6. Harrison, N.R., Luckey, S.G., Cappuccilli, B., Kridli, G.: Paint bake influence on AA7075 and AA7085, 28 March 2017. https://doi.org/10.4271/2017-01-1265
7. Hern, J., Su, M.: Efecto de la composición química del baño en la microestructura y resistencia a la corrosión de los recubrimientos de zinc por inmersión en caliente: una revisión (Effect of chemical bath composition on microstructure and corrosion resistance of zinc coatings), pp. 40–52 (2020)
8. Incropera, F., Dewitt, D.: Fundamentals of Heat and Mass Transfer, Seventh edn. (2017)
9. Luddey, J., Arévalo, M., Pérez-Muñoz, D., Millan, A.R.: Resistencia a la corrosión en ambiente salino de un acero al carbono recubierto con aluminio por rociado térmico y pintura poliaspártica (Corrosion resistance in saline environment of a carbon steel coated with aluminum by thermal spray and polyaspartic paint) 30(1), 21–31 (2016)
10. Minkowycz, W.J., Sparrow, E.M., Murthy, J.Y., Abraham, J.P.: Handbook of Numerical Heat Transfer, 2nd edn., pp. 1–968 (2009). https://doi.org/10.1002/9780470172599
11. Niamsuwan, S., Kittisupakorn, P., Suwatthikul, A.: Enhancement of energy efficiency in a paint curing oven via CFD approach: case study in an air-conditioning plant. Appl. Energy 156, 465–477 (2015). https://doi.org/10.1016/j.apenergy.2015.07.041


12. Noon, W.B.: Industrial painting techniques. In: Beadle, J.D. (ed.) Product Treatment & Finishing. MEE, pp. 143–150. Macmillan Education UK, London (1972). https://doi.org/10.1007/978-1-349-01203-9_17
13. Rico, Y., Carrasquero, E.: Efecto de la composición química en el comportamiento mecánico de recubrimientos galvanizados por inmersión en caliente: una revisión (The effect of chemical composition on mechanical behavior of galvanized coatings by hot dip: a review). Revista de Ciencia y Tecnología, 30–39 (2017)
14. Ortega Sánchez, G.F., Fernando, G.: Diseño, construcción e implementación de un prototipo de un horno de secado (curado) de pintura automotriz y pruebas de pintura en las probetas al final del proceso (2016). http://192.188.51.77/handle/123456789/14140
15. Pérez Domínguez, L., Macías García, J., Sánchez Mojica, K., Luviano Cruz, D.: Comparación método multi-criterio TOPSIS y MOORA para la optimización de un proceso de inyección de plástico. Mundo FESC 14(14), 98–105 (2017)
16. Piratoba, U., Vera, E., Ortiz, C.: Superficial, electrochemical and compositional characterization of zinc nickel electrocoatings, vol. 75 (2008). https://revistas.unal.edu.co/index.php/dyna/article/view/1782
17. Power, T.: Hornos de curado para pintura electrostática (n.d.). https://powdertronic.com/hornos-de-curado-para-pintura-electrostatica-2/. Accessed 12 April 2020
18. Rodríguez, G.: Recubrimientos: no escatime en la preparación de las superficies. Metal Actual, pp. 35–39 (n.d.)
19. Sakundarini, N., Taha, Z., Abdul-Rashid, S.H., Ghazila, R.A.R.: Optimal multi-material selection for lightweight design of automotive body assembly incorporating recyclability. Mater. Des. 50, 846–857 (2013). https://doi.org/10.1016/j.matdes.2013.03.085
20. Uribe, C.L.: Pintura en polvo: un recubrimiento ecológico y eficiente. Metal Actual 9, 25–31 (2008). http://www.metalactual.com/revista/9/pintura_en_polvo.pdf
21. Valença, D.P., Alves, K.G.B., De Melo, C.P., Bouchonneau, N.: Study of the efficiency of polypyrrole/ZnO nanocomposites as additives in anticorrosion coatings. Mater. Res. 18(Suppl 2), 273–278 (2015). https://doi.org/10.1590/1516-1439.371614
22. Yamabe, H.: Electrostatic painting. J. Jpn. Soc. Colour Mater. 73(10), 512–516 (2000). https://doi.org/10.4011/shikizai1937.73.512

Virtual Laboratory of Electronic Instrumentation Based on a Programming Proposal Focused on Systems

Yngrid J. Melo Q.1(B), Andrés E. Castillo R.2, Enrique I. Valencia V.1, Edgar A. Bravo D.1, and Wilson G. Simbaña L.1,2

1 Instituto Tecnológico Universitario Rumiñahui, Sangolquí, Ecuador

[email protected] 2 Universidad Politécnica Territorial de Aragua Federico Brito Figueroa, La Victoria, Venezuela

Abstract. In teaching processes in careers such as electricity and electronics, it is necessary to learn how to handle instruments for signal measurements such as the Oscilloscope and the Function Generator. The problem is that this equipment is expensive and also difficult to maintain in good conditions when they are constantly used. An alternative to this problem is the use of virtual equipment, which is developed using computers and, in a way, almost similar to the physical equipment, which can perfectly help the student to acquire theoretical and practical knowledge and to form the required competencies. The research presented in this article has as its objective the construction of a prototype of a virtual laboratory with signal measurement equipment, using a methodology for the production of threedimensional software with object-oriented and event-driven programming, based on the authors’ years of experience in the construction of this type of product. The prototype was developed with Java and its Java3D API. The result was two virtual devices, a Function Generator, and an Oscilloscope integrated into an interactive, photo-realistic and three-dimensional software, to reproduce handling practices and knowledge of these devices as in a real laboratory. It was concluded that the research brings benefits to the scientific and university community in general by making this methodology available for future projects of 3D virtual laboratories in any field of knowledge. This tool has applicability within the context of a real problem. Keywords: Virtual laboratory · Electronics · Java3D · Function generator · Oscilloscope

1 Introduction In the teaching-learning process in various scientific, technological, or technical fields, such as electricity and electronics, students need to learn to handle instruments for measuring time-varying electrical signals. As a result, they need to learn to handle instruments such as the Function Generator and/or the Oscilloscope with which they can perform different practices whose purpose is to familiarize and train themselves with the use of this equipment. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 483–496, 2023. https://doi.org/10.1007/978-3-031-25942-5_38


The needs that arise daily in different scenarios of technological education motivate and encourage the use of increasingly sophisticated and better-designed technological tools. In this sense, laboratory practices, a training strategy applied in many institutions of higher education whose careers are of the technological type, are essential to obtain the necessary knowledge to be applied in the professional field. With them, students are familiarized with the necessary instrumentation, while applying technology to confront the behavior of models with the real workings through simulation, emulation, and experimental tests; in this way, the achievement of significant learning in specific environments is sought. In addition, we can also add that the use of laboratories in the pedagogical practice of areas such as science and engineering is the only way for students to acquire skills in the handling of different equipment and materials of daily use in their professional performance [1]. As pointed out in [2], laboratory practice is the pedagogical strategy used to acquire competencies and procedural skills and is therefore used in a variety of academic programs, usually in synchronization with their theoretical subject. However, articulating all the physical structure and instrumentation needed to set up a real laboratory requires a large monetary investment. The execution of physical laboratory plant projects also requires costly infrastructure that is arduous to maintain in satisfactory condition. At the same time, this type of laboratories requires the participation of the student in person and at a specific time in order to have access to the equipment [3]. In addition to the high investment to set up a laboratory in an educational institution, another limitation is that, depending on the equipment and instrumentation needed, it can also be quite expensive, and space is needed to locate it and host its users. Besides this, a continuous increase in student enrollment leads to a situation of limited supply, in which the student only has a segment of time to use the laboratory, far below the time required to become fully familiar with the individual equipment, as well as with the configurations and experiences of use following the guidelines of formal laboratory practice. Taking into consideration the above facts, the virtual laboratory emerges as an alternative, where practices can be performed at any time and place, by any group with or without disabilities, with personal autonomy, and can be replicated in unlimited quantities. As a consequence of the pandemic derived from COVID-19, virtual laboratories became an imperative need to train students in different engineering and technology programs in university institutes, in this sense. 
The state of the art on virtual laboratories is currently very broad; works developed in this field include the following: development of virtual instrumentation applied to telecommunications laboratories using the Red Pitaya board [4]; laboratory practices in the courses of the electronic engineering and telecommunications programs of the Universidad Nacional Abierta y a Distancia [5]; implementation of laboratory practices in the virtual education of electronic engineering and telecommunications programs [6]; virtual laboratories as a decisive tool in the teaching-learning process of computer science engineering [7]; design of laboratory practices in electronics with ICT [8]; and "VirLabNet", a virtual laboratory based on web technologies for experimenting with TCP/IP network topologies, published in Ciencia Veterinaria [9].

Other important references are the imposition of virtual laboratories in 21st-century education [10]; real laboratories versus virtual laboratories in computer science careers [11]; the development of a virtual earthquake engineering lab and its impact on education [12]; and the virtual laboratory as a didactic tool in engineering education [13]. For the development of the laboratory, we first used a software development methodology based on the iterative and incremental model, with phases of analysis, design, construction, and testing. For the specific construction phase, the authors used their own methodology, based on their experience with this type of 3D software product, with object-oriented programming languages, specifically for virtual laboratories, independent of their area or field of action. In this virtual laboratory, up to three pieces of signal measurement equipment were integrated and made to interact in the same space: an oscilloscope and two signal or function generators. As a result, it is shown that with 3D technology, photo-realism, and interactivity it is possible to recreate experiences similar to the real ones regarding the handling of specialized equipment and its interconnections and interoperability. The developed software will have didactic-technical functions to support the teaching-learning process in technological careers associated with electronics and electricity.

2 Methodology The research presented here is intended to create an interactive, photo-realistic and threedimensional software, which allows one to reproduce own practices for the handling of measuring instruments and signal management virtually. The type of research, being a software product, is applied or technological research, which according to [18], is the type of research that is applied to technological processes and is aimed at solving problems in the production processes of goods and services in any human activity. The referenced author also points out that this type of research “is aimed at improving, perfecting or optimizing the functioning of systems, procedures, norms, current technological rules in the light of advances in science and technology” (p. 3). 2.1 Materials In respect of the materials used to develop the virtual laboratory, the following were available: Hardware: a laptop computer with CORE I5 processor, 2.70 GHz, and 8 Gb RAM; as for software: Windows 10 Home Operating System, Net-Beans 12.4 as IDE, Java programming language (JDK 15.0) and the Java 3D API (1.5.1). In addition, Blender was used for the modeling and rendering of the three-dimensional objects, such as the oscilloscope, the function generator, and the furniture of the virtual laboratory. IBM® Rational Rose® was used as a CASE tool for the elaboration of the UML diagrams. It is also important to mention the references used as materials for the development of the virtual laboratory such as: [14, 19, 20], which, although some of them are more than 5 years old, represent the beginnings of this type of projects.


2.2 Methods Concerning the methodology used, since it was a software product, the steps of the iterative and incremental software development life cycle were followed and four phases were determined: analysis, design, coding, testing, and debugging. Since it was a prototype, the implementation phase was not taken into account. The basic elements of this prototype of the virtual laboratory are the measurement equipment, specifically an Oscilloscope and two Function Generators. Concerning the analysis phase, the necessary research was carried out to obtain the software requirements, especially the equipment, its elements, its operation, and use were studied. It is important to mention that in the design of the equipment we tried not to make a similarity to commercial brands but an own design that contained all the necessary buttons for the basic handling of the equipment. As for the design and coding phases, in these two phases, the software product was developed through a method of the authors based entirely on object-oriented programming and event-driven programming. This method was conceived based on 3D software developments, created with free technological tools mentioned in the materials section, these elements and their methodology were directly integrated into the project. In this regard, one of the most developed visions in the field of engineering is the systemic vision, which is why the method created and used in the development of the prototype is based on a systemic conception, with which engineering problems are solved using systematic computer programming. The method used also incorporates the use of some design patterns that were created thinking in this systemic programming and directly related to the production of this type of software. It should be noted that this represents a scientific contribution of the authors for other researchers and software developers aligned with this type of product. Systemic programming involves identifying the virtual laboratory first as a complete system, which in turn is divided into several subsystems that are independent of each other and complementary. Each subsystem was developed independently and to each of them was applied the method that was designed based on scene graphs, UML diagrams, and the model-view-controller pattern. The design of the virtual laboratory was divided into four subsystems: the illumination subsystem, the visualization subsystem, the furniture subsystem, and the equipment management subsystem. Figure 1 below shows a general system diagram for the virtual laboratory in correspondence with the systemic programming.


Fig. 1. Virtual laboratory system components

Each subsystem is designed and coded and then transformed into a Java library; each piece of equipment added to the lab is a subsystem in itself and is treated in the same way, and all subsystems were developed independently and integrated as components of the virtual lab. In this sense, the philosophy of the work was to go from the conceptual level to the detail: the design is top-down while the implementation is bottom-up, due to the nature of the objects. Each subsystem is represented by objects, and each has its own event handler. In the elaboration of the software design, the following steps were followed for each subsystem:

1. Elaboration of the scenarios for each of the foreseen situations between user and system.
2. From these scenarios, elaboration of the use case diagrams.
3. Elaboration of the class diagrams.
4. Elaboration of the associated communication diagrams; from these diagrams, the methods to be incorporated by each of the classes involved are identified.
5. Elaboration of the state diagrams where they were necessary to show the changes between states.
6. Elaboration of the scene graphs, since they allow the articulation of the different elements that form the graphic scene, and their degree of independence and mobility, to be codified.
7. Coding of the elements involved in the graphic scenes from the scene graphs.

Within the programming, different design patterns were created as a result of applying this method. Firstly, the Object Mobility pattern was created, whose function is to give the elements of the virtual laboratory the ability to move; this pattern, together with another pattern created and named Event Controller, is in charge of managing all the objects within the system. The Graphic Element Builder was also created; this essential pattern is an abstract class called Graphic Element Handler with five abstract methods that handle, in a modular way, the processes of creating objects, adjusting capabilities, articulating elements, making initial adjustments, and making final adjustments (a minimal sketch of this contract is given at the end of this section). It is important to note that these patterns were used both in the coding of the virtual lab and in the construction of each piece of equipment. First the function generator and then the oscilloscope were coded and tested individually; an image of this can be seen in the results (Fig. 3). Once the equipment was tested, it was inserted into the laboratory together with the furniture.

Another important aspect is that, by applying the systemic vision to the equipment, the subsystems become modules, and the methods that represent signal handling and state changes, for example, become processes; this approach drives the creation of proprietary design patterns. On the other hand, the order in which the equipment is developed is strategic: going from basic to more sophisticated equipment reveals elements that are common to subsequent devices, so each development contributes elements to a library of common objects, called the Component Library in this research. The development of the Function Generator contributed elements such as rotating and sliding buttons and the abstract classes Graphic Objects and Telecom Equipment; the development of the oscilloscope provided, among other things, Graphic Displays and Cursors that will be used in the development of the next device, the Spectrum Analyzer, which in turn will take advantage of all these components and will provide processing modules based on the Fast Fourier Transform (FFT) that will become part of all the components representing elements to be measured.

In addition, it is important to mention the use of spreadsheets, both for handling the mathematical models related to the processes and for the graphs used to check those models. This was done for the generator, where the formulas that generate the signal types and modulated signals were tested and checked against the graphics, as well as for the design of the oscilloscope screen, including cursors and messages. Some of the disruptive uses of spreadsheets can be found in [21, 22].

After the design and coding, the necessary tests were carried out to determine the usability and interoperability of the equipment in the laboratory, for which two practice manuals were prepared: one refers to the handling and individual operation of the equipment and the other to the simultaneous operation of the equipment. User manuals were also prepared for each piece of equipment.
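The sketch below illustrates the kind of contract described for the Graphic Element Handler. The method names, the build() template method, and the call to compile() are assumptions made for the example, since the paper does not publish its source code; only the five-step structure (create the objects, adjust capabilities, articulate the elements, apply initial adjustments, apply final adjustments) and the use of the Java 3D API are taken from the text.

```java
import javax.media.j3d.BranchGroup;

// Minimal sketch of a five-step builder contract similar to the Graphic Element
// Handler described in the text. Concrete equipment (function generator,
// oscilloscope, furniture) would extend this class and fill in each step.
public abstract class GraphicElementHandler {

    /** Root of the element's scene-graph branch, populated by the concrete subclass. */
    protected BranchGroup branch = new BranchGroup();

    /** Builds the geometry and appearance of the element (e.g. models loaded from Blender). */
    protected abstract void createObjects();

    /** Sets the Java 3D capabilities needed for later interaction (picking, transforms). */
    protected abstract void adjustCapabilities();

    /** Articulates the parts of the element inside its scene graph (knobs, displays). */
    protected abstract void articulateElements();

    /** Initial adjustments common to every element (position, scale, default state). */
    protected abstract void initialAdjustments();

    /** Final adjustments before the element is attached to the virtual laboratory. */
    protected abstract void finalAdjustments();

    /** Template method: runs the five steps in order and returns the finished branch. */
    public final BranchGroup build() {
        createObjects();
        adjustCapabilities();
        articulateElements();
        initialAdjustments();
        finalAdjustments();
        branch.compile();
        return branch;
    }
}
```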

3 Results

The results of the research are presented below in two fundamental aspects: the design based on the method described in the previous section, and the construction of the virtual laboratory prototype. Concerning the design, the most important outcome of the research was the patterns created, in which the handling of the default conditions or settings was carried out in a constructor, while the initial adjustments that are common to all the constructions were carried out in the initial-adjustments method. The names of the packages, processes, and libraries make sense from the point of view of systems and subsystems. The intention is that the code can be maintained and updated in the future, since the programming is based on mathematical models that are susceptible to modification; whether due to technological improvements or new user requirements, modifiability is a vital aspect of this type of programming. Among the usability aspects of the system, the initial canvas uses the full monitor screen, and in it the menus or dialog boxes between the system and the user appear and disappear at the user's will. Another important aspect is that the software has buttons to insert the equipment into the graphic scene in previously configured environments where it is already arranged in a predetermined way. As for the modeling of the software, UML was used to develop the virtual laboratory system and, for each subsystem, the class, communication, sequence, and state diagrams, among others; for space reasons, only the general class diagram of the designed software is shown in this article (Fig. 2).

Fig. 2. Class diagram of the virtual laboratory system

As mentioned above, we first proceeded to the development and testing of the individual equipment, both the function generator and the oscilloscope, and then integrated the two devices in the same virtual space for their interconnection; details of each device are given below.

Function Generator Details (Fig. 3)

– Waveform types: sine, square, triangular, and ramp.
– Three types of modulation: AM, FM, and FSK.


Fig. 3. Coded function generator.

– Information display showing the chosen waveform, the modulation type, whether the displayed values belong to the carrier or the modulator, whether the displayed value is amplitude, frequency, or DC level, and the dimensions of the displayed figure.
– All keys are multi-function; with them, the user can change the signal type (carrier, modulator, or composite signal), the modulation type (AM, FM, FSK), the waveform (sine, square, triangular, or ramp), and the parameter type (frequency, amplitude, or DC level).
– The knob is rotated with the mouse: if the mouse cursor is over the knob and the left button is pressed, the knob rotates counterclockwise; if the right button is pressed, it rotates clockwise. The increment produced by the movement appears on the display.
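To give an idea of the kind of mathematical model the generator has to evaluate for each displayed sample, a short sketch of the textbook AM expression is shown below; this is not code taken from the prototype, and all parameter values are arbitrary examples.

```java
// Textbook AM model used only as an illustration of the kind of formula the
// generator evaluates for each display sample; not the prototype's own code.
public class AmSignalSketch {

    /** s(t) = A * (1 + m * cos(2*pi*fm*t)) * cos(2*pi*fc*t) */
    static double amSample(double t, double carrierAmp, double modIndex,
                           double carrierFreq, double modFreq) {
        return carrierAmp * (1.0 + modIndex * Math.cos(2 * Math.PI * modFreq * t))
                          * Math.cos(2 * Math.PI * carrierFreq * t);
    }

    public static void main(String[] args) {
        double fc = 10_000;  // carrier frequency, Hz (example value)
        double fm = 500;     // modulating frequency, Hz (example value)
        double a  = 1.0;     // carrier amplitude, V (example value)
        double m  = 0.5;     // modulation index (example value)

        // Evaluate a handful of samples, e.g. to plot on the virtual oscilloscope screen.
        for (int i = 0; i < 5; i++) {
            double t = i / 100_000.0; // 100 kHz sample spacing (example)
            System.out.printf("t=%.5f s  s(t)=%.4f V%n", t, amSample(t, a, m, fc, fm));
        }
    }
}
```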

Fig. 4. Coded function oscilloscope

Oscilloscope Details (Fig. 4)

– Two channels.
– Markers for measuring amplitudes and amplitude differences, and times and time differences.
– Screen with graphs of the signals and display of dynamic information on the voltage and time scales.
– Controls to display horizontal cursors, vertical cursors, or both.
– Controls to choose the cursor to be moved, whose value will be displayed on the screen.
– Threshold control (the minimum value the signal must reach before it is displayed).
– The voltage scales and the chosen measurement cursor are modified with knobs; the knobs are freely rotatable, and the values they modify are shown on the display.

Once the two devices were coded, they were inserted into the same virtual space to perform the connection tests (Fig. 5).

Fig. 5. Insertion of equipment in the same virtual space for testing.

In respect of testing the prototypes of the equipment, we had the collaboration of an engineering expert in the handling of this equipment, whose evaluation was based on the usability, similarity, and functionality of the equipment presented and its virtual environment. The signals generated by the equipment were compared in terms of frequency, amplitude, and type, and were compared with various values and wave types. The result according to the expert was satisfactory concerning physical equipment. It is necessary to emphasize that the ideal is to have real equipment, with preferences for mock-ups and with preferences for the so-called learning objects. To the extent that equipment costs, there is a preference for cheaper systems. The idea of the laboratory is to emulate the real equipment, as far as possible, that is why the objectives are not linked to the paradigm of learning objects, they are linked to the appearance, to look as real as possible, the usability, operation as a piece of real equipment, to use buttons, knobs, to have graphic displays as similar as possible to the real equipment and the mathematical model that the outputs behave similarly to the real equipment. Thus, under these objectives, the comparison is with real equipment and not having them for comparison, the technical opinion of professionals with experience and knowledge was decisive. Continuing with the tests, for the demonstration of the prototype, five basic configurations were created, including furniture and equipment. The program has menu bars for its operation. The different configurations created to manage the equipment within the virtual laboratory are shown and described below.


The initial configuration shows the laboratory interface with the bench and the three pieces of equipment, one oscilloscope and two signal generators in the cabinet (Fig. 6), this is the initial interface of the laboratory with its respective menu options.

Fig. 6. Configuration 1 of the virtual laboratory.

Configuration 2 lowers a signal generator to the counter so that it can be used individually, the equipment can be brought closer for better visualization and operation of the keypad (Fig. 7).

Fig. 7. Configuration 2 of the virtual laboratory.

Configuration 3 lowers the oscilloscope so that it can be used individually, and the equipment can also be brought closer for better viewing and operation of the keypad (Fig. 8). Configuration 4 lowers a function generator and the oscilloscope so that they can be used simultaneously, the equipment is connected, and can also be approached with the scroll keys on the keyboard for better visualization and operation of the equipment’s keypad (Fig. 9).


Fig. 8. Configuration 3 of the virtual laboratory.

Fig. 9. Configuration 4 of the virtual laboratory.

Finally, configuration 5 lowers the two function generators and the oscilloscope so that they can be used simultaneously; the devices are connected and can also be approached with the scroll keys on the keyboard for better visualization and operation of the devices' keypads (Fig. 10).

Fig. 10. Configuration 5 of the virtual laboratory.


To conclude with the presentation of the results, it is important to emphasize that the objective of this virtual laboratory is that the students can simulate, through laboratory practice, the configuration of the function generator and the oscilloscope for correct visualization of the signal, and the reason for its development is more for economic than strategic reasons. In this way, they can improve their skills in handling basic electronic equipment from a virtual space and achieve the necessary dexterity to handle physical equipment without fear of damaging them.

4 Conclusions and Discussion

In conclusion, the objectives proposed in the research were achieved. The information management and the bibliographic review of the topic were carried out, obtaining the state of the art; the method for the design and development of the equipment and of the three-dimensional virtual laboratory was elaborated with the Java 3D programming language, building on previous works that preceded the one finalized in this research; the virtual space and the equipment inserted in the laboratory were designed; and, finally, the prototype of the virtual laboratory of electrical and electronic measuring instruments was built. Tests of this prototype were carried out with a teacher specialized in electronics, using the practice manuals prepared. The observations made were taken into account and will be considered for the completion of the software, which will serve as a support tool for teaching in these areas, first at the Instituto Tecnológico Universitario Rumiñahui, and can then be replicated in other educational institutions that require it. This prototype will also serve as a basis for other virtual laboratory projects in other areas. As for evaluation by students, it is not relevant at this stage, since they would first have to know and work with real equipment, and the evaluation would be based on that comparison rather than on how comfortable they feel when using the virtual devices.

Even though what is presented in this article is the development of software, much of its importance lies in the way it was programmed in Java 3D. It was fully coded and oriented to objects and events, with a systemic vision that produced design patterns; several design patterns were created for the community of developers, which will be presented in greater detail in another scientific article, and a systemic programming method was created that also represents a contribution to engineering research, whose results will be presented in a future article.

Future Work

As future work, the completion of the details of the prototype and its implementation at the Instituto Tecnológico Universitario Rumiñahui are contemplated, to be tested by students of the Electricity and Electronics programmes. It is also important to point out that metaverse technology is being studied: virtual environments where people can interact through an avatar, which are a metaphor of real life and where they can interact with objects such as those of a virtual laboratory. This technology will be incorporated in the future because it would undoubtedly be an academically enriching and novel experience. The incorporation of immersive 3D, which would be the first step towards entering the metaverse, is more than feasible: Java 3D has the structure and supports drivers for elements such as joysticks and 3D vision headsets, and what has been done so far is usable.

Acknowledgments. The authors would like to thank the Instituto Tecnológico Universitario Rumiñahui (ISU-ISTER), the Universidad Politécnica Territorial de Aragua (UPTA), and the Secretaría de Educación Superior, Ciencia, Tecnología e Innovación (Senescyt).

References 1. Kapilan, N., Vidhya, P., Gao, X.Z.: Virtual laboratory: a boon to the mechanical engineering education during covid-19 Pandemic. High. Educ. Future 8(1), 31–46 (202) 2. Ochoa, P.L.M., Cárdenas, A.A.G., Arenas, D.Y.T.: Diseño de laboratorios virtuales para la práctica de estudiantes de ingeniería. In: Edgar Serna, M., Editor, Revolución en la formación y la capacitación para el siglo XXI. Instituto Antioqueño de Investigación, Medellin (2021) 3. Álvarez, J.C.A., Integration of the TIC from the production of virtual laboratories. In: Referencia Pedagógica. Centro de Referencia para la Educación de Avanzada CREA, La Habana (2017) 4. Giusseppe Perretti, R.F., Carlos Mejías, Carlos Aponte, Development of virtual instrumentation applied to telecommunicationslaboratories using the Red Pitaya board. In: Revista Ingeniería UC, pp. 266-275. Facultad de Ingeniería, Universidad de Carabobo, Valencia, Venezuela (2018) 5. Camelo Quintero, E.F.: Prácticas de laboratorio en los cursos de los programas de ingeniería electrónica y telecomunicaciones de la Universidad Nacional Abierta y a Distancia. In: Electrónica y telecomunicaciones. Universidad de ls Sabana: Bogotá, Colombia (2017) 6. Camelo-Quintero, E.: Implementación de prácticas de laboratorio en la educación virtual de los programas de ingeniería electrónica y telecomunicaciones. In: Revista Virtualmente. Universidad EAN, Bogotá, Colombia (2019) 7. Javier Heredia, D.C.: Los laboratorios virtuales herramienta decisiva en el proceso de enseñanza-aprendizaje de la ingeniería en ciencias informáticas. In: Revista de la Dirección de Informatización de la UCPEJV. Universidad de Ciencias Pedagógicas Enrique José Varona, La habana, Cuba (2018) 8. Cano, J., et al., Diseño de Prácticas de Laboratorio en Electrónica con TICs. Rev. Tecnología y Ciencia (33), 119–130 (2018) 9. Crespo, A.A., Hernandez, J.C., Furch, R., Nicolau, S.: VirLabNet: Un laboratorio virtual basado en tecnologías web para experimentar con topologías de redes TCP/IP. In: Ciencia Veterinaria. UNiversidad Nacional de la Pamapa, La Pampa, Argentina (2018) 10. Vergara Rodríguez, D.: Imposición de los laboratorios virtuales en la educación del siglo XXI. Revi. Eduweb 13(2), 119–128 (2019) 11. Zaldívar-Colado, A.: Laboratorios reales versus laboratorios virtuales en las carreras de ciencias de la computación. IE Revista de investigación educativa de la REDIECH 10, 9–22 (2019) 12. Guerrero-Mosquera, L.F., Gómez, D., Thomson, P.: Development of a virtual earthquake engineering lab and its impact on education. Dyna 85, 9–17 (2018) 13. Santiago, D.D.R.-G., B.; Melián-Martel, N.: El Laboratorio Virtual como herramienta didáctica en las enseñanzas de Ingeniería. In: VII Congreso de Innovación Edicativa y Docencia en Red, U.P.d. València, Editor. Valencia, España (2021)


14. Liliana Tiberio, Y.R.L.F.: Desarrollo de una herramienta didáctico-técnico 3D para el estudio de las Ecuaciones de Maxwell. In: Telecomunicaciones. Universidad Politécnica Territorial del Estado Aragua FBF, La Victoria (2017) 15. Chanfón, J.G., Crespo, M.R.G., Carmona, G.B.: Impacto de la introducción de los laboratorios virtuales en la educación superior. In: Superior, M.D.E., Editor Congreso Universidad (2016) 16. Hirshfield, L.J.K., Milo, D.: Cultivating creative thinking in engineering student teams: can a computer-mediated virtual laboratory help? J. Comput. Assist. Learn. 37(2), 587–601 (2021) 17. Zuñiga A., J.E., Albarracin L., Laboratorios virtuales en el proceso enseñanza-aprendizaje en Ecuador. In: Revista Dilemas Contemporáneos: Educación, Política y Valores. Asesorías y tutorías para la investigación científica en la Educación Puig-Salabarría S.C, Ecuador (2019) 18. Esteban, N.: Tipos de Investigación. Repositorio institucional - USDG (2018) 19. Castillo, A.E.: Objetos Java que representan equipos de telecomunicaciones y que inter-operan en un mismo espacio virtual. In: Facultad de Matemática, Física y Computación. Universidad Central “Marta Abreu” de Las Villas (2011) 20. Castillo, A.: Desarrollo del software Sateliton II: para la visualización 3D e iterativa de las posiciones y movimientos de los satélites venezolanos Simón Bolívar y Miranda. In: Telecomunicaciones. La Victoria, Universidad Politécnica Territorial del Estado Aragua FBF., La Victoria (2014) 21. Simbaña L., et al.: Disruptive use of spreadsheets in the teaching-learning process of technical scientific subjects. In: Botto-Tobar, Miguel, Zambrano Vizuete, Marcelo, Díaz Cadena, Angela (eds.) CI3 2020. AISC, vol. 1277, pp. 362–373. Springer, Cham (2021). https://doi. org/10.1007/978-3-030-60467-7_30 22. Andrés, E., et al.: Optimization in the handling of large amounts of data for reading, processing and graphing EEG data in excel. In: Vizuete, M.Z., Botto-Tobar, M., Cadena, A.D., Durakovic, B. (eds.) Innovation and Research - A Driving Force for Socio-Econo-Technological Development: Proceedings of the CI3 2021, pp. 464–478. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-031-11438-0_37

Thermal-Mechanical Properties of Recycled PVC Used in Schrader Valve Caps

Jose Vicente Manopanta-Aigaje1(B) and Diana Peralta-Zurita2

1 Instituto Universitario ISMAC, Quito, Ecuador

[email protected]
2 Universidad Internacional SEK Ecuador, EC170134 Quito, Ecuador

[email protected]

Abstract. Modern living conditions force people to use synthetic polymers, called plastics, and their resistance to decomposition by humidity leads to their accumulation in sanitary landfills. Polyvinyl chloride (PVC) has remarkable physical-mechanical characteristics and low cost and is therefore used to manufacture pipes and containers for medical supplies, toiletries, and cosmetics. The thermal characterization of post-consumer flexible PVC from mechanically recycled talcum containers is analyzed with a view to manufacturing valve caps for automotive tires. Thermal tests are performed in the INNER laboratory according to the ASTM E1131, D2240, E1269-11, D1525, D3418, and D792-08 standards: infrared spectroscopy to identify the material and its percentage in the composition, and differential scanning calorimetry to determine the thermal degradation and glass transition temperature; the mechanical properties are based on the ASTM D638-02 standard. The results will be used for the elaboration of a valve cap prototype by gravity casting of PVC in a polyester resin mold.

Keywords: Polymer · Characterization · Spectroscopy · Infrared · Differential calorimetry

1 Introduction

The automotive field is currently growing strongly. While steel, aluminum and other metals are still used in the structural parts and components of a car, user needs and the development of the automotive industry have stimulated the progressive use of other materials in car construction, among which plastics stand out [1]. Since automotive components are in high demand, this research project studies an alternative material for the manufacture of automotive valve caps. The plastic materials most commonly used in the automotive industry are thermoplastics, thermosets and elastomers [2]. Thermoplastics include PVC, which is generally processed into pipes, cosmetic containers and talcum-powder containers. Polyvinyl chloride (PVC) is one of the most widely used materials owing to its chemical and mechanical resistance [3]. PVC has become widespread in soft drink and water containers, in components for the automotive industry, in housing and clothing, and in all types of consumer goods [4]. Plastic recycling is an alternative and an opportunity to save raw material and reduce pollution. Plastic is a recyclable material that can be applied in the automotive field, especially in vehicle components [5]. PVC offers important advantages such as finish, product durability, high strength and long useful life [6]. It is marketed in rigid form, but it loses rigidity when the temperature is raised to between 104 °C and 205 °C, which is an advantage for handling and forming operations that exploit its physical, thermal and mechanical properties [7]. This research consists of the thermal characterization of post-consumer flexible PVC samples subjected to mechanical recycling. This material is found in talcum-powder and shampoo containers, which are not subjected to recycling or any recovery process and occupy considerable space even in recycling plants. PVC is an open-formula material because it accepts various additives or plasticizing liquids [8]. The recycled material shows good mechanical and thermal resistance, i.e., it can be applied in the automotive field with confidence and safety.

2 Materials and Methods

2.1 Material

The mechanical and thermal properties of post-consumer flexible PVC from containers of personal hygiene products are characterized after the material is subjected to a mechanical recycling process. PVC can be recycled in the following ways: (a) mechanical recycling, (b) chemical recycling, (c) energy recycling and (d) solvent recycling [9]. The thermal-mechanical tests are carried out by means of experimental measurements and analyses in accordance with American Society for Testing and Materials (ASTM) standards.

3 Methods

3.1 Physical Identification Methods

Density. The PVC sample is first weighed (in this case 1.386 g) and then immersed in a tank containing distilled water at an ambient temperature of 20.8 °C, according to ASTM D792-08. Table 1 shows that the result falls within the range of the PVC reference values compared in the laboratory (Fig. 1).


Fig. 1. INNER density meter

Physical identification of PVC containers: "SPI (Society of the Plastics Industry) code, which was proposed in 1981 in the United States and today is an ASTM (American Society of Testing Materials) standard, is used to identify the composition of packaging worldwide" [10]. Code 3, shown in Fig. 2, indicates that the material is PVC.

Fig. 2. Side and bottom of a PVC container, Blenastor C.A.
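As an illustration of this identification step, the code-to-polymer lookup can be scripted in a few lines; the mapping below is the standard SPI resin identification assignment, and the helper name is ours, not part of the paper's procedure.

```python
# Minimal sketch: identify a polymer family from the SPI resin identification
# code stamped on a container (code 3 corresponds to PVC, as in Fig. 2).

SPI_CODES = {
    1: "Polyethylene terephthalate (PET)",
    2: "High-density polyethylene (HDPE)",
    3: "Polyvinyl chloride (PVC)",
    4: "Low-density polyethylene (LDPE)",
    5: "Polypropylene (PP)",
    6: "Polystyrene (PS)",
    7: "Other resins",
}

def identify_by_spi(code: int) -> str:
    """Return the polymer family for an SPI resin identification code."""
    return SPI_CODES.get(code, "Unknown code")

if __name__ == "__main__":
    print(identify_by_spi(3))  # -> Polyvinyl chloride (PVC)
```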

The Beilstein test is a simple method to determine the presence of a halogen (chlorine, fluorine, bromine or iodine). A green flame, as shown in Fig. 3, demonstrates the presence of a halogen, i.e., the material is polyvinyl chloride.

Fig. 3. Colored flame produced by PVC in an alcohol flame.

Table 1. Specific gravity of polymers [11].

Polymer                              Specific gravity [g/cm3]
Polypropylene PP                     0.90 – 0.91
High Density Polyethylene HDPE       0.95 – 0.97
Low Density Polyethylene LDPE        0.92 – 0.94
Polystyrene PS                       1.05 – 1.07
Polyethylene terephthalate PET       1.38 – 1.39
Polyvinyl chloride PVC               1.16 – 1.35
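A minimal sketch of how the ASTM D792 result of Sect. 3.1 can be screened against the ranges in Table 1. The dry mass (1.386 g) comes from the text; the apparent mass in water used here is hypothetical, since the paper reports only the final classification.

```python
# Sketch: specific gravity by the displacement method (ASTM D792) and
# classification against the ranges of Table 1.

TABLE_1 = {  # polymer: (lower, upper) specific gravity
    "PP":   (0.90, 0.91),
    "HDPE": (0.95, 0.97),
    "LDPE": (0.92, 0.94),
    "PS":   (1.05, 1.07),
    "PET":  (1.38, 1.39),
    "PVC":  (1.16, 1.35),
}

def specific_gravity(mass_air_g: float, mass_immersed_g: float) -> float:
    # Archimedes: SG = m_air / (m_air - m_apparent_in_water)
    return mass_air_g / (mass_air_g - mass_immersed_g)

def classify(sg: float) -> list[str]:
    return [name for name, (lo, hi) in TABLE_1.items() if lo <= sg <= hi]

if __name__ == "__main__":
    m_air = 1.386      # g, measured sample mass (Sect. 3.1)
    m_water = 0.306    # g, hypothetical apparent mass in distilled water
    sg = specific_gravity(m_air, m_water)
    print(f"SG = {sg:.2f} -> candidates: {classify(sg)}")  # SG = 1.28 -> ['PVC']
```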

3.2 Spectroscopic Methods According to ASTM Standards

Spectroscopic methods have greatly simplified this work: a structure can be determined in a few hours [12]. These methods are based on the fact that organic molecules absorb electromagnetic radiation at different wavelengths through the vibrations of their electrons, atoms and groups of atoms; the absorbed wavelength depends on the molecular structure and therefore gives information about it. The wavelength of the absorbed light packet corresponds to the energy [13]:

E = h · ν  (1)

where E is the energy, h is Planck's constant and ν is the frequency.

3.3 Infrared Spectroscopy (FTIR)

The raw material is analyzed in transmission from 4000 to 400 cm−1 and by attenuated total reflectance (ATR) from 4000 to 600 cm−1. The spectrometer used is a Thermo Scientific Nicolet iS50 FTIR, which applies the Fourier transform for data processing and covers a wavenumber range of 80–20000 cm−1; the standard that governs the operation is ASTM E1421 [14].
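To make Eq. (1) concrete for the infrared range used here, the photon energy can be computed from the wavenumber via E = h·c·ṽ. This is a generic worked example, not part of the authors' procedure; the wavenumbers are taken from the ATR range and the band discussed later in Sect. 4.1.

```python
# Worked example of Eq. (1): photon energy for wavenumbers in the ATR range
# of Sect. 3.3, using E = h * c * wavenumber.

H = 6.626e-34   # J*s, Planck's constant
C = 2.998e10    # cm/s, speed of light (in cm/s so wavenumbers stay in cm^-1)
EV = 1.602e-19  # J per electronvolt

def photon_energy_ev(wavenumber_cm1: float) -> float:
    return H * C * wavenumber_cm1 / EV

for wn in (4000, 1423, 600):
    print(f"{wn:5d} cm^-1 -> {photon_energy_ev(wn):.3f} eV")
# 4000 cm^-1 -> ~0.496 eV, 1423 cm^-1 -> ~0.176 eV, 600 cm^-1 -> ~0.074 eV
```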


3.4 Glass Transition Temperature (Tg)

In accordance with the Standard Test Method for Transition Temperatures of Polymers by Differential Scanning Calorimetry. The Tg observed in DSC measurements depends on the thermal history of the sample, particularly on the cooling rate, which determines the initial glassy state of the polymer to be studied, as well as on the subsequent heating rate of the apparatus during thermogram acquisition [15]. The collected raw material (talcum-powder containers) was ground to a grain size of 40 μm; 20 mg was then placed inside an aluminum cell, which was sealed with another cell (aluminum crucible). The test was carried out with an HDSC PT 1600 calorimeter, with a temperature range from −150 °C to 1750 °C, a vacuum capacity of 10−5 mbar and a heating rate from 0.01 to 100 °C/min, in a nitrogen atmosphere with a flow rate of 24 mL/min.

3.5 Thermogravimetric Analysis

In accordance with the Standard Test Method for Compositional Analysis by Thermogravimetry. Depending on whether the temperature program is a ramp or a constant temperature, the resulting analysis is called dynamic or isothermal, respectively [16]. The QA 5000 thermogravimetric analyzer from TA Instruments has an accuracy of ±1 μg; the test is carried out in a nitrogen atmosphere with a flow rate of 50 mL/min from the initial temperature up to 800 °C according to ASTM D3418.

3.6 Tensile-Deformation Test

The tensile test is carried out in an MTS universal testing machine, model T 5002, according to ASTM D638, with test specimen number IV and a testing speed of 50.8 mm/min (Fig. 4).

Fig. 4. Test specimen for tensile test with Amsler vertical universal tensile testing machine. (Source: ESPE’s Laboratory of Mechanics of Materials)


ASTM D638-02a, Standard Test Method for Tensile Properties of Plastics, gives the dimensions of test specimen number IV for mechanical tests [17]. The recycled PVC test specimens standardized for the uniaxial tensile test are shown in Fig. 5.

Fig. 5. Tensile test specimen according to ASTM D638-02a, Standard Test Method for Tensile Properties of Plastics.

4 Analysis and Results

4.1 Infrared Spectroscopy Identification (FTIR) (ASTM E1421 Standard)

Figure 6 shows a signal at 1423 cm−1 corresponding to out-of-plane bending, which confirms the C–H vibrational stretching of the aliphatic CH2 indicated by the signals in the region near 3000 cm−1. The signals near 1330 and 1255 cm−1 are attributed to C–H fan-shaped (wagging) deformation modes, intensified in some cases by the proximity of the chlorine atom linked to the same carbon (Table 2).

Fig. 6. Infrared spectrogram result of recycled PVC
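A small sketch of the kind of band-assignment check behind Fig. 6: observed absorption maxima are matched, within the band ranges discussed in this section and in Table 2, against characteristic PVC assignments. The exact peak values near 3000 cm−1 and in the C–Cl region, and the range limits, are illustrative assumptions; only 1423, 1330 and 1255 cm−1 are quoted in the text.

```python
# Sketch: match observed IR band positions (cm^-1) against the characteristic
# PVC assignments discussed in Sect. 4.1 and Table 2.

REFERENCE_BANDS = {
    "C-H stretching (aliphatic CH2)": (2800, 3100),
    "CH2 out-of-plane bending":       (1400, 1440),
    "C-H wagging near C-Cl":          (1250, 1335),
    "C-Cl stretching":                (615, 820),
}

OBSERVED = [2910, 1423, 1330, 1255, 690]   # cm^-1; 2910 and 690 are illustrative

def assign(observed, reference):
    for peak in observed:
        hits = [name for name, (lo, hi) in reference.items() if lo <= peak <= hi]
        print(f"{peak:6.0f} cm^-1 -> {hits or ['unassigned']}")

assign(OBSERVED, REFERENCE_BANDS)
```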


• When the spectrum obtained is compared with the paper "Chemical recycling of poly(vinyl chloride): alkaline dechlorination in organic solvents and plasticizer leaching in caustic solution", the ν(C–H) bands lie in the range 3100–2800 cm−1 [18].

Table 2. Relative absorptivity of the ν(C–Cl) bands at 820–615 cm−1 and the ν(C–H) bands at 3100–2800 cm−1.

Relative absorptivity    Polyvinyl chloride, virgin    Polyvinyl chloride, degraded
ν(C–Cl)                  0.4750                        0.1761
ν(C–H)                   0.1127                        0.0844
ν(C–Cl)/ν(C–H)           4.2147                        2.0854
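The ratios in the last row of Table 2 follow directly from the band absorptivities; a quick check, using only the values in Table 2, reproduces them:

```python
# Quick check of the ratio row of Table 2: ratio = A(C-Cl) / A(C-H).

samples = {
    "virgin PVC":   {"C-Cl": 0.4750, "C-H": 0.1127},
    "degraded PVC": {"C-Cl": 0.1761, "C-H": 0.0844},
}

for name, bands in samples.items():
    ratio = bands["C-Cl"] / bands["C-H"]
    print(f"{name}: A(C-Cl)/A(C-H) = {ratio:.4f}")
# -> 4.2147 for virgin PVC and ~2.0865 for degraded PVC, matching the
#    4.2147 and 2.0854 reported in Table 2 to within rounding.
```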

The value found, compared with that reported in the paper, is within the range that identifies the material as PVC. The software also fits a first-degree straight line of the form Y = A·X, see Fig. 7.

Fig. 7. Result of the percentage of PVC in the sample

Figure 7 shows the equation of the line, where A is the ratio factor of the formula, A = 0.000512856. Comparing results in the figure, it is observed that the absorbance is within the permissible values considered on the line for PVC, with a concentration of 81.28%.

4.2 Thermogravimetric Analysis

Fig. 8. Thermogram result of recycled PVC

Figure 8 shows the glass transition temperature Tg as a function of the concentration of the plasticizers dioctyl phthalate (DOP) and diisobutyl phthalate (DIBP). For the system plasticized with DOP, the second point (C = 20%) already departs from the previous linear trend (red line); i.e., for the third point (C = 40%), the relationship between Tg and concentration no longer decreased in the same magnitude. Table 3 shows the temperatures at which the system reached the degradation concentration.

Table 3. Thermogravimetric temperature

Glass transition temperature [°C]
Start    80.56
Final    81.60
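Section 4.3 below notes that the average of the transition range is generally taken as the reported temperature; a one-line check against Table 3 (the comparison with the 81.13 °C quoted later in Sect. 4.7 is our own inference):

```python
# Midpoint of the transition range in Table 3, taken as the reported value.
t_start, t_final = 80.56, 81.60          # deg C, from Table 3
t_mid = (t_start + t_final) / 2
print(f"midpoint = {t_mid:.2f} deg C")   # ~81.08 deg C, close to the 81.13 deg C
                                         # glass transition value used in Sect. 4.7
```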

These temperatures allow the plasticizer to be recovered; once the temperature exceeds this value, the solvent can no longer be recovered. This is because the system reaches the critical plasticizer concentration, i.e., at this concentration the possibility of interaction ceases to be effective and, from this point on, the plasticizer can no longer solvate [19].

4.3 Melting Point by Differential Scanning Calorimetry DSC (ASTM D3418 Standard)

The degradation temperature only establishes the starting point; there the combustion process takes place with strongly exothermic reactions, which is an irreversible process. Industrially, it is necessary to know when the material begins to be unusable owing to the heat to which the polymer has been subjected, see Fig. 9.


Fig. 9. Melting point result. (Source: INER)

The glass transition temperature Tg, at which the transition from the glassy state to the soft rubbery state begins, is the inflection point of the curve; the average of the temperature range is generally taken.

4.4 Yield Stress by Tensile Test. Tensile-Deformation Mechanical Tests

ASTM D638-02a, Standard Test Method for Tensile Properties of Plastics, gives the dimensions of test specimen number IV for mechanical tests. The test is performed in an AMSLER model I0II axial tensile machine at a test speed of 50 mm/min. The ends of the test specimen are placed in the upper and lower jaws, adjusted, and the hydraulic traction system is operated. The recycled PVC test specimens standardized for the uniaxial tensile test are shown in Fig. 10 [20].

Fig. 10. Result of the tensile test according to ASTM D638-02a, Standard Test Method for Tensile Properties of Plastics.

Table 4. Mechanical test results

Parameter                      Result
So: Cross-sectional area
Lu: Last length                4.14 mm
Fe: Yield load                 229.50 N
Fm: Tensile load               261.35 N
Re: Yield strength             55.25 MPa
Rm: Tensile strength           63 MPa
A: Elongation                  10.75%
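A minimal cross-check of Table 4: the engineering stresses are the measured loads divided by the specimen cross-section. The cross-sectional area is not reported explicitly in the extracted table, so the sketch back-calculates it from the reported loads and strengths (about 4.15 mm2, an inference on our part) and verifies the two strength values against each other.

```python
# Cross-check of Table 4: Re = Fe/So and Rm = Fm/So (engineering stresses).

Fe = 229.50   # N, yield load (Table 4)
Fm = 261.35   # N, maximum tensile load (Table 4)
Re = 55.25    # MPa, reported yield strength
Rm = 63.0     # MPa, reported tensile strength

So = Fe / Re                          # back-calculated cross-section, mm^2
print(f"inferred So = {So:.2f} mm^2")   # ~4.15 mm^2
print(f"Re check = {Fe / So:.2f} MPa")  # 55.25 MPa by construction
print(f"Rm check = {Fm / So:.2f} MPa")  # ~62.9 MPa vs 63 MPa reported
```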


4.5 Thermal Simulation of the Prototype

According to the climate-variables study, the ambient temperature reached 27.04 °C in 2020, which is taken as the reference value for the simulation [21] (Table 4).

Fig. 11. Thermal simulation for the recycled PVC prototype

The temperature used will not affect the manufactured element, since its degradation temperature of 81 °C is about three times the ambient temperature.

4.6 Simulation of Axial Load Inside the Valve Cap Head

Assuming a failure in the Schrader valve closure system, the force with which the air inside the tire pushes on the valve cap can be calculated as 2.3 lb for an internal pressure of 80 psi and a valve cap diameter of 10 mm. A value of 5 lb is taken for the simulations. The simulation results are shown in Figs. 11 and 12.

Fig. 12. Mechanical simulation of tire valve caps

The simulation of the internal axial load in prototypes made of standard PVC, recycled PVC and polypropylene shows similar values; the most critical area is the one around the head of the valve cap, with a Von Mises stress of 2,140.038 N/m2. The definition of this area is more accentuated in standard PVC, followed by recycled PVC and finally polypropylene. In all cases the values are lower than the 12,340.668 N/m2 Von Mises stress of standard PVC, specified as the maximum, indicating the resistance of the material in the simulation carried out.

4.7 Simulation of the Application of an External Torque to the Prototype

In closing or opening operations, a torque must be applied to the knurled surface of the valve cap; a value of 0.15 Nm is assumed for the simulations [22].

Fig. 13. Mechanical simulation with torque on tire valve caps

The torque value, given the dimensions of the valve cap, corresponds to a manually applied finger force of about 3 kg, a high load for a cap mechanism. The recycled PVC sample contains a good amount of plasticizers used as additives, unlike the standard PVC sample, hence the difference in elongation that each presents. Recycled PVC can be subjected to stresses in the proportional zone better than standard PVC and polypropylene, which works exaggeratedly in the plastic zone. The mechanical tensile tests were performed at room temperature Tamb = 22 °C, which is much lower than the glass transition temperature Tg = 81.13 °C of recycled PVC, so according to Figs. 12 and 13 the material does not fail.

4.8 Pressure Exerted on the Tire Valve Cap

The inflation pressure of a small to medium-size vehicle tire can be taken as 30 psi [23]; according to most manuals, the force on the inside bottom of the valve cap can be calculated with the following basic formula, where the diameter is taken from Fig. 14:

F = P · A = 30 lb/in2 · (π · (0.3125 in)2 / 4) = 2.30 lb

Fig. 14. Tire cap stress

The thread is of the unified extra-fine type, for which, according to tables, the tensile stress area is 0.0625 in2. The normal stress produced in a single thread over the stress area will be:

σ = F / At = 2.30 lb / 0.0625 in2 = 36.8 psi

The diameter of the extra-fine unified thread is taken as 5/16 in = 0.3125 in. The yield strength of PVC from the tensile test is Sy = 55.25 MPa = 8013.33 psi. As can be seen, the applied stress σ is much lower than the yield strength, which means that the valve cap withstands the pressure. The same reasoning applies to the other materials.
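The hand calculations of Sections 4.7 and 4.8 can be verified in a few lines. The 5 mm effective knurl radius used to convert the 0.15 Nm torque into a finger force is our assumption; the paper only states the resulting value of about 3 kg.

```python
import math

# Axial force on the cap bottom (Sect. 4.8): F = P * A
P_psi = 30.0                       # tire pressure, psi
d_in = 0.3125                      # 5/16 in thread diameter
F_lb = P_psi * math.pi * d_in**2 / 4
print(f"F = {F_lb:.2f} lb")        # ~2.30 lb

# Normal stress on a single thread: sigma = F / At
At_in2 = 0.0625                    # tensile stress area, in^2 (from tables)
sigma_psi = F_lb / At_in2
print(f"sigma = {sigma_psi:.1f} psi vs Sy = {55.25 * 145.04:.0f} psi")  # 36.8 vs ~8013

# Finger force equivalent of the 0.15 Nm torque (Sect. 4.7),
# assuming an effective knurl radius of about 5 mm:
T_Nm, r_m, g = 0.15, 0.005, 9.81
F_finger_kgf = T_Nm / r_m / g
print(f"finger force ~= {F_finger_kgf:.1f} kgf")  # ~3 kgf
```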

5 Conclusion

When comparing the samples, it can be observed that the material is within the permissible ranges to be considered pure PVC, and the spectra show a clear similarity when compared, which means that it is PVC with a purity high enough for reuse in the automotive field. It was found that the material of the post-consumer hygiene talcum containers is PVC resin with a content of 82.18% PVC, while 18.72% corresponds to additives or plasticizers that are added to the standard resin to give it flexibility. When producing the valve cap with recycled PVC, plasticizers should be added to improve the flexibility of the material, since it already has very good mechanical resistance. The tensile tests indicate the resistance values of the material under real conditions. The mathematical models show that the recycled PVC has very good resistance properties and is therefore suitable for tire valve caps. Recycled PVC reaches a yield strength of 55.25 MPa, while standard PVC only reaches 15.65 MPa, which means that recycled PVC has a larger proportionality zone; i.e., in the manufacture of mechanical items and automotive parts there are better possibilities of applying mechanical loads to the part. In other words, recycled PVC offers better design and construction possibilities than standard PVC.

References

1. Ferrer, H.E.S., Fajardo, M.A.C.: La importancia de la planificación de la producción en una empresa de conformado con PVC. Polo del Conocimiento: Revista científico-profesional 5(10), 440–457 (2020)
2. http://adriangonzalezeaf.blogspot.com/2012/04/principales-plasticos-utilizados-en-el.htm


3. Alfonso, J.M.: Characteristics of the thermal degradation of vinyl plastisols. Alicante, Spain: Special degree project at the University of Alicante to apply for the title of Doctor of Chemical Sciences (1996) 4. Castellanos Vargas, A.: Estudio de la respuesta térmica de polímeros termoplásticos PVC espumado y PMMA en situaciones de incendio-análisis de propiedades mecánicas y caracterización por combustion (2014) 5. Mexichem. Perspectivas del Reciclaje de PVC (2010). https://docplayer.es/18826473-Perspe ctivas-del-reciclaje-de-pvc.html 6. Martínez Flórez, E.L., Montañez Munar, J.: Estudio de lechadas asfálticas para pavimentos con la incorporación de residuos de plástico tipo PET (2021) 7. Saénz, M.: Elaboración de artículos plásticos para el hogar. Boletín mensual de análisis sectorial de MIPYMES, Ministerio de Industrias y Productividad (2016). Recuperado de https:// www.flacso.edu.ec/portal/pnTemp/PageMaster/1ek76ttdig4y5etomj1ag37vquo89.pdf 8. Identificación de plásticos, Escuela Colombiana de Ingeniería Julio Garavito, Facultad de Ingeniería Industrial, Edición 2008–1 recuperado de 9. Gómez García, C.: Caracterización térmica y mecánica de polibutilenterftalato (PBT), reforzado con fibra de vidrio. Universidad Politécnica de Cartagena, Tesis doctoral (2012) 10. Primo, E.: Química Orgánica Básica y Aplicada. Editorial REVERTE, S.A, Barcelona (2007) 11. Castellano, M.: Métodos de análisis térmico (2015). Recuperado de https://docplayer.es/894 1260-Metodos-de-analisis-termico.html 12. Coreña Alonso, J., Méndez Bautista, T.: Relación estructura propiedades de los polímeros. Educ. Química. 21(4), 291–294 (2010). Recuperado de http://www.scielo.org.mx/pdf/eq/ v21n4/v21n4a6.pdf 13. Vincent, B., Mathot, F.: Métodos de análisis térmicos, Segunda Edición, Editorial John Wiley, Hoboken (2009) 14. Maciel Júnior, R.P., Gouveia Filho, M.D., Deus, Ê.P.D.: Análise do envilecimiento do polietileno de media densidad por monitor amento de propriedades mecánicas (2015) 15. La espectroscopia molecular en la caracterización del PVC, recuperado de https://steemit. com/stem.../la-espectropia-molecularen-la-caracterizacion-del-pvc 16. Aldás, M., Inca, F.: Reciclaje de PVC a partir de tarjetas de identificación plásticas para la obtención de un pegamento de tubería. In: Congreso de Ciencia y Tecnología ESPE , vol. 10, no. 1, pp. 176–181 (2015) 17. Blazevska-Gilev, J., Spaseska, D.: Reciclaje químico de poli (cloruro de vinilo): decloración alcalina en disolventes orgánicos y lixiviación de plastificantes en solución cáustica. Revista de la Universidad de Tecnología Química y Metalurgia 42(1), 29–34 (2007) 18. González Horrillo, N.: Espectroscopia InfrarroPa de Transformada de Fourier (FTIR) en el estudio de Sistemas Basados en PVC. Universidad del país vasco, San Sebastián, España (2005) 19. Blanco Álvarez, F.: Propiedades Mecánicas, Rotura (2016). Recuperado de http://www6un iovi.es/usr/fblanco/TemaII.2.7.PROPIEDADESMECANICAS.pdf.p2819. Presión de Aire en las Llantas | Goodyear 20. Inca, F., Quiroz, F., Aldaz, M.: Recuperación de Policloruro de Vinilo (PVC) obtención de materiales plásticos (2016) 21. Maradei Garcia, M.F., Valencia Otero, A.F., Espinel Correal, F.M.: Estudio sobre la influencia del diámetro de apertura en la fuerza ejercida por cada dedo. In: Revista Politécnica - marzo 2016, vol 37, no.°2, Revista de Salud Pública (2016) 22. 
González Osorio, B.B., Barragán Monrroy, R., Simba Ochoa, L., Rivero Herrada, M.: Influencia de las variables climáticas en el rendimiento de cultivos transitorios en la provincia Los Ríos. Ecuador. Centro Agrícola 47(4), 54–64 (2020)

Consequence of a Geriatric Psychomotricity Program on the Quality of Life of Older Adults

Veronica Molina, Nuria Galárraga, Gabriela Enríquez, Rocío Duque(B), and Ismenia Araujo

Instituto Superior Tecnológico José Chiriboga Grijalva Con Condición de Universitario, EC100150 Ibarra, Imbabura, Ecuador
[email protected]

Abstract. The quality of life of older adults has been analysed from various perspectives, in which health, nutrition, physical activity and family support have been prioritized because of the economic sustenance and companionship they can provide, while the affective dimension, one of the pillars that could guarantee emotional balance in older adults, is constantly set aside. This research project presents a proposal centred on geriatric psychomotricity, aimed at strengthening the affective ties of the elderly at the "Luz y Vida" Association in the city of Ibarra, in collaboration with their closest nuclear family members. It is based on a comprehensive evaluation of the socio-family, emotional, cognitive and functional dimensions, which showed that approximately 50% of the older adults have a probable or established level of depression, 60% have mild to severe cognitive impairment, barely 20% share substantial family recreation time, and 50% have some level of functional dependency. Based on these data, a proposal is presented in which the areas of weakness are addressed jointly, through psychomotor activities adapted to the needs of older adults within their own family and directly involving their closest relatives, in order to strengthen the affective ties that provide greater stability and better days. The proposal combines actions that start from family orientation and awareness of the individual status of the adults and the need for a commitment of affective support through a short training process, reinforced through home visits that assist these families with a practical guide of psychomotor activities that can easily be applied at home. Finally, after its application, a re-evaluation of these older adults is planned in order to demonstrate the proposal's effectiveness.

Keywords: Geriatric psychomotricity · Affective bonds in the elderly · Movement in the elderly · Psychomotor activities adapted to the elderly

1 Introduction

In this research, two terms referring to the conditions of elderly adults are analyzed: aging and old age. The former is recognized as autonomous and as the path to wisdom, while the latter is considered synonymous with a need for dependency; therefore, aging is not regarded as a disease. Aging can be felt because it is caused by
basic cell degeneration, however, old age causes a deterioration of the spirit that speeds up the decay itself, not allowing the immune system to recover [1]. From this point of view, adaptation to aging will depend on the positive attitude of the older adult, the opportunities they have, how they are able to make the most of the moments and, those spaces that generate thoughts, sensations, pleasant perceptions, joy, comfort, emotional strengthening, that encourages one to live this stage and enjoy it in the company of family, friends, life partners or simply alone. With the aforementioned, it is clearly defined that the attitude of older adults towards this stage of life makes a notable difference, allowing them to live both extraordinary, or unfortunate moments in their aging process [2]. Aging, being a subject of analysis from different perspectives, has made needs analysis of people going through this stage of life and who constitute a vulnerable population that requires a certain type of specialized care, possible. Aging has been referred to as a natural and inevitable process that involves different genetic, biological, physiological, socio-environmental and cultural factors, manifested in the gradual deterioration of the organism [3]. However, it is also considered that “aging is not equivalent to becoming ill, nor does old age mean illness, but rather a constant dialectic of gains and losses throughout life, in which there are morphological, physiological, biochemical and psychological changes, with multiple biopsychosocial factors” [4]. The quality of life of older adults has been under analysis from different angles such as health status, physical activity, family relationships and emotional conditions, since all of these can influence healthy aging. From the psychomotricity point of view, “Physical activity and psychomotor practices in older adults contribute to active, healthy and satisfactory aging” [5]. Corporeality and movement are essential in all phases of human life and in old age, they have a special significance in the adaptation process to the progressive changes. The participation of the elderly in well-organized programs, based on body mediation, can become a key strategy for achieving healthy, active and satisfactory aging. Personal competence will benefit from psychomotor and/or physicalsports practice, promoting personal, social and cultural experiences in order to improve their physical, relational and, of course, emotional conditions. [5]. According to the figures presented by ECLAC (Economic Commission for Latin America and the Caribbeans) in 2018 [6], worldwide, between 2015 and 2030 the population aged 60 and over will rise from 900 million to more than 1,400 million people. This represents an increase of 64% in just 15 years, this being the age group with the highest growth. In Latin America, the aging process occurs faster, going from 70 million older people to 119 million in the same period, which represents an increase of 59%. It can be stated that 7 out of every 100 Ecuadorians are older adults, which is equivalent to 1,212,461 individuals who have exceeded the 65-year-old barrier. In other words, 6.2% of the Ecuadorian population is going through a process of chronological aging, having a major incidence in the female segment [7]. 
These figures lead to a deep reflection on the side of both the government and socio-family groups, regarding the care and protection of the elderly; since growth rate is increasing and, it is essential to act from prevention and care programs in the face of the most common difficulties that occur and in which involves the family. There are currently some theories that try to explain aging from different perspectives; from whichever side it is taken, there is always a consideration of physiological


theories as normalizers of physical deterioration caused by the passage of time and, in turn, causing organic, immunological deterioration, accumulation of stressors during existence, biochemical and metabolic changes, accumulation of waste products and cellular aging, while social theories that explain the influence of cultural and social factors on aging are taken as a non-normalizing reference. The social theory of activity refers to the role changes that older adult must assume but that, since they are not clearly defined, cause confusion. According to this theory, if the new roles do not replace the previous ones, anomie tends to become internalized and the individual gradually becomes maladjusted, even with himself. Successful aging presupposes the discovery of new roles or means of conserving old ones. For this ideal to be attained, it is necessary to attribute new socially-valued roles to elderly. The social theory of continuity projects that the last stage of life prolongs the previous stages and maintains that social situations may have a certain discontinuity, but that adaptation to different situations and lifestyles are mainly determined by habits and behaviors, tastes acquired throughout life; and therefore, social adaptation to old age is determined by the past [8]. Without doubt, some older adults are reluctant to accept the changes that occur throughout their lives and that affect such important aspects as: their health, physical appearance, conditions of mobility, autonomy and independence, nutrition, social integration, job performance etc.; generating behavioural and emotional changes that many times prevent an adequate process of adaptation to new individual circumstances [9]. The main psychomotor, cognitive and socio-affective changes that can occur in the elderly are: less security in walking and displacement, loss of some degree of static balance, progressive decrease in muscle tone, coordination, flexibility and strength, difficulties for voluntary relaxation; appearance of de-structuring symptoms in the body patterns and difficulties in recognizing one’s own body. Others are, difficulties in the spatial and temporal scheme, inability to maintain a correct orientation and organization of space and time, which conditions its relationship with the environment, decrease fluid intelligence, sensory perception, attention, and memory; with long-term memory being more affected. There is also a change of roles, physical appearance, and family structure. On some occasions, they are cut off from their profession, receive less financial compensation, become increasingly dependent, the opportunities for social contact derived from work decrease and, therefore, creating a greater amount of free time [10]. Changes in these three aspects could occasion reactions like avoidance of new actions due to the effort and resources involved, tendency to routine towards situations that they have mastered, and to which their difficulties are less evident, a depressed state of mind generated by the succession of losses that they suffer, emotional liability, anxiety and a certain irritability-aggressiveness at specific times, a tendency to loneliness and introversion, greater social isolation and a sedentary lifestyle in older adults. 
Risk factors that make older adults more vulnerable, in addition to those aforementioned, are: living in economically depressed, socially and geographically remote regions, not having the necessary support and care, suffering from high blood pressure, having a low purchasing power, living in solitude and isolation, lack of family and social integration, having fewer responsibilities and with it personal devaluation. A lot of free time can cause stress and anxiety. Any normal life event can become a stressful factor due to the suddenness of its occurrence; not allowing time for advance preparation


[11]. Another high-risk factor for the elderly is lack of physical activity and sedentary lifestyle, since it increases the risk of cardiovascular disease, increased blood pressure, cholesterol, decreased visual and auditory perception, loss of skeletal muscle strength and atrophy [12]. These risk factors as a whole constitute a threat to life, each from its own aspect, since they affect, to a lesser or greater degree, the physical, emotional and mental stability of the elderly, who would be easily exposed to declines in health, functionality, cognitive impairment, depressive states and even death. Therefore, it becomes imperative to work to attain a better quality of life in the last years of every human being. This means, improving the perception that individuals have of their place in existence, in the context of the culture and value system in which they live and in relation to their goals, expectations, norms, and concerns. It is a very broad concept, influenced in complex ways by the subject’s physical health, psychological state, level of independence, social relationships, as well as the relationship with the essential elements of the environment. [13]. Quality of life can be achieved to the extent that; the older adult receives recognition from significant social relationships, that their state of health is stable, that the degree of daily routine functionality is adequate, that they perceive the possession of emotional well-being, that they perform physical activity continuously, that the family, social, psychological, physical, health, economic and functional conditions generate a positive personal vision of the realities of older adult lives; These are determinants considered as important factors in the quality of life to lead [2]. Similarly, mental well-being, which refers to the state in which the individual is aware of his own abilities, can cope with the normal stresses of life, work productively, and is able to contribute to his community. The self-perception of quality of life will be determined by conditions such as physical, social, religious well-being and the practice of their own values and beliefs [12–15]. It is precisely for this reason that the perception of the older adult must be respected and not judge their condition by what others may or may not consider important [13]. In recent years, it has been considered significant to work on active aging, considering this condition as the ability of older adults to carry out their activities autonomously, under adequate physical, cognitive, economic and social performance that allows a recognition of productivity and inclusion in the society, in such a way that the older adults achieves a feeling of youth and activity that distances them from traditional concepts of associating older adults with passivity and decrepitude, which has caused social behaviors that lead to the exclusion and even forgetting of people who can still contribute to their families and to society in general [16]. The practice of physical activity allows for the improvement of various physical, mental and emotional aspects in older adults and favors the conservation of muscle tone, flexibility, coordination and balance; so as to enable greater body stability and posture control. 
Additionally, physical activity is necessary for effectiveness in emotional awakening, so much that it improves self-esteem, relieves symptoms of depression and provides tools for integration and stress control, thus delaying physical and cognitive deterioration in aging [17]. Physical-recreational activities and the creation of affective bonds constitute an integral alternative for the occupation of free time and improvement of the quality of life of the elderly; Therefore, the objective of this research has been to develop a proposal


focused on geriatric psychomotricity, aimed at strengthening the affective ties of the elderly at the "Luz y Vida" Association in the city of Ibarra, with the participation of their closest nuclear family members.

2 Methodology

In this work, a field investigation was applied in which data were collected directly on the participants' lifestyle. The sample population comprised 20 older adults (aged between 65 and 90 years) who are members of the "Luz y Vida" Association in the city of Ibarra. A medical history-type data collection technique was used, built from the application of five evaluation instruments. First, a sociodemographic record was used to obtain basic lifestyle data; then the Gijón socio-family assessment scale, to determine the level of family support; next, the Yesavage geriatric depression scale, with which depressive states are determined. The Lawton-Brody index was applied to assess independence in instrumental activities of daily living and, finally, the Mini-Mental scale to determine cognitive status. The results of the instruments were summarized through a basic statistical treatment, giving access to the real conditions of the elderly under the different aspects and to the needs that justify the final proposal, which uses strategies focused on geriatric psychomotor skills and directly involves the relatives of the elderly in its execution.
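The "basic statistical treatment" mentioned above amounts to tallying the scale categories into frequencies and percentages of the 20 participants; a minimal sketch of that step is shown below. The category labels mirror Table 1, but the records are illustrative, not the real data set.

```python
# Sketch of the descriptive treatment: count each category and express it
# as a percentage of the participants. The records below are illustrative.
from collections import Counter

records = [
    {"civil_status": "Widowed", "gender": "Feminine", "recreation": "A little"},
    {"civil_status": "Married", "gender": "Feminine", "recreation": "Hardly ever"},
    # ... one dictionary per evaluated older adult (n = 20 in the study)
]

def summarize(records, field):
    counts = Counter(r[field] for r in records)
    n = len(records)
    return {cat: (c, round(100 * c / n, 1)) for cat, c in counts.items()}

for field in ("civil_status", "gender", "recreation"):
    print(field, summarize(records, field))
```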

3 Results and Discussion

The data obtained from the application of the instruments provided an overview of the family, cognitive, emotional and functional conditions of the older adults participating in the research, through which the specific needs of the sample are evidenced in Table 1. As seen in Table 1, 95% of the analyzed sample is female, so it can be considered that this study has a gender bias in its characteristics. Regarding marital status, only 30% of the participants are married and have the physical presence of a partner, while 70% of the participants are widowed, separated, single or divorced and therefore without a life partner. The age of most of the sample ranges between 65 and 75 years (60%), while 30% are between 75 and 80 years old and 10% between 85 and 90 years old. In the analysis of educational level, it was observed that 80% of the sample attended only primary school; the occupation of 95% of the sample is "homekeeper". In terms of living conditions, 20% live alone and the remaining 80% have the physical presence of a partner, a child or another relative; but only 20% of the sampled population report having a satisfactory frequency of family recreation, while the remaining 80% consider that recreation time in the company of their family is little or almost nil. Regarding the frequency of family recreation, Laurencio, Jiménez and Sánchez in 2017 determined that "there is a clear correlation between positive and pleasant affective experiences and duration of life; therefore, it could be argued that if these experiences are not modified, the prognosis will be unfavorable for their state of health and their consequent quality of life" [17]. In 2017, Miguel Ángel Posso and a team of researchers from the Universidad Técnica del Norte in the city of Ibarra, Ecuador, carried out an investigation to demonstrate some


Table 1. Characteristics of the sample

Category                              Number of adults    %
Civil status
  Single                              2                   10
  Married                             6                   30
  Divorced                            3                   15
  Separated                           1                   5
  Widowed                             8                   40
Gender
  Feminine                            19                  95
  Masculine                           1                   5
Age range
  65 to 70 years                      9                   45
  70 to 75 years                      3                   15
  75 to 80 years                      6                   30
  85 to 90 years                      2                   10
Living condition
  Lives alone                         4                   20
  With spouse                         6                   30
  With a child                        8                   40
  With family member                  2                   10
Frequency of familial recreation
  Hardly ever                         5                   25
  A little                            11                  55
  A lot                               4                   20

important indicators of the existence of family exclusion: abuse, abandonment and poor family communication [19]. According to the data presented in that research, 20.7% of older adults live alone and, despite the fact that 78.7% maintain a family relationship either with their spouse or another relative, this condition does not guarantee adequate care or a better quality of life than that of others who live alone. The same research details that only four out of ten older adults carry out activities with their family; this is truly alarming, since these same elders expect family retribution for years of care and attention in the form of affection, moral support and financial help [19]. Another investigation, carried out in 2016 with a group of older adults from the city of Ambato, Ecuador, shows that 53% of the family members surveyed state that family recreational activities are almost never shared with older adults, and that this causes a deterioration in the social relationships and socio-psycho-physiological conditions of the elderly [20].


These data should lead to a reflection on the characteristic living conditions of older adults, taking into account that, in this study, most of the sample is made up of homekeepers who have invested unquantifiable effort in caring for their families, performing supporting activities in the reproductive, economic and political spheres. As they are part of a vulnerable population, the family should consider involving all of its members so that their elders can actively participate in social engagements and family interactions within their own environment, with the intention of creating a welcoming community in which they feel loved and valued, guaranteeing an emotional condition that allows them to face the discomforts or inconveniences generated in this period of their life. The result obtained regarding the frequency of recreational activities in the family environment points to a high risk of reduced quality of life for older adults, given the absence of positive family affective relationships. Figure 1 shows how the evaluated adults perceive their level of family support: 50% perceive good support, considering mainly the absence of significant family conflicts and the provision of financial support from their family; 25% perceive a lack of family or social support due to the loss of closeness with their relatives or the unavailability of economic support; and 25% perceive a severe socio-family deterioration, which, in terms of quality of life, translates into a deterioration of the same.

Fig. 1. Type of family support among elderly adults

It has been suggested that older people are a particularly vulnerable population group, given the biological and social conditions that they must sometimes experience. The risk factors, in this sense, are determined by the lack of personal and economic resources, loss of relationships with the familial environment, inability to integrate into the community and get access to state protection policies [20]. Posso, in 2007, [18] had reported the existence of several important indicators that demonstrate family exclusion,


such as abuse, abandonment and poor communication within the family. Although it is true that this investigation was carried out in the rural sector of the city of Ibarra, it is also shown to cause similar occurrences in the urban areas of the country. That is to say, the negative perception that is generated in the elderly due to the absence or little communication with family affects both the urban and rural areas. It can be inferred that in the local context of the Ibarra township, older adults are exposed to risk conditions derived from the lack of support from the family environment; This is a trigger that affects emotional conditions causing a feeling of abandonment and lack of protection. In Fig. 2, from the results obtained, it is observed that 50% of the elderly in the program showed no sign of depression. 35% of the population, equivalent to 7 older adults, have symptoms of probable depression and finally established depression is observable in 3 sample members, equivalent to 15%. Same may be correlated with the results obtained in the family assessment where there was a 50% deterioration in family relationships. These elements appear as risk factors for the deterioration of the quality of life in older members of the community.

Fig. 2. Depression in elderly adults: no depression, n = 10 (50%); probable depression, n = 7 (35%); established depression, n = 3 (15%).

In the study "Depression, cognition and quality of life in active older adults", carried out in 2015 in Valdivia, Chile, with a population of 30 elderly participants who practiced some type of physical activity, depression was evaluated among other variables; it was observed that 16.6% presented established depression and 26.7% mild depression, while 56.7% did not have depressive symptoms [11]. These results are similar to those obtained in this investigation. In the municipality of Wichanzao, Peru, in 2020, Alvarado carried out an investigation with 65 older adults who underwent, among other instruments, the Yesavage geriatric depression scale; the results showed 81.5% of older adults with severe depression and 18.5% with mild depression, reflecting that the higher the self-esteem of older adults, the lower the degree of depression [21]. The analyzed data show that, when comparing the depression scale with the physical activity conditions and the self-esteem of the older adult, it could be concluded


that the more active an older adult is, with its accompanied high self-esteem, the lower the occurrence of any degree of depression. It can be inferred that both exercise and the construction of a positive self-image can improve the psychological health conditions of the elderly in terms of the presence of symptoms of depression. Figure 3 shows how 50% of the older adults participating in this study report being independent in daily living, while the other 50% have some degree of mild, moderate or even total dependency, therefore, requiring attention and support from the people in their immediate environment. This signifies that most of the sample population are dependent older adults.


Fig. 3. Level of independence in elderly adults.

In the physical activity and psychomotricity studies on the elderly, their contributions to active, healthy and satisfactory aging, carried out by Menéndez & Brochier in 2011, [22] it is mentioned that “the participation of the elderly in well-organized programs, based on body mediation can become a key strategy for achieving healthy, active and satisfactory aging”, favoring the independence of the elderly. (p.190). At the functional level and also, data emerging from the Lawton and Brody Scale application, results are related to the degree of mobility possessed. Therefore, the population of this study is comprehensively suitable for the use of psychomotor strategies applicable to the particular aging conditions, in which all the areas that can be addressed from this practice are combined. The need to also intervene in the families of the elderly becomes evident, integrating these younger members into activities that allow the elderly ones to reaffirm the affective ties with their children, grandchildren, siblings, spouses, relatives or closest people who contribute to the strengthening of their self-esteem, security and respect. In the Fig. 4, it can be evidenced as 80% of the older adults evaluated show symptoms of cognitive impairment at different levels; while only 20% display normal conditions in this aspect.


Fig. 4. Cognitive states of elderly adults: normal, n = 4 (20%); slight deficit, n = 4 (20%); mild cognitive impairment, n = 9 (45%); severe cognitive impairment, n = 3 (15%).

When analysing the mental conditions of the elderly participants in the study by Menéndez and Brochier (2011) [22], it was observed that 86.7%, despite continuing to perform a certain level of physical activity and maintaining some degree of social interaction, showed a mild cognitive deficit, while only 13.3% maintained a normal cognitive state. It can be concluded that older adult populations suffer from this type of deterioration due to the very nature of the changes corresponding to this stage of life; therefore, the new circumstances of life must be adapted to the needs of these conditions, making the environment much more stimulating for the maintenance of cognitive functions. In order to strengthen family affective bonds and apply the knowledge of geriatric psychomotricity acquired so far, psychomotor techniques aimed at the field of aging are proposed here, demonstrating effectiveness in the areas of movement, cognition, emotion and socialization by integrating the relatives of older adults in the development of psychomotor activities, promoting the possibilities of active and positive aging through psychomotor practice.

3.1 Proposal Description

The analysis carried out guides the research towards creating a proposal that can meet the expectations of improving the quality of life of the elderly. This proposal contains activities exclusively designed for the aged, with a focus on psychomotor development, which include the participation of family members interacting directly in familiar environments. The activation of positive emotions will be driven by movement actions that at the same time reinforce cognitive functions such as memory, attention, thinking and orientation, among others, with the wider vision of comprehensively benefiting older adults through the practice of geriatric psychomotricity, with repercussions on the strengthening of family affective ties.


The proposal was made up of four specific components that were applied sequentially: a period of familiarization and orientation, participation in workshops, follow-up home visits, and a progress evaluation once its application is completed.

Project Familiarization with the Families of the Elderly. A presentation of the results of the evaluation carried out on the elderly subjects was scheduled, in order to share the general and individual situation of each one regarding the emotional, cognitive and functional state and their perception of the family relationship. This activity was carried out through a workshop in which both older adults and their close relatives participated, with the aim of raising awareness about the need for accompaniment at this stage of life, in which great changes must be assumed, made more bearable with emotional presence, closeness and better communication with relatives. In addition, the elderly were oriented about the possibility of going through this stage of life with optimism, with a willingness to change and with a resilient attitude in the face of this natural process, to be passed through with joy, knowing that they are part of a community that still values them and of a family that welcomes them with love, and above all having a feeling of being masters of their own existence and worthy of living better days.

Joint Workshops between the Elderly and their Families Based on Psychomotor Activities. Three model workshops are carried out based on psychomotricity sessions, in which interaction between older adults and their families is achieved. The aim is to strengthen communication, strengthen affective ties, and recognize the need for and contribution of each member of the family to the emotional well-being of older adults. The tone of these workshops is joint family work and the relationships of friends and relatives with the elderly.

Home Visitations to each Family. Three visits are made to the home of each aged participant in order to guide families in the execution of activities that generate movement, emotion and cognitive activation appropriate to their own environment, taking advantage of their available resources. The individual situation and the family-orientation needs are monitored. The first visit is used to deliver a guide of family integration activities that can be applied at home and in which any family member can participate. The following visits are for follow-up and for observing fulfilled commitments.

Contrast Evaluation to Observe the Effects of the Proposal. Approximately three months are set aside to carry out the proposal, after which a new evaluation of the elderly subjects is carried out with the same instruments as the initial evaluation, to determine the effects caused after the application and to demonstrate the response of both the family and the adults. Outcomes are expected mainly in the changes in affective conditions.


4 Conclusions

The results obtained in this research allow us to conclude the following. The evaluation of socio-family status shows that the studied population of older adults is made up mostly of housewives with primary schooling, most of whom live in the company of a family member; however, family recreation with others is scarce and family support is impaired, which constitutes a predominant factor in the incidence of depression. In the local context of the Ibarra township, older adults are exposed to risk conditions derived from the lack of family support; this is a trigger that affects emotional conditions, causing a feeling of abandonment and lack of protection. Both exercise and the construction of a positive self-concept can improve the psychological health conditions of the elderly in terms of depression. Older adult populations suffer from cognitive and functional deterioration due to the very nature of the changes corresponding to this stage of life; therefore, the new circumstances of life must be adapted to the needs derived from these conditions, making the environment much more stimulating for the maintenance of cognitive and functional abilities. The research carried out shows a great need to improve the family life conditions of the elderly in the city of Ibarra. This has led to the offer of a proposal to strengthen family affective ties through the application of geriatric psychomotricity, using psychomotor techniques directed at aging and demonstrating effectiveness in the areas of movement, cognition, emotion and socialization, by integrating the relatives of older adults in the development of activities that promote the possibilities of active and positive aging through psychomotor practice. The activities are exclusively designed for older adults with a psychomotor objective, include the participation of family members and involve interaction with their close family environment, with the activation of positive emotions driven by movement actions that at the same time reinforce cognitive functions. The proposal was made up of four specific components: introduction and orientation, participation in workshops, follow-up home visits, and progress evaluation. The most important contribution of this research is the achievement, through the application of this proposal, of an improvement in the quality of life of the elderly through psychomotor and cognitive interventions, increasing satisfactory family recreation, ensuring the presence of a partner, child or other family members, and providing family retribution for years of care and attention in the form of affection, moral support and financial aid. Hence the importance of transferring this proposal to other groups and residences for older adults as a contribution to improving quality of life, with the intention that they feel accompanied, welcomed, loved and valued, guaranteeing an emotional condition that allows them to face the discomforts or inconveniences that could arise in this period of their lives.

References

1. González, M.: Programa de intervención para el fomento del bienestar emocional en personas mayores. Revista INFAD de Psicología. Int. J. Develop. Educ. Psychol. 1(2), 37–46 (2016)


2. Olivares, D., Martínez, L., Oquendo, L., Crespo, F.: Calidad de vida en el adulto mayor. Varona 61, 1–7 (2015) 3. Paredes Arroba, T.: Fortalecimiento de vínculos afectivos y participación integral de la familia del grupo de adultos mayores nuestra Señora de la Elevación de la Parroquia Santa Rosa del Cantón Ambato (Bachelor’s thesis, Universidad Técnica de Ambato, Facultad de Jurisprudencia y Ciencias Sociales, Carrera de Trabajo Social) (2017) 4. Chong Daniel, A.: Aspectos biopsicosociales que inciden en la salud del adulto mayor. Revista Cubana de medicina general integral 28(2), 79–86 (2012) 5. Menéndez, C., Kist, R.L.: actividad física y la psicomotricidad en las personas mayores: sus contribuciones para el envejecimiento activo, saludable y satisfactorio. Textos Contextos (Porto Alegre) 10(1), 179–192 (2011) 6. CEPAL. Envejecimiento, personas mayores y Agenda 2030 para el Desarrollo Sostenible. Santiago: Naciones Unidas (2018) 7. Valdivia, P.: Envejecimiento y atención a la dependencia en ECUADOR (2020) 8. García López, M.V., Rodríguez Ponce, C., Toronjo Gómez, A.M.: Enfermería geriátrica. DAE, Barcelona (2012) 9. Osorio, M.: La salud de los adultos mayores. Washington, DC Recuperado de. https://iris. paho.org/bitstream/handle/10665.2/51598/9789275332504_spa.pdf (2011) 10. Fuenmayor, G., Villasmil, Y.: La percepción, la atención y la memoria como procesos cognitivos utilizados para la comprensión textual. Revista de artes y humanidades UNICA 9(22), 187–202 (2011) 11. Guerrero, N., Yépez-Ch, M.C.: Factores asociados a la vulnerabilidad del adulto mayor con alteraciones de salud. Universidad y Salud 17(1), 121–131 (2015) 12. Chasipanta, W., Analuiza, E., Gaibor, J., Torres, Á.F.R.: Los beneficios de la actividad física en el adulto mayor: Revisión sistemática. Polo del Conocimiento: Revista científico-profesional 5(12), 680–706 (2020) 13. Rubioi, D., Rivera, L., Borges, L., González, F.: Calidad de Vida en el Adulto Mayor. Varona (61), 1–7. https://www.redalyc.org/pdf/3606/360643422019.pdf (2015) 14. Maza, D.: La Familia y su influencia en la calidad de vida de los Adultos Mauores. Tesis previa a la obtención del título de Trabajo Social. Loja. Universidad Nacional de Loja (2015) 15. Callejas, M., Marín, M., Ruíz, G.: Calidad de vida en la vejez.: Propuesta metodológica y teórica para su caracterización. Fondo Editorial FCSH (2019) 16. Klein, A.: Del anciano al adulto mayor: Procesos psicosociales, de salud mental, familiares y generacionales. Plaza y Valdes (2015) 17. Rodríguez, Á., García, J. y Luje, D.: Los beneficios de la actividad física en la calidad de vida de los adultos mayores. EmásF: Revista Digital de Educación Física (63), 22–35 (2020) 18. Laurencio, S., Jiménez, E., Sánchez, Y.: Vivencias afectivas y factores condicionantes en adultos mayores sin relación de paeja. MEDISAN 21(1), 102–107 (2017). http://scielo.sld. cu/scielo.php?script=sci_arttext&pid=S1029-30192017000100012 19. Posso, M., Pinto, H., Andrade, W., Quelal, P., Aroca, A.: Diagnóstico de la exclusión familiar del adulto mayor en el sector rural del cantón Ibarra. CienciAmérica Revista de divulgación científica de la Universidad Tecnológica Indoamérica 6(2), 142–148 (2017) 20. Paredes, T.: Fortalecimiento de vínculos afectivos y participación integral de la familia del grupo de adultos mayores “Nuestra Señora de la Elevación” de la parroquia Santa Rosa del cantónAmbato. Trabajo de graduación, previa a la obtención del Título de Licenciada en Trabajo. 
Laboratory-Scale Determination of the Influence of Temperature, Time, and Mordant
Keywords: Abaca · Geotextiles · Natural fiber · Natural colorant · Sustainable textiles

1 Introduction
For centuries, fishermen have exploited the associative behavior of tuna, first targeting natural floating objects formed from tree debris and logs and later deploying man-made floating devices to attract tuna and facilitate schooling [1, 2]. Artificial floating devices, which degrade in the environment, have been used since 1990 [3, 4]; fishermen deploy some 100,000 fish aggregating devices (FADs) in tropical oceans each year [18]. Around 40% of the global catch of tropical tuna is taken with floating devices [5]. Tuna and other pelagic species move freely until they perceive floating objects, orienting themselves toward fish-aggregating devices at distances of 4 to 17 km [17]. The world market needs tuna products with good traceability in order to prevent pathogenic diseases in consumers and to conserve tuna and seafood stocks [6]. To increase the catch rate of tuna and tuna-like species, the artisanal fishery has resorted to a variety of gears, such as surface gill nets, purse seines, hand lines and mid-water hand lines, which have an average service life of two years [7]. Fish aggregating devices made of non-biodegradable materials are lost and abandoned at sea, transferring toxins and microplastics to marine organisms and causing adverse effects [8]. Little is known about the transformation of plastics in seawater, including degradation time scales: weakened by ultraviolet radiation, chemical degradation and wave mechanics, plastics break into ever smaller pieces. There is great concern about FADs in tuna ecology because FADs made of synthetic material could act as ecological traps [9]. Tiny particles of plastic waste, or microplastics, can affect biodiversity, aquatic ecosystems, food safety and human health [19]; 61% of fishing gear, such as ropes, twines, buoys and synthetic nets, is found in open waters [20].
Natural fibers offer several advantages over synthetic fibers: they are biodegradable and can be cultivated in plantations. One natural fiber of great interest is raw abaca fiber. Among natural fibers, abaca is considered the most resistant [15]; because of its high cellulose content, it exhibits exceptional mechanical strength and resistance to decomposition in salt water [10]. To add further strength and durability to these natural fibers, the addition of dyes or pigments has been chosen. According to their structural characteristics, natural pigments can be classified into tetrapyrroles, carotenoids, flavonoids, curcuminoids, betalains and quinones; they are found abundantly in the plant and animal kingdoms and in microorganisms. Their use has expanded not only in the pharmaceutical and food industries but also in the textile industry [11]. The functional groups present in the chemical structure of the dyes provide UV-protective properties [12]. Flavonoids constitute a very diverse group of metabolites derived from phenylpropanoids, characterized by a C6-C3-C6 chemical structure in which two aryl rings are joined by a heterocycle, providing colors from yellow to blue. Given this diversity of colors and their high solubility in water, they are widely used as dyes for dietary supplements and textiles [13]. Flavonoids are present in numerous plants and plant-based foods; they contain natural antioxidants that benefit human health, delay aging and help prevent fiber wear in textile applications [11].
Several plants have been studied for the extraction of dyes; one that stands out is Ambrosia peruviana. This plant has been studied as an antioxidant and repellent agent, among other properties, due to the presence of flavonoids and glycosides [14]. Ambrosia peruviana, also known as Marco, is a traditional medicinal plant that is also used for dyeing textiles; its flowers and leaves are used in dyeing. The objective of this research is to determine, at laboratory scale, the influence of temperature, time and mordant on the tensile strength and elongation of abaca yarn dyed with Ambrosia peruviana extract and subjected to seawater, through the analysis of the chemical processes for the maximum utilization of the raw material.
1.1 Materials and Methods
Location. The research was carried out in the laboratory of the Textile program of the Universidad Técnica del Norte, located in the province of Imbabura, canton Ibarra, at an altitude of 3050 m above sea level.


In this study, 2.5 ktex abaca yarn obtained from the province of Manabí was used; the yarn has an initial resistance of 543.36 N. Before dyeing, the abaca yarn was washed with Eriopond detergent at a concentration of 0.5 g/L at 80 °C for 30 min. Dyeing was then carried out in the HG-TC200B DYEING CONTROLLER autoclave by the exhaustion method. Potassium aluminum sulfate (KAl(SO4)2·12H2O) [16] of technical grade with 99% purity and copper sulfate (CuSO4) at 98% purity were used as mordants. The natural dye, an aqueous extract of Marco (Ambrosia peruviana), was used at a concentration of 50% with respect to the weight of the material. The abaca yarn was dyed at 80 and 100 °C, for 60 and 120 min, with a bath ratio of 1:20 and direct mordanting with 3% aluminum potassium sulfate or 3% copper sulfate.
Tensile and Elongation Test. The tensile strength and elongation of the abaca yarns were determined with a JAMES HEAL dynamometer, model TITAN 5. Samples were prepared according to the EN ISO 2062:2009 specifications after conditioning. The test is represented by the tensile strength (N) versus elongation (%) diagram, from which the maximum tensile force and the maximum deformation are calculated. The abaca yarn samples were immersed in seawater for 60 days to determine the loss of strength.
Experimental Design. A multilevel factorial design composed of 24 runs was created, with 15 degrees of freedom assigned to the error. The response variables are tensile strength (N) and elongation (%). The operating parameter is the salt concentration in the water, between 33 and 36 g per liter. The order of the experiments was completely randomized. The analysis of variance was performed with the statistical package STATGRAPHICS Centurion XV, version 15.2.05. Table 1 presents the factors studied, temperature, time and mordant, with their respective levels and units. A 200 ml stainless steel dyeing beaker was used as the experimental unit.

Table 1. Study factors.
Factors      Low    High    Levels  Units
Temperature  80,0   100,0   2       °C
Time         60,0   120,0   2       min
Mordant      –1,0   1,0     2
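For readers who prefer to reproduce this analysis outside STATGRAPHICS, the sketch below shows how an equivalent three-factor ANOVA with blocks could be set up in Python with pandas and statsmodels; the file name abaca_runs.csv and the column names (block, temperature, time, mordant, strength, elongation) are illustrative assumptions, not part of the original study.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical input file: one row per run, with the columns of Table 2.
runs = pd.read_csv("abaca_runs.csv")  # block, temperature, time, mordant, strength, elongation

# Three-factor model with all interactions plus a block effect, mirroring the
# sources listed later in Table 6 (A, B, C, AB, AC, BC, ABC and blocks).
model = ols("strength ~ C(block) + temperature * time * mordant", data=runs).fit()

anova_table = sm.stats.anova_lm(model, typ=2)  # sums of squares, F-ratios, P-values
print(anova_table)
print(f"R-squared: {model.rsquared:.4f}")  # to be compared with the reported 58.56%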

2 Results
The results in Table 2 show a significant effect of the studied treatments on tensile strength and elongation percentage. The tensile strength (N) and elongation (%) of the abaca yarn were determined with the James Heal dynamometer, according to the specifications of EN ISO 2062:2009, and the following results were obtained.


Table 2. Experimental design and treatments.
Block  Temperature (°C)  Time (min)  Mordant  Tensile strength (N)  Elongation (%)
1      80                60          –1       212,27                5,16
1      80                120          1       398,85                6,26
1      80                120         –1       392,89                4,83
1      100               120         –1       270,38                5,37
1      80                60           1       271,81                5,44
1      100               60           1       341,52                4,46
1      100               120          1       234,34                5,76
1      100               60          –1       324,10                5,30
2      80                120         –1       295,70                4,92
2      100               60          –1       263,78                4,62
2      80                120          1       351,41                4,73
2      100               120          1       238,92                4,61
2      100               120         –1       334,10                5,18
2      80                60          –1       344,75                4,87
2      100               60           1       400,32                5,48
2      80                60           1       315,79                5,01
3      100               60           1       388,53                4,97
3      100               60          –1       330,00                5,60
3      80                120          1       368,36                5,23
3      100               120         –1       302,23                4,43
3      80                120         –1       376,54                6,40
3      80                60           1       287,14                4,07
3      100               120          1       296,83                5,97
3      80                60          –1       415,62                5,96
Aluminum potassium sulfate: –1; Copper sulfate: 1

The resistance of the abaca yarn dyed with the copper sulfate mordant and the Ambrosia peruviana extract at 100 °C for 60 min, after 60 days submerged in seawater, shows the optimum resistance of 376.79 N. Considering an initial raw resistance of 543.36 N (Table 3) and a resistance of the dyed yarn before seawater exposure of 517.27 N (Table 4), this represents a strength loss of 30.65% with respect to the raw yarn and of 27.16% with respect to the dyed yarn before exposure.


Table 3. Tensile strength and elongation of raw abaca yarn.
Raw abaca yarn  Tensile strength (N)  Elongation (%)
1               498,00                8,97
2               570,66                12,18
3               561,42                11,69
Average         543,36                10,95

Table 4. Tensile strength and elongation of abaca yarn dyed at 100 °C for 60 min with copper sulfate.
Dyed abaca yarn  Tensile strength (N)  Elongation (%)
1                503,76                7,57
2                477,46                9,08
3                570,59                7,53
Average          517,27                8,06

The elongation of the abaca yarn dyed with the copper sulfate mordant and the Marco (Ambrosia peruviana) extract at 100 °C for 120 min, after 60 days submerged in seawater, shows the optimum elongation percentage of 5.44667%, considering a raw elongation of 10.95% (Table 3) and an elongation of the yarn dyed before seawater exposure of 9.906% (Table 5). This represents an elongation loss of 50.26% with respect to the raw yarn and of 45.02% with respect to the dyed yarn before exposure to seawater.

Table 5. Tensile strength and elongation of abaca yarn dyed at 100 °C for 120 min with copper sulfate.
Dyed abaca yarn  Tensile strength (N)  Elongation (%)
1                528,51                7,77
2                412,17                10,41
3                493,27                11,54
Average          517,27                9,906

Dyeing with Ambrosia peruviana and the copper sulfate mordant at 100 °C for 60 min therefore gives the optimum tensile strength of 376.79 N after exposure to seawater.


2.1 Analysis of Variance (ANOVA) for Resistance
Table 6 presents the analysis of variance for the resistance of the abaca yarn. The variability of the resistance is partitioned among the effects, and the statistical significance of each effect is tested by comparing its mean square against an estimate of the experimental error. In this analysis, two effects have P values below 0.05, indicating that they are significantly different from zero at the 95.0% confidence level. The R-squared statistic shows that the fitted model explains 58.5645% of the variability in resistance; the R-squared adjusted for degrees of freedom, which is more suitable for comparing models with different numbers of independent variables, is 31.9274%. The standard error of the estimate shows that the standard deviation of the residuals is 47.2698, and the mean absolute error (MAE) of 28.5634 is the average value of the residuals. The Durbin-Watson (DW) statistic is 1.4692; since its P value is less than 5.0%, there is an indication of possible serial correlation among the residuals at the 5.0% significance level (Tables 7 and 8).

Table 6. Analysis of variance (ANOVA) for resistance.
Source         Sum of squares  Df  Mean square  F-ratio  P-value
A:Temperature  3903,54         1   3903,54      1,75     0,2074
B:Time         51,2753         1   51,2753      0,02     0,8818
C:Mordant      41,2388         1   41,2388      0,02     0,8939
AB             20875,4         1   20875,4      9,34     0,0085
AC             602,803         1   602,803      0,27     0,6116
BC             1628,88         1   1628,88      0,73     0,4076
ABC            10437,5         1   10437,5      4,67     0,0485
Blocks         6673,07         2   3336,54      1,49     0,2583
Total error    31282,0         14  2234,43
Total (corr.)  75495,7         23

R² = 58.5645%; R² (adjusted for d.f.) = 31.9274%; standard error of the estimate = 47.2698; mean absolute error (MAE) = 28.5634; Durbin-Watson statistic (DW) = 1.4692 (P = 0.0273); residual autocorrelation at lag 1 = 0.0466124.

Mathematical Model for Tensile Strength. The equation of the fitted model is:

Tensile strength = −353.958 + 7.57242·Temperature + 8.79903·Time − 582.142·Mordant − 0.0983083·Temperature·Time + 6.75742·Temperature·Mordant + 5.98164·Time·Mordant − 0.0695139·Temperature·Time·Mordant
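As a quick check, the fitted equation can be evaluated at the optimum settings reported in Table 8 (100 °C, 60 min, copper sulfate coded as +1); the minimal Python sketch below, written for illustration only, reproduces the reported optimum of about 376.79 N.

def tensile_strength(temperature, time, mordant):
    """Predicted tensile strength (N) from the fitted factorial model (coefficients of Table 7)."""
    return (
        -353.958
        + 7.57242 * temperature
        + 8.79903 * time
        - 582.142 * mordant
        - 0.0983083 * temperature * time
        + 6.75742 * temperature * mordant
        + 5.98164 * time * mordant
        - 0.0695139 * temperature * time * mordant
    )

# Optimum of Table 8: 100 °C, 60 min, copper sulfate (+1).
print(round(tensile_strength(100, 60, 1), 2))  # ≈ 376.79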

Table 7. Regression coefficients for the resistance model.
Coefficient    Estimate
Constant       –353,958
A:Temperature  7,57242
B:Time         8,79903
C:Mordant      –582,142
AB             –0,0983083
AC             6,75742
BC             5,98164
ABC            –0,0695139

Table 8. Optimization of resistance. Levels of the study factors (temperature, time and mordant) and the combination that maximizes resistance.
Factor       Low   High   Optimum
Temperature  80,0  100,0  100,0
Time         60,0  120,0  60,0
Mordant      –1,0  1,0    1,0
Goal: maximize resistance. Optimum value = 376,79 N. Since there is no prior information on the subject, this is considered a useful reference value. Mordant: aluminum potassium sulfate (alum) = –1; copper sulfate = 1.

Figure 1 presents the Pareto diagram; it indicates a representative influence of the interaction between the study factors temperature and time, and likewise of the interaction among temperature, time and mordant.

2.2 Analysis of Variance (ANOVA) for Elongation
Table 9 presents the analysis of variance for the elongation of the abaca yarn, with the variability partitioned among the effects; the statistical significance of each effect is then tested by comparing its mean square against an estimate of the experimental error. In this analysis, no effect has a P value below 0.05, so none of them is significantly different from zero at the 95.0% confidence level. The R-squared statistic shows that the fitted model explains 24.4359% of the variability in elongation; the R-squared adjusted for degrees of freedom, which is more appropriate for comparing models with different numbers of independent variables, is 0.0%. The standard error of the estimate shows that the standard deviation of the residuals is 0.662033, and the mean absolute error (MAE) of 0.445104 is the average value of the residuals. The Durbin-Watson (DW) statistic is 2.75242 (P = 0.9560); it tests whether there is significant correlation among the residuals based on the order in which the data were collected. Since the P value is greater than 5.0%, there is no evidence of serial autocorrelation in the residuals at the 5.0% significance level.

Fig. 1. Standardized Pareto chart for resistance (effects in decreasing order: AB, ABC, A:Temperature, BC, AC, B:Time, C:Mordant).

Table 9. Analysis of variance (ANOVA) for elongation.
Source         Sum of squares  Df  Mean square  F-ratio  P-value
A:Temperature  0,0532042       1   0,0532042    0,12     0,7327
B:Time         0,315104        1   0,315104     0,72     0,4108
C:Mordant      0,0176042       1   0,0176042    0,04     0,8440
AB             0,0392042       1   0,0392042    0,09     0,7693
AC             0,192604        1   0,192604     0,44     0,5182
BC             0,513338        1   0,513338     1,17     0,2974
ABC            0,00770417      1   0,00770417   0,02     0,8964
Blocks         0,845508        2   0,422754     0,96     0,4051
Total error    6,13603         14  0,438288
Total (corr.)  8,1203          23
R² = 24.4359%; R² (adjusted for d.f.) = 0.0%; standard error of the estimate = 0.662033; mean absolute error (MAE) = 0.445104; Durbin-Watson statistic (DW) = 2.75242 (P = 0.9560); residual autocorrelation at lag 1 = 0.403421.

Table 10. Regression coefficients for the elongation model.
Coefficient    Estimate
Constant       4,18167
A:Temperature  0,00741667
B:Time         0,0159444
C:Mordant      –0,788333
AB             –0,000134722
AC             0,00358333
BC             –0,0005
ABC            0,0000597222

Mathematical Model for Elongation. The equation of the fitted model is:

Elongation = 4.18167 + 0.00741667·Temperature + 0.0159444·Time − 0.788333·Mordant − 0.000134722·Temperature·Time + 0.00358333·Temperature·Mordant − 0.0005·Time·Mordant + 0.0000597222·Temperature·Time·Mordant
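The same kind of check can be applied to the elongation model: evaluating it at the optimum settings of Table 11 (100 °C, 120 min, copper sulfate coded as +1) reproduces the reported optimum of about 5.45% elongation. The short sketch below is for illustration only.

def elongation(temperature, time, mordant):
    """Predicted elongation (%) from the fitted factorial model (coefficients of Table 10)."""
    return (
        4.18167
        + 0.00741667 * temperature
        + 0.0159444 * time
        - 0.788333 * mordant
        - 0.000134722 * temperature * time
        + 0.00358333 * temperature * mordant
        - 0.0005 * time * mordant
        + 0.0000597222 * temperature * time * mordant
    )

# Optimum of Table 11: 100 °C, 120 min, copper sulfate (+1).
print(round(elongation(100, 120, 1), 5))  # ≈ 5.44667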

The values of the variables are expressed in their original units. Table 11 shows the combination of study factor levels (temperature, time and mordant) that maximizes yarn elongation in the indicated region, setting the high and low limits at that value. Optimum value = 5.44667%. In the absence of prior information, this is considered a useful reference value.

Table 11. Elongation optimization.
Factor       Low   High   Optimum
Temperature  80,0  100,0  100,0
Time         60,0  120,0  120,0
Mordant      –1,0  1,0    1,0

Standardized Pareto Chart for Elongation. Figure 2 presents the Pareto diagram for elongation; it indicates that none of the study factors (temperature, time, mordant) or their interactions has a representative influence, which the authors take as an indication that the model fits the data at the 95% confidence level.

Fig. 2. Standardized Pareto chart for elongation (effects in decreasing order: BC, B:Time, AC, A:Temperature, AB, C:Mordant, ABC).

3 Conclusions
From the results obtained, the abaca yarn dyed with the copper sulfate mordant and the Ambrosia peruviana extract at 100 °C for 60 min has an initial tensile strength of 517.27 N at day zero (Table 4); the mathematical model yields a tensile strength of 376.79 N after 60 days in seawater, a strength loss of 27.17%, making this the best treatment for strength. As for elongation, dyeing with Ambrosia peruviana and the copper sulfate mordant at 100 °C for 120 min gives the optimum elongation of 5.446% after exposure to seawater according to Table 11, with a loss of 45.02% with respect to the yarn dyed before exposure, making it the best treatment for elongation.

References 1. Pérez, G., Dagorn, L., Deneubourg, J.-L., Forget, F., Filmalter, J.D., Holland, K., et al.: Effects of habitat modifications on the movement behavior of animals: the case study of Fish Aggregating Devices (FADs) and tropical tunas. Mov Ecol. 8(1), 47 (2020). https://doi.org/ 10.1186/s40462-020-00230-w 2. Wan, R., Zhang, T., Zhou, C., Zhao, F., Wang, W.: Experimental and numerical investigations of hydrodynamic response of biodegradable drifting Fish Aggregating Devices (FADs) in waves. Ocean Eng. 244, 110436 (2022). https://doi.org/10.1016/j.oceaneng.2021.110436 3. Dupaix, A., Capello, M., Lett, C., Andrello, M., Barrier, N., Viennois, G., et al.: Surface habitat modification through industrial tuna fishery practices. ICES J. Mar. Sci. 78(9), 3075–88 (2021). https://doi.org/10.1093/icesjms/fsab5 4. Guerrero, P.: ¿Qué falta por hacer para mejorar el manejo de la pesca con plantados en el Pacifico Oriental?. Org.ec. [citado el 20 de junio de 2022]. https://www.wwf.org.ec/?365362/ plantados


5. Barrera, J.G.: Socios de TUNACONS alcanzan el compromiso de reemplazar los plantados por “EcoFADs”. Actoresproductivos.com. Actores Productivos, [citado el 20 de junio de 2022] (2022). https://actoresproductivos.com/2022/01/03/socios-de-tunacons-alcanzanel-compromiso-de-reemplazar-los-plantados-por-ecofads/ 6. Bodin, N., Amiel, A., Fouché, E., Sardenne, F., Chassot, E., Debrauwer, L., et al.: NMRbased metabolic profiling and discrimination of wild tropical tunas by species, size category, geographic origin, and on-board storage condition. Food Chem. 371, 131094 (2022). https:// doi.org/10.1016/j.foodchem.2021.131094 7. Widyatmoko, A.C., Hardesty, B.D., Wilcox, C.: Detecting anchored fish aggregating devices (AFADs) and estimating use patterns from vessel tracking data in small-scale fisheries. Sci Rep. 11(1), 1–11 (2021). https://doi.org/10.1038/s41598-021-97227-1 8. Gilman, E., Musyl, M., Suuronen, P., Chaloupka, M., Gorgin, S., Wilson, J., et al.: Highest risk abandoned, lost and discarded fishing gear. Sci. Rep. 11(1), 7195 (2021). https://doi.org/ 10.1038/s41598-021-86123-3 9. Van Sebille, E., Wilcox, C., Lebreton, L., Maximenko, N., Hardesty, B.D., van Franeker, J.A., et al.: A global inventory of small floating plastic debris. Environ. Res. Lett. 10(12), 124006 (2015). https://doi.org/10.1088/17489326/10/12/124006 10. Simbaña, E.A., Ordóñez, P.E., Ordóñez, Y.F., Guerrero, V.H., Mera, M.C., Carvajal, E.A.: Abaca. In: Handbook of Natural Fibres. Elsevier, Amsterdam, p. 197–218 (2020) 11. Computational investigations on interactions between DNA and flavonols. Biointerface Res. Appl. Chem. 12(6), 8117–27 (2021). https://doi.org/10.33263/briac126.81178127 12. Adeel, S., Naseer, K., Javed, S., Mahmmod, S., Tang, R.-C., Amin, N., et al.: Microwaveassisted improvement in dyeing behavior of chemical and bio-mordanted silk fabric using safflower (Carthamus tinctorius L) extract. J. Nat. Fibers 17(1), 55–65 (2020). https://doi.org/ 10.1080/15440478.2018.1465877 13. Rani, N., Jajpura, L., Butola, B.S.: Sustainable coloration of protein fibers using Kalanchoepinnata leaf extract. J. Nat. Fibers 19(1), 115–30 (2022). https://doi.org/10.1080/15440478. 2020.1731904 14. Yánez, C.A., Rios, N., Mora, F., Rojas, L., Diaz, T., Velasco, J., et al.: Composición quimica y actividad antibacteriana del aceite esencial de Ambrosia peruviana Willd. de los llanos venezolanos. Rev. Peru. Biol. 18(2), 149-151 (2011). https://doi.org/10.15381/rpb.v18i2.245 15. Armecin, R.B., Sinon, F.G., Moreno, L.O.: Abaca fiber: a renewable bio-resource for industrial uses and other applications. In: Hakeem, K.R., Jawaid, M., Rashid, U. (eds.) Biomass and Bioenergy, pp. 107–118. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-075 78-5_6 16. Baaka, N., Ben Ticha, M., Haddar, W., Amorim, M.T.P., Mhenni, M.F.: Upgrading of UV protection properties of several textile fabrics by their dyeing with grape pomace colorants. Fibers Polym. 19(2), 307–312 (2018). https://doi.org/10.1007/s12221-018-7327-0 17. Moreno, G., et al.: Fish aggregating devices (FADs) as scientific platforms. Fish. Res. 178, 122–129 (2016). https://doi.org/10.1016/J.FISHRES.2015.09.021 18. Pérez, G., et al.: Correlated random walk of tuna in arrays of fish aggregating devices: a field-based model from passive acoustic tagging. Ecol. Modell. 470, 110006 (2022). https:// doi.org/10.1016/j.ecolmodel.2022.110006 19. Borrelle, S.B., et al.:Why we need an international agreement on marine plastic pollution. 
In:Proceedings of the National Academy of Sciences, U.S.A, vol. 114, no. 38, pp. 9994 – 9997 (2017). https://doi.org/10.1073/pnas.1714450114 20. Morales-Caselles, C., et al.: An inshore–offshore sorting system revealed from global classification of ocean litter. Nat. Sustain. 4(6), 484–493 (2021). https://doi.org/10.1038/s41893021-00720-8

Cryptocurrencies Towards Financial Innovation in the Microenterprise Sector Jessica Quispe(B)

, Cesar Segovia , Rubén Jaramillo , and Darwin Arias

Instituto Tecnológico Vida Nueva, Quito 170126, Ecuador [email protected]

Abstract. Due to changes in monetary systems, new transactional providers have entered the development of electronic commerce. One of the difficulties microenterprises face is their limited capacity to manage cryptocurrencies, which are still little known in Ecuador; for this reason there are no authorized financial entities that regulate and monitor the use of crypto-assets. This research was therefore carried out with the objective of identifying the possibility of using crypto-assets in the commercial transactions of microenterprises, which can be considered a financial innovation. The study follows an exploratory qualitative methodology based on historical studies and the analysis of theories, complemented by a survey applied to a study group of 243 microenterprises in southern Quito, which provided information about their incomes. From the information analyzed, it was determined that financial transactions can adapt over time to changes in the platforms used; given this adaptation of the market, it is proposed that electronic currencies can be applied to the commercial exchanges carried out by microenterprises. Keywords: Commercial transactions · Cryptocurrencies · Blockchain · Bitcoin · Financial innovation

1 Introduction
The dynamic changes in society, along with the rapid growth of technologies, have influenced markets to such an extent that they have transformed the way business is done; one of these changes involves cryptocurrencies as a new payment mechanism. Twenty-first-century society has set a milestone in the way people invest, speculate and even gamble in the markets. The generation of cryptocurrencies requires solving a computationally heavy problem, and whoever solves it can obtain a certain amount of cryptocurrency as payment for the resources spent [1]. According to its studies, the Central Bank of Ecuador states that transactions with virtual money are carried out on various digital platforms and on the Internet, which is not controlled because it is not a competence of the state; everything executed is in accordance with what is established in Article 94 of the Organic Monetary and Financial Code.
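To make the "computationally heavy problem" concrete, the toy Python sketch below searches for a nonce whose SHA-256 hash of the block data has a given number of leading zero bits; this is a simplified stand-in for real mining, not the actual Bitcoin protocol, and the difficulty value is arbitrary.

import hashlib

def proof_of_work(block_data: str, difficulty_bits: int = 18) -> int:
    """Return a nonce such that SHA-256(block_data + nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce  # finding this nonce is the costly work rewarded with new coins
        nonce += 1

print(proof_of_work("example block data"))

Raising difficulty_bits makes the search exponentially more expensive, which is how the creation rate of new blocks is limited.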


1.1 Financial Transaction
A financial transaction is a term fully applicable in everyday life. It is an activity carried out between two individuals, involving the exchange of goods or services for a certain amount of money; thus, a transaction applies to economic markets in which capital is used to pay for a product [2]. In this sense, it can be thought of as the exchange between a buyer and a seller in which fiat money is somehow involved. A banking exchange comprises different processes: on the one hand, the acquisition of a product or service and, on the other, the disbursement for the receipt of an input. In finance it is not necessary to pay for an acquisition with physical money, so new forms of payment can be considered, such as bitcoin as a transactional means of payment [3].
[4] states that the notions of space and time are a determining factor when using the Internet, so that the transmission and analysis of information force the birth of a new world together with its interaction components, giving rise to new terms such as electronic computer capitalism, the promise of insights and the world of computer heuristics, where the human being is seen as an animal on the verge of extinction or as a slave of an economic circle. Therefore, just as cyberspace was born in the past, we now observe the beginnings of a crypto space, not only with cryptocurrencies but also with crypto documents and the crypto economy.
Digital Financial Innovation
Due to the different changes and progress in the management of digital platforms, the private sector has sought to incorporate new services that allow the adequate use of new products in economic activities, including affordable financial services in view of the pandemic and the global financial crisis [5].
Financial Inclusion
Financial inclusion can be understood as universal access to a wide range of banking services, which has been making great strides in recent years despite a changing economy; World Bank data show that 1.2 billion people adopted banking services between 2011 and 2017. As a result, several companies have been expanding their range of services with micro-loans, savings accounts and insurance. Several payment applications have now been incorporated, based on interfaces and quick response (QR) codes, allowing new types of financial services to be acquired [6].
Bitcoin as a Financial Innovation
Bitcoin is considered an alternative means of financial transaction that standardizes several effects in the exchange of currencies, mainly in commercial transactions such as buying, selling and investing regardless of the item. This operation is considered an innovation that helps reduce financial costs that are charged without justification in banking products. [7] states that innovation in this new money system is very restricted and that the inability to issue Bitcoin makes it a peculiar asset, with characteristics related to profitability and risk as a diversifying effect, to which companies and organizations should pay attention.

Digital Wallets
Digital wallets are similar to banks in that they provide storage services for users' cryptocurrencies. Customers deposit into the wallet by transferring their cryptocurrencies to the blockchain addresses of the wallet service and can make payments by sending directly from online wallet addresses. Instead of keeping their public/private key pairs offline, users access their wallets with a traditional username and password pair (a minimal key-pair sketch is given after Table 1).
Cryptocurrencies
[8] notes that cryptocurrencies have attracted unprecedented public attention. All transactions made with cryptocurrencies such as Bitcoin and Litecoin are permanently recorded in ledgers, which are generally public, in contrast with fiat currencies. The result is a complex financial network that allows the relationships among its various characteristics to be studied. Among cryptocurrencies, bitcoin stands out as a revolutionary digital payment system that does not require the presence of a third party such as a central bank, a commercial bank or a savings cooperative. Cryptocurrencies can be used as a means of payment and store of value without the need for a regulatory body to authorize the transaction (Table 1).
Cryptocurrencies are defined as a digital means of disbursement and exchange effected through a medium called the blockchain; these digital currencies rely on encryption algorithms that enable buying and selling transactions. [9, 10] affirm that cryptocurrencies possess specific characteristics (Table 2).

Table 1. Cryptocurrency characteristics.
Characteristic   | Description
Finite           | [10] Cryptocurrencies are finite; there is a limited number of them
Storage          | They can be stored on external disks or in the cloud, guaranteeing their durability and availability
Decentralization | They are a decentralized digital medium, not regulated by any government or central financial institution
Security         | Transactions use an encryption system based on blockchain technology
Note. The characteristics of cryptocurrencies are summarized in this table.
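As an illustration of the key-based access just described (in contrast to a username/password login), the sketch below generates an elliptic-curve key pair with the Python cryptography package and signs a payment message; the address derivation shown is a simplification for illustration only, not the address format of any particular cryptocurrency, and the message is hypothetical.

import hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())  # kept secret by the wallet owner
public_bytes = private_key.public_key().public_bytes(
    serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
)
address = hashlib.sha256(public_bytes).hexdigest()[:40]  # simplified "address" derived from the public key

message = b"pay 100 units from this address"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))  # authorizes the payment

# Anyone holding the public key can verify the signature without learning the private key.
private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("address:", address)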

Blockchain
The blockchain is a series of blocks forming a database set up as a public ledger of all transactions executed by the participants. All transactions recorded in this public ledger are verified by the system, and once information has been entered it cannot be deleted [1].

Table 2. Types of cryptocurrencies. Traditional money transfer operators (MTOs) are financial companies, not banks, whose function is to make international fund transfers using cross-border networks [1].
Cryptocurrency   | Characteristics
Bitcoin          | Considered the first cryptocurrency, created by Satoshi Nakamoto in 2009 and inspired by Nick Szabo's study known as "bit gold"
Altcoin          | Altcoin refers to alternative currencies; an altcoin is any currency other than Bitcoin [2]
Cryptocurrencies | An alternative currency system proposed as a response to climate and financial change, with an emphasis on efficiency and freedom
Privacy coin     | Uses a consensus model and a cryptographic technique such as zero-knowledge proofs; its disadvantage is that it does not allow the account status to be observed [3]
Stablecoin       | Creates a 1:1 relationship with a fiat currency such as the US dollar, i.e., a crypto-asset that is stable relative to the U.S. dollar [4]
Note. The table shows the different forms in which currencies appear in digital wallets.

This blockchain contains specific and verifiable records of every exchange made [2]. As an example, it is easier to steal a fruit from a basket stored where no one can see it than to steal a fruit kept in a public place watched by thousands of people [3]. Some characteristics of the blockchain that are essential for the adoption of the technology in microenterprises are listed below (Table 3).
Bitcoin
Bitcoin is the name given to a type of virtual currency that serves as a means of electronic exchange and allows the purchase of products; it is decentralized, that is, there is no controlling entity that regulates it in a standardized way or is responsible for its issuance and movements. Digital currencies such as Bitcoin, SELTcoin, Ether and Solar Coin have existed since 2009 and have been considered a threat to the traditional banking system due to their decentralized control; currencies such as Bitcoin have attracted much attention, and there is currently very little literature on the field of cryptocurrencies. Bitcoin and other cryptocurrencies, being innovative technologies, are rapidly entering the financial market and changing the world economy in a major way; however, the acceptance of this technological means of transaction among consumers is low. By 2020, more than 7,000 cryptocurrencies were actively traded in over 20,000 online markets; their market value reached 300 billion US dollars, even though these currencies have no backing that generates confidence as a tangible asset [14].

Table 3. Characteristics of the blockchain.
Characteristic   | Description
Decentralization | [11] Users of this technology have greater control over their assets without having to rely on third parties such as traditional economic regulators
Security         | [12] The distributed and encrypted nature of this technology secures the data stored in the blockchain and the information shared between nodes, applying asymmetric keys to all transactions, which are signed and therefore difficult to breach
Traceability     | [12] The blockchain can be traversed, so the operations carried out on a given address can be known
Privacy          | [13] One of the fundamental characteristics of the blockchain, considered its greatest strength for the management of cryptocurrencies in the market
Note. The characteristics of the blockchain are summarized in this table.

1.2 Cryptocurrencies as a Financial Innovation
[15] states that the emergence of new values and technological changes forces companies to incorporate new options for commercial exchange, which allow several forms of financial linkage among suppliers, manufacturers, distributors and customers when making strategic alliances aimed at improving the business. The streamlining of costs and the easy handling of money make cryptocurrencies an alternative for the financial transaction process: they only require a peer-to-peer system, without intermediaries, and as a technological advance their use is simple, with no need to understand complex computer systems. Payment methods have undergone various changes throughout history, adapting from paper and cash transactions to plastic; with new technologies and access to the Internet, new forms of payment have emerged in the digital landscape.

2 Methods
This research follows a qualitative, exploratory approach with a documentary bibliographic analysis of databases of classic and contemporary authors, in which 119 indexed articles were recorded; those directly related to the proposed topic were analyzed: 8 from Scopus, 46 from Web of Science and 65 from Springer Link. As a second step, a data collection instrument was applied, which made it possible to know the flow of capital managed by microenterprises and the characteristics of cryptocurrencies and blockchain.


2.1 Bibliometric Analysis
For this topic, the search was made in three databases, Scopus, Web of Science and Springer Link, which yielded the following results (Table 4).

Table 4. Database analysis.
Search equation: ALL (((((("Microenterprise") AND ("bitcoin") OR ("Cryptocurrency transactions"))))))
Database        Document results
Scopus          8
Web of Science  46
Springer Link   65
Note. The table shows the number of documents retrieved from each database.

From the information processed in VOSviewer from the searches, two focuses of information emerge. The first focuses only on the topic of cryptocurrency transactions, as evidenced by the following word co-occurrence map (Fig. 1).

Fig. 1. Analysis of word co-occurrence in the articles


In the case of the second focus of the processed information, there is a co-occurrence of words mixing cryptocurrency transactions with business management, tourism and microenterprises, as evidenced by the following image (Fig. 2).

Fig. 2. Conceptual analysis of articles related to the topic of study

Considering the above, the following articles were prioritized for analysis with respect to the topic under investigation (Table 5).

Table 5. Bibliometric analysis.
Year | Paper | Bibliometric analysis
2015 | Creating value together: the emerging design space of peer-to-peer currency and exchange | The objective of this research is to analyze the encryption and anonymity involved in the use of cryptocurrency-based systems and their security in the financial field. The methodology was the design of algorithms and mathematical calculations to determine the level of encryption of such systems. The results show that the proposed mathematical model is valid and can be applied to anonymous blockchain-based cryptocurrencies, demonstrating the viability and efficiency of the scheme.
2020 | Blockchain technology adoption behavior and sustainability of the business in tourism and hospitality SMEs: an empirical study | This research aims to propose a cryptocurrency model and an instrument that can contribute to the microfinance environment, following a design science research methodology as used in information systems and production engineering; the authors obtain as results the legislative and financial aspects that impact the market.
2022 | Building financial resilience | The objective of this paper is to highlight several emerging theoretical, methodological and applied research challenges in the data analysis of blockchain and cryptocurrencies that will be of interest to the general statistical community. The proposed methodology raises a series of open research questions, resulting in statistical demonstrations through calculations and equations applicable to the proposed work.
2020 | Acceptance of financial transactions using blockchain technology and cryptocurrency: A customer perspective approach | In this paper, the authors inquire into customer behavior towards cryptocurrency transactions compatible with blockchain technology. The methodology applies surveys to a group of international individuals with different backgrounds and experiences who use monetary transaction technologies for domestic and international purposes. The survey data were analyzed with Partial Least Squares Structural Equation Modeling (PLS-SEM) using SmartPLS 3.2.
2021 | A methodology for selection of a blockchain platform to develop an enterprise system | The objective of this paper is to develop a methodology to select the most suitable blockchain platform using domain-specific blocks. The methodology selects an experimental type of blockchain platform. The results reveal that SMART does not require any special or complex software, while MACBETH, DCE and CA require special software, which makes them more difficult to use.
2020 | On the role of local blockchain network features in cryptocurrency price formation | The paper aims to highlight several emerging theoretical, methodological and applied research challenges of blockchain data analysis that will be of interest to the wider statistical community. The methodology focuses on a single chain and therefore does not implicitly take into account higher-order relationships between chains. The authors conclude that chainlets, in particular extreme chainlets, appear to contain unique predictive information about cryptocurrency market movements, including shocks, and thus serve as possible indicators of hidden cryptocurrency risks.
2019 | An efficient linkable group signature for payer tracing in anonymous cryptocurrencies | The objective of this research was to analyze anonymous cryptocurrencies and their relevance for users who intend to preserve their privacy when performing online transactions. The authors use the documentary research method, focusing on the financial management implied by this means of monetary exchange. The results show that the cost of the setup algorithm tends to remain constant as group membership increases, at roughly 41.125 ms, because the variables in the initialization are fixed and are not affected by the number of users in the group; it therefore takes approximately constant time.
2019 | Research challenges and opportunities in blockchain and cryptocurrencies | This research addresses the difficult computational problems used to limit the rate at which new blocks are created. The authors use the descriptive research method, conducting an in-depth analysis of the drawbacks and resources needed to support blockchains. Their results analyze Bitcoin-style proofs of work, carried out on computational systems that have no useful application other than securing the blockchain.
2021 | Knowledge discovery in cryptocurrency transactions: A survey | The authors' objective is to present the management of financial transactions as a tool with privacy vulnerabilities. The methodology is exploratory, analyzing the information available on the topic and allowing a qualitative analysis of its procedures, describing different techniques in order to determine the potential users of this technology. As a result, the standardization of analysis processes is established to create a strategic basis of secure transactions in which people who exchange value are not victims of cyber-attacks.
2019 | Sustainable development and cryptocurrencies as private money | The objective of the research is to analyze the debate on the effectiveness and impact of taxes on international financial transactions. The author uses the descriptive research method, pointing out the adverse effects of taxes in different capital markets, and concludes that financial transactions make markets a subject of public and political interest, with corresponding economic effects.
2021 | Cryptocurrencies for microfinance | This work involves a study of the fundamentals, strategies and tools of machine learning as a step towards addressing fraud in algorithm-driven financial transactions. The methodology is the design of algorithms, which can be challenging in a financial environment that does not allow an appropriate balance in the distribution of cases that can lead to fraud. As a result, fraud situations can be addressed through different, supervisable approaches to financial transactions.
2016 | Cryptocurrencies and business ethics | This research aims to establish different consumer protection and financial education schemes, segmenting incorporation alternatives within the branches of cryptocurrencies. The methodology is the application of four cryptocurrency projects, Ethereum, Binance Coin, Bitcoin and Litecoin, in order to understand the panorama of the use of cryptocurrencies as means of payment and investment. The result is financial inclusion in the commercial sphere, promoting financial education through programs that expand products that can be adapted to the needs of the population.
2020 | Transforming business using digital innovations: the application of AI, blockchain, cloud and data analytics | This research aims to analyze the characteristics of cryptocurrencies and their incorporation, in order to assess the impact of the existing financial blockage. The research method is historical, focusing on the economic management and financial impact that the use of this monetary exchange tool would cause. The result is evidence of the consequences of financial operations, which would make it possible to avoid the sanctions imposed when making charges and payments without the need for an intermediary, an advantage to be taken into account.
Note. The table shows the analysis of the selected articles.

Research Instrument
A total of 243 surveys were applied to microenterprises in the commercial sector of southern Quito, in the Guamaní sector. The survey consisted of 14 questions, with responses identified in relation to the level of income; the results are shown below (Table 6).

Table 6. Approximate income level.
Approximate monthly income level of the microenterprise  Frequency  Percentage  Valid percentage  Cumulative percentage
From 1 to 1000                                           199        81,2        81,2              81,2
From 1001 to 5000                                        39         15,9        15,9              97,1
Greater than 5000                                        7          2,9         2,9               100,0
Total                                                    245        100,0       100,0
Note. The table shows the approximate monthly income level of the microenterprises.
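The percentage columns of Table 6 follow directly from the raw frequencies; the small Python sketch below recomputes them from the reported counts (total of 245 responses), purely as an illustration.

counts = {"From 1 to 1000": 199, "From 1001 to 5000": 39, "Greater than 5000": 7}
total = sum(counts.values())  # 245
cumulative = 0.0
for label, n in counts.items():
    percentage = 100 * n / total
    cumulative += percentage
    print(f"{label:<20} {n:>4}  {percentage:5.1f}%  cumulative {cumulative:5.1f}%")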

From the survey applied to the microenterprises, it can be seen that the perception of microentrepreneurs regarding their income is favorable for 65.7%, being considered medium or acceptable, which denotes a symptom of economic improvement in productive activities.

3 Results
According to the authors [7, 15], the different changes in financial transactions point to the implementation of cryptocurrencies as an alternative means of payment in commercial activities; financial dynamics have sought new forms of financial services, and cryptocurrencies have been seen as one such alternative. [16] mentions that the use of the different cryptographic schemes is complex, which calls for a more systematic exploration of their application; for this, alternatives for continuous learning in financial transactions are needed, providing an advantage in terms of security and control. This gives the microenterprise market more payment options, as innovation and technology advance at an accelerated pace, offering microenterprises an opportunity to manage new financial scenarios.
The information obtained shows that 81.2% of the microenterprises have monthly incomes from $1 to $1,000, which is significant or acceptable. It is important to emphasize that in Ecuador microenterprises are born from entrepreneurship or from citizens' need for a source of income to support their families, given the lack of employment that provides a fixed monthly income and the economic instability the country is suffering. Microenterprises can therefore use cryptocurrencies as an alternative form of financial transaction that allows them to innovate and manage new markets, which would push them to expand growth opportunities and explore different financial scenarios.

4 Discussion
Contrasting Analysis of the Approaches to Cryptocurrencies. The use of digital currencies, which are decentralized from state or bank control, represents an electronic payment alternative [16]; this proposal by Satoshi Nakamoto was based on cryptography and complex computer algorithms running on a distributed network of computers, connected to each other through the Internet, to process the data generated so that it can be used as a method of payment. The use of cryptocurrencies in Ecuadorian microenterprises represents an opportunity to change the income scenario: by diversifying their payment methods they implement distributed ledger technology (DLT), the registration of ownership and certification of digital rights, and cryptographic protection and transaction processing. [17] states that, in their financial operations, this would turn the Ecuadorian microenterprise into a competitor, moving from monthly revenues of 1 to 1,000 dollars to more than 5,000, due to the opening of markets.
Given the digitalization of business and the contrasted analysis of the information collected on the use of cryptocurrencies, the integration of crypto-assets, including bitcoin, is proposed as an alternative for managing commercial exchanges between distributors, suppliers and customers, since the elements essential for its incorporation are present: the inputs, the outputs, the identifier and the commission rate. This allows crypto-assets to be offered to microenterprises and profitable economic gains to be generated by working with cryptocurrencies. As can be seen in the following figure, the operation of cryptocurrencies between microenterprises would proceed as follows (Fig. 3):
1. Microenterprise A wants to send money to microenterprise B.
2. The transaction is represented in the network as a block.
3. The block is transmitted to all nodes of the network.
4. The transaction is approved as valid.
5. The block is added to the chain, providing the record of the transactions.
6. The money is approved and transferred from microenterprise A to microenterprise B.
Fig. 3. How the blockchain works in financial transactions
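A minimal, illustrative sketch of the record-keeping idea behind these six steps is given below: each transaction becomes a block whose hash covers the previous block's hash, so tampering with an earlier payment breaks the chain. The broadcast and consensus validation of steps 3 and 4 are deliberately omitted, and the names and amounts are hypothetical.

import hashlib, json, time

def make_block(transaction: dict, previous_hash: str) -> dict:
    """Step 2: represent the transaction as a block linked to the previous block."""
    block = {"transaction": transaction, "timestamp": time.time(), "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block({"note": "genesis"}, previous_hash="0" * 64)]

# Step 1: microenterprise A wants to send money to microenterprise B.
payment = {"from": "microenterprise A", "to": "microenterprise B", "amount": 100}

# Step 5: once validated, the block is appended and becomes part of the permanent record.
chain.append(make_block(payment, previous_hash=chain[-1]["hash"]))

for block in chain:  # step 6 relies on this shared, verifiable record
    print(block["hash"][:16], block["transaction"])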

5 Conclusions
This research shows that technological innovation allows microenterprises to maintain an active commercial system; they should consider alternative means of commercial exchange that help distributors, suppliers and customers to speed up the payment of commercial transactions. There are different types of cryptocurrencies that can be adapted to the handling of commercial transactions and that offer microenterprises a quick solution to the setbacks and bottlenecks caused by the number of intermediaries a purchase must pass through. Given the working capital they manage, microenterprises should consider new alternatives for commercial transactions; this may facilitate the inclusion of financial innovations and make shopping faster and more affordable for customers. The essential elements for the use of cryptocurrencies such as bitcoin must be carefully established so that microenterprises can adapt smoothly to handling such commercial transactions and take full advantage of them.

References 1. Auer, R., Claessens, S.: Regulación de las criptomonedas: evaluación de reacciones del mercado (2018)


2. Iván, X., Herrera, E., Navarrete Mendieta, G., Sulang, E., Chiriboga, W.: Yachana ¿Pueden ser reguladas las criptomonedas? Caso Bitcoin y Libra,” YACHANA Revista Científica, vol. 10, no. 1, pp. 23–37. https://orcid.org/0000-0002-7807-6078 3. Ávila, E.: ¿Son los Bitcoins medios de pago? In: NOTARIO, vol. 8 (2019) 4. López-Rodríguez, C.E., López-Ordoñez, D.A., Poveda-Aguja, F.A., Lancheros-Pachón, J.: Cryptocurrencies as means of financial transaction: perspectives on the population of Bogotá, Colombia 5. Group of Thirty, Digital currencies and stablecoins: risks, opportunities, and challenges ahead 6. Goldfarb, A., Tucker, C.: Digital economics. J. Econ. Lit. 57(1), 3–43 (2019). https://doi.org/ 10.1257/jel.20171452 7. “El Bitcoin: una revisión de las ventajas y desventajas de las transacciones comerciales con dinero virtual,” Cienc. Lat. Rev. Cient. Multi. 5(6), 13040–13059 (2021) https://doi.org/10. 37811/cl_rcm.v5i6.1306 8. Mahmoud, Q.H., Lescisin, M., AlTaei, M.: Research challenges and opportunities in blockchain and cryptocurrencies. Internet Technol. Lett. 2(2), e93 (2019). https://doi.org/ 10.1002/itl2.93 9. Al-Rakhami, M.S., Al-Mashari, M.: A blockchain-based trust model for the internet of things supply chain management. Sensors 21(5), 1–15 (2021). https://doi.org/10.3390/s21051759 10. La Blockchain Fundamentos, Aplicaciones y Relación con otras Tecnologías Disruptivas Dolader, Bel Y Muñoz” 11. Nuryyev, G., et al.: Blockchain technology adoption behavior and sustainability of the business in tourism and hospitality SMEs: an empirical study, Sustainability (Switzerland), 12(3) (2020). https://doi.org/10.3390/su12031256 12. Nanayakkara, S., Rodrigo, M.N.N., Perera, S., Weerasuriya, G.T., Hijazi, A.A.: A methodology for selection of a Blockchain platform to develop an enterprise system. J. Ind. Inf. Integr. 23, 100215 (2021). https://doi.org/10.1016/j.jii.2021.100215 13. Zhang, L., Li, H., Li, Y., Yu, Y., Au, M.H., Wang, B.: An efficient linkable group signature for payer tracing in anonymous cryptocurrencies. Futur. Gener. Comput. Syst. 101, 29–38 (2019). https://doi.org/10.1016/j.future.2019.05.081 14. Dierksmeier, C., Seele, P.: Cryptocurrencies and business ethics. J. Bus. Ethics 152(1), 1–14 (2016). https://doi.org/10.1007/s10551-016-3298-0 15. Gil, S., Ernesto, O., Varela, T.: Criptomonedas, as a business opportunity for microenterprise of the tourism sector in the south east zone of the state of mexico. Revista Global de Negocios, 6(1), 93–104 (2018). https://ssrn.com/abstract=3041472www.theIBFR.com 16. Amarasinghe, N., Boyen, X., McKague, M.: The cryptographic complexity of anonymous coins: a systematic exploration. Cryptography 5(1), 10 (2021). https://doi.org/10.3390/crypto graphy5010010 17. Vaz, J., Brown, K.: Sustainable development and cryptocurrencies as private money. J. Ind. Bus. Econ. 47(1), 163–184 (2019). https://doi.org/10.1007/s40812-019-00139-5

3D Modelling of Freedom Summit for Virtual Environments Aguas Luis1,3(B)

, Suárez Lizbeth2

, Coral Rosario1

, and Machay Byron2

1 Universidad Tecnológica Israel, Quito, Ecuador

[email protected] 2 Instituto Superior Tecnológico Vida Nueva, Quito, Ecuador 3 Universidad Técnica de Manabí, Portoviejo, Ecuador

Abstract. The conservation of a city's historical heritage requires, among other things, updated information that contributes to its documentation and protection and to decisions about urban transformations. This need, together with accelerated technological development, has motivated the creation of specific software tools to collect geographical, topographic and/or archaeological information. Modelling a heritage asset is a task of great complexity because of its structural or access characteristics, and for these models to approach reality as precisely as possible, three-dimensional representations are used. Depending on the objective of the required three-dimensional model, different techniques and/or programs are used. This paper presents the result of applying such techniques and programs to obtain an approximation to the reality of a heritage asset: the monument of the Cima de la Libertad (Summit of Liberty) in the city of Quito. The modelling process considered the architectural conditions of the monument as well as the characteristics of the software tools that allow automation of the process, speed and ease of use, in order to achieve greater accuracy and realism of the model and to make the best possible use of it as an approximation to the heritage asset. The objective of this project and its level of requirements were decisive in choosing the most appropriate virtual construction method for modelling the Cima de la Libertad monument in the city of Quito. Keywords: Modelling · 3D · 2D · Wings · VRML · Rendering

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 548–560, 2023. https://doi.org/10.1007/978-3-031-25942-5_43

1 Introduction

The constant technological changes and the great need for new and better tools to meet the requirements of creation, visualization and interaction with models of reality, which handle large amounts of information, define a field of application that implies the generation of new knowledge and new ways of applying software engineering and computer graphics [1]. To the above must be added the growth in the capabilities of supercomputers and the improvement in the scope of their graphic capabilities, which has allowed the inclusion of programming techniques in graphic design and 3D visualization and modelling, which provides a better understanding of a real phenomenon or scenario through its three-dimensional representation.

A 3D modelling system is a technology that attempts to virtually represent real objects while maintaining their own characteristics, considering the various techniques and procedures that handle the design of these elements [2]. In this research, in addition to the theoretical framework corresponding to 3D modelling, we try to present the result obtained from the use of the aforementioned techniques and programs to achieve an approximation to the reality of a heritage asset such as the monument of the Cima de la Libertad of the City of Quito and in this way create a tool that allows the virtual approach of users to the monument.

2 Methodology

2.1 Concepts and Characteristics of 3D Modeling

A three-dimensional object (inanimate or living) can be modelled from its mathematical representation using specialized software. Through a process called 3D rendering, or by using physics simulation software, this model can be viewed as a two-dimensional image. The 3D model can also be physically created using 3D printing devices. Structures can be created manually or automatically. The manual creation of a 3D model of a real object requires a process of preparing geometric information similar to that of the plastic arts and sculpture. Nowadays it is very common to use virtual environments for 3D modelling, which are based on the mathematical concepts of linear algebra and spatial geometry.

Geometric modelling can be understood as the process of object abstraction that, through the application of geometry concepts, produces a mathematical representation that conceptually reflects the essential characteristics of an object or phenomenon. The modelling process can be improved according to the degree of accuracy with which the object must be represented. In the case of three-dimensional space, magnitudes such as depth, width, length and height are described in a model, and if the model is required to approximate reality, schemes can be built that take into account characteristics and details of the element to be modelled, such as colour, texture, shadows, etc. [3].

The simplest model that can be used to represent an object in a computer is the set of Cartesian coordinates of the points that delimit the real object, captured by devices for the entry of graphic information such as cameras or scanners; this is the simplest way to start the process of describing a three-dimensional model. The result of this process using Cartesian coordinates is a finite collection of points, which can optionally include colour, direction and texture. This collection of points or "point cloud" is usually used to define the boundary of the object to be modelled. To represent the boundary surface of the object, polygonal meshes are used, specifically triangular meshes, which are considered the simplest and most practical model. Image processing techniques such as low-pass and high-pass filters can also be used, which detect the edges of images and objects [4].
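As a minimal illustration of turning a point cloud into a triangular boundary mesh (a sketch only, not the authors' Wings 3D workflow), the following Python snippet uses SciPy's convex hull, which is the simplest possible "shell"; detailed heritage scans would require more elaborate surface-reconstruction methods.

```python
# Minimal sketch: from a 3D point cloud to a triangular boundary mesh.
# The convex hull is the simplest possible "shell"; detailed monuments
# require more elaborate surface-reconstruction techniques.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3))   # stand-in for a scanned point cloud

hull = ConvexHull(points)            # triangulates the boundary surface
triangles = hull.simplices           # (n_faces, 3) indices into `points`

print(f"{len(points)} points -> {len(triangles)} triangular faces")
```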


In the modelling process, one of the first problems that arises is to obtain a mesh from the most representative points of the object; because many possible meshes can be found, it is necessary to find the one that most adequately represents the object, for which it is necessary to make use of elementary concepts of plane and three-dimensional geometry for the triangulation or cube design of the obtained point cloud.

2.1.1 Modeling Techniques

There are various 3D modelling techniques, according to the characteristics of the object contour:

• Solid Modelling: Known as solid structure modelling. Models represent the volume of the object being defined, and generally include the density of the internal material, the center of mass, etc. They are used to build computer models and in industrial and medical software.
• Contour Modelling: Known as profile representation. Border models only represent the surface that bounds the object (conceptually the "shell"). They are simple to define and modify.
• Graphs in Three Dimensions: In representational graphics, the computer uses points and edges that join these points, and in this way the structure of the elements is created. The grouping of points that form an object is called a point mesh; so that the design approximates reality, the computer joins the points creating a structure of the object (known as the highlighting of an object) and then a profiled mask to give it a solid appearance.

2.2 Development Methodology

For the construction of this project, OOHDM (Object Oriented Hypermedia Design Method) was used as the development methodology. OOHDM is a model construction methodology proposed by Rossi and Schwabe [5] for the development of hypermedia applications; it aims to make the design of multimedia applications more efficient and to simplify the process. OOHDM is not a modelling language, but rather defines work guidelines to develop multimedia applications in a methodological way, focused mainly on design. In traditional application development, the OOHDM model resembles the model-view-controller pattern. The OOHDM methodology considers that the development of a hypermedia application comprises five main phases [6]:

Phase I. Gathering of requirements. Use case diagrams are drawn up for each scenario with the purpose of obtaining in detail the requirements and actions of the system.

Phase II. Conceptual design. Using object orientation techniques, a model is defined according to [2] that represents the application domain. The result is a scheme of classes


that are related to each other and that simplifies the application into subsystems, taking into account the profile of each user and the tasks they carry out.

Phase III. Navigational Design. In OOHDM, the navigation system defines an application. In this phase, all the activities that the user is going to carry out in the project are considered, starting from the conceptual scheme defined in the conceptual design phase. It is important to consider that different navigational models can be created from the same object (giving rise to a different application for each of the models obtained).

Phase IV. Abstract Interface Design. After the navigational structure has been defined, in this phase it is prepared so that it is identifiable by the user: it is defined which interface objects the user will see, at what time and place the different elements will be displayed, which interface element will act in each step of the navigation, how the multimedia patterns are synchronized, and which interfaces are adapted to the different media and environments. For the same navigation model, different environments can be defined based on the defined interface, thus allowing the selection of the interface that best suits the user's needs.

Phase V. Implementation. Once the conceptual model, the navigation pattern and the abstract interface scheme have been created, in this phase the programming language is defined in order to generate the application's executable file.

3D Modeling and Rendering Tools. The three-dimensional representation of the objects that are part of a real scenario is a process that requires a lot of experience and practice, as well as the availability of tools that facilitate this procedure. Current programming paradigms allow a wide variety of tools to be included for object representation. Models can be created automatically or manually. The manual process of creating a 3D object is like that of a sculptor, whose work begins with a volumetric figure that is carved according to the style and appearance of the real object [7]. Several software tools allow you to create 3D models and animations, among them:

Wings3D. An open-source tool for 3D modelling. It does not have animation utilities, so it is not the optimal option for giving movement to the models; however, one of its advantages is that it is free and multiplatform [8]. Most free software tools do not allow you to create 3D models from a mathematical formula, which is a necessary requirement for modelling three-dimensional mathematical functions; however, they do allow manual modelling, which requires more expert knowledge of 3D modelling.

Blender. A specialized graphic design tool that allows the creation, modelling and animation of objects in three dimensions; it is free and open source software that works on different operating systems. Its interface requires a high learning curve because its use is somewhat complex [9]. Blender allows you to bring the created models to life using a timeline made up of frames, which lets you decide in what state and position each object needs to be placed. Blender includes plugins and libraries with specialized functions (applications developed by different users) that add more specific functionalities to the software. When the model is obtained, Blender allows you to export it to different formats.


VRML. This tool was created from Open Inventor, which is a language developed by Silicon Graphics, whose objective was to create a language with multiple graphic capabilities, which would allow developers to create three-dimensional models of real objects, with basic knowledge of computer graphics. However, VRML, in addition to greater flexibility, has certain features that allow it to take advantage of the features of the Internet. Sketch up. It is a proprietary 3D modelling software that allows you to create and animate 3D models. The operating systems Sketch Up runs on are Windows and MacOS, there is no Linux version. A large number of books, tutorials and online help are available. Its paid license offers advantages such as specialized help, and there are spaces on the internet where you can consult the community for solutions. The interface of this program (see Fig. 5) is very friendly. It has a tools menu, a work area for modelling, and a menu to create the animations of the modelled objects, it has an instructor, who explains the use of the tool graphically, making it easy to use and learn the application [10]. For the 3D modelling process of the Liberty Summit monument, real high-resolution photographs were taken on site with a professional CANON T7/2000D camera. These images were used as input for the Wings 3D software tool, which consists of a grid that allows the images to be modelled based on geometric processes, framing the images with the guidance provided by Wings 3D [11]. An example of this process can be seen in Fig. 1:

Fig. 1. Wings project

Later in this research work, the Virtual Mirror tool was used to break the flat structures and deform them and give them a more realistic appearance. See Fig. 2. Once the deformed images are obtained, the sculpting technique is used to outline and polish the images. (See Fig. 3). Depending on the designer’s experience in modelling techniques, figures and structures can be given more realism.


Fig. 2. Mirror wings

Fig. 3. Sculpt

For the construction of the complete model, it was divided into objects, which are structured in a geometric scheme as shown in Fig. 4.


Once all the objects of the geometric scheme have been modelled, they are joined to form the complete model and the executable files are generated through the rendering processes.

Rendering. The render stage compacts all the elements obtained in the previous phases (modelling, materials and textures, lighting and animation) [12] and results in an object that has all the components of the project. The more realism is required of the modelled objects, the longer this process takes, so computers with more powerful graphics cards are needed. To improve the rendering process, the Blender software tool was used, which also allows reducing the load of the 3D images. In order to view the rendering result, it is advisable to install viewers that optimize the computer's resources, such as the CORTONA program for computers with minimal requirements, which displays the result in the browser.
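As an illustration of this kind of batch rendering (a minimal sketch only; the paper does not describe the exact settings used, so the resolution and output path below are assumptions), a scene can be rendered headlessly with Blender's Python API, for example by running blender --background model.blend --python render.py:

```python
# render.py: minimal headless-render sketch for Blender's Python API (bpy).
# Illustrative settings only; resolution and output path are assumptions.
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'          # ray-traced engine; heavier but more realistic
scene.render.resolution_x = 1920
scene.render.resolution_y = 1080
scene.render.filepath = "//cima_libertad_render.png"   # path relative to the .blend file

bpy.ops.render.render(write_still=True) # render the current frame and save the image
```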

Fig. 4. Cortona - liberty summit

SketchUp (Fig. 5) is used to visualize the developed model on computers with more resources, since it manages the computer's performance better.

3 Results and Discussion

Based on the OOHDM methodology [13], after the requirements phase the conceptual design was elaborated, and based on the navigational design the structures necessary for the 3D representation of the monument were defined (Fig. 6). The designed 3D model consists of the following modules:


Fig. 5. Sketch up - summit of liberty

Fig. 6. Real photograph of the emblematic monument cima de la libertad

Accessories Module. In this module the small structures were modelled: the soldier, the cannon, the bench and the tree, as can be seen in Fig. 7. The coat of arms and the flag of Ecuador were also modelled, based on the colours and shapes of the desired result. The tower and the posts were included; for this, the Extrude tool was used. See Fig. 8.


Fig. 7. Polygonal design various elements 1

Fig. 8. Polygonal design various elements 2

Tribune Module. The grandstand was designed with colours identical to those of the tower; in the same way, it was based on basic shapes, which were changed according to the desired features and details. See Fig. 9.

Union of the Modules. For this stage, a computer with sufficient resources for the design was necessary, since when all the modules were joined the processing work became heavy. For this reason, most of the parts had to be resized and rotated on the different axes; in turn, parts such as the green areas and the roof were drawn to integrate them into the main module. See Fig. 10.


Fig. 9. Structural design of the Grandstand

Fig. 10. Union of the designed modules

Project Color. This is one of the heaviest parts of the project: in addition to selecting almost all the surfaces and giving them colour with the colour palette, the insertion of images was also used to give more realism to the textures (Fig. 11). The elaboration of three-dimensional models of historical monuments such as the Cima de la Libertad helps to strengthen the cultural identity of the city within the community, guiding efforts for its conservation and promotion locally and abroad (Fig. 12). This work is an example of documentation of a heritage building that will last over time and will serve for future projects of conservation, restoration, control and management of the built heritage (Fig. 13). The main objective of the creation of 3D models is to create environments that are as close as possible to reality on a visual level.


Fig. 11. Comparison of the front of the Monument and the model obtained

Fig. 12. Comparison of the lateral part of the Monument and the model obtained

Fig. 13. Comparison of the central part of the Monument and the model obtained

To carry out large-scale 3D modelling projects, it is necessary to work with the right tools for greater functionality and time savings. If the parts are to be combined into a single object, they must not yet be painted or textured with any image.


Wings 3D's editing functions allow geometric figures to be added quickly, without the need to program script code manually, and its tools for manipulating the figures are quite easy to use. The work in modules was adequate, since the Wings 3D tool allowed them to be joined and resized without any problem. Three-dimensional modelling is a technology that allows real-world objects to be represented through computer graphics, a field that facilitates the interaction of people with virtual scenarios through electronic devices. The tools offered by VRML browsers, as in the case of CORTONA, allow three-dimensional objects to be displayed as a panoramic tour, simulating the user's interaction with the scene and allowing movement within it. The use of 3D models implies the consumption of many computer resources, mainly memory, graphics card and processor; the load and the response time for visualization of the three-dimensional models depend on the size of the application and its level of realism. The CORTONA plugin used for navigating 3D virtual environments is an essential factor for users whose computers have minimal resources, since it optimizes the capabilities of electronic devices by using a browser as a viewer. 3D virtual environments have various characteristics of their own that allow the creation of a model that is very close to reality.

References 1. Blender on Steam (n.d.). Steampowered.com. https://store.steampowered.com/app/365670/Blender/. Accessed 27 Oct 2021 2. Gibelli, T., Graziani, A., Sanz, C.: Review of tools for the creation of 3D models oriented to the teaching of mathematics with augmented reality. In: 23rd Andean Congress of Computer Science (2017) 3. Morelli, R.D., Pangia, H.A.: 3D parametric modeling, rendering and animation with free software: FreeCAD + Blender interaction. In: Proceedings of the 2015 Geometries & Graphics, vol. 23, no. 36 (2015). https://www.fceia.unr.edu.ar/solcad/Paper_022.pdf. Accessed 17 Sept 2021 4. Miller, F.P., Vandome, A.F., McBrewster, J. (Eds.): Wings 3D. Alphascript Publishing (2012) 5. No title (n.d.). Uco.es. http://www.uco.es/grupos/eatco/automatica/ihm/cursobasicovrml/tema0.html. Accessed 27 Oct 2021 6. Get Blender: Microsoft Store en-US (n.d.). Microsoft.com. https://www.microsoft.com/es-ec/p/blender/9pp3c07gtvrh?activetab=pivot:overviewtab. Accessed 27 Oct 2021 7. 3D design software (n.d.). Sketchup.com. https://www.sketchup.com/es. Accessed 27 Oct 2021 8. Soliz, D.R., Frank, A.M.: OOHDM (Object Oriented Hypermedia Design Method) & ISO 9126 Standard 9. Museums of Quito (n.d.). Blogspot.com. http://museosquitoecuador.blogspot.com/2015/05/museo-templo-de-la-patria-cima-de-la.html. Accessed 27 Oct 2021 10. Wikipedia contributors (n.d.). Blender. Wikipedia, The Free Encyclopedia. https://es.wikipedia.org/w/index.php?title=Blender&oldid=139020728. Accessed 27 Oct 2021 11. (n.d.). ati.es. http://www.ati.es/gt/LATIGOO/OOp96/Ponen6/atio6p06.html. Accessed 27 Oct 2021


12. (n.d.). Ilustrados.com. http://www.ilustrados.com/publicaciones/EpyppFVZkpkdzGjCYR.php. Accessed 27 Oct 2021 13. (n.d.). Inicia.es. http://inicia.es/de/marquezv/dihm/doc25.html. Accessed 27 Oct 2021

Model of Technological Competencies as Determinants of Innovation: A Comparative Intersectoral Study in Ecuador

Claudio Arcos1(B) and Adrian Padilla2

1 ISMAC Technological Institute and San Francisco Global Foundation, Quito, Ecuador
[email protected]
2 Graduated from the School of Mathematical and Computational Sciences, Yachay Tech University, Hacienda San José S/N, 100115 Urcuquí, Imbabura, Ecuador

Abstract. Using the database of the II National Economic Census of Ecuador (CENEC), carried out in 2010, this study analyzes whether the mastery of technological skills favors the execution of R&D activities; to this end, the technological skills approach is used as the theoretical framework for the study of the determinants of innovation in companies in several productive sectors in Ecuador. The empirical framework of the study is completed with the application of a logit model to explain the behavior of the dichotomous variable presence or absence of R&D in terms of the independent variables that emerge from the theoretical framework. The results show that, in the various sectors analyzed, the presence of innovation activities depends on the execution of various skills within the organization, and that the innovative approach of the organization is stimulated by mastering a portfolio of technological skills from which R&D activities are carried out.

Keywords: Technological competences · Skills · R&D · Resources and capabilities

1 Introduction

Studies on the determinants of innovation, and the existing empirical evidence, affirm that there are several factors that could drive innovation activities in companies. This paper studies the mastery of technological skills in organizations as the set of determining variables that induces the execution of R&D activities in the organization. The "Schumpeterian hypothesis" highlights that the stimulation of the innovative effort of companies originates in the possibility of achieving significant market power as a result of said innovation. In this sense, spending on R&D is one of the main measures used to estimate business innovation; it is considered an approximation of the input that drives innovation, used to capture the knowledge needed to produce the innovation output [1]. As a product of the study, the results achieved are described in order to determine whether the mastery of technological skills favors the execution of R&D activities.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Botto-Tobar et al. (Eds.): ICAETT 2022, LNNS 619, pp. 561–574, 2023. https://doi.org/10.1007/978-3-031-25942-5_44


2 Theoretical Framework

2.1 Technological Competencies

Innovation in an organization requires an adequate level of technical maturity of its capabilities and resources [2], which allows it to master technological skills with the aim of achieving conditions related to technology characteristics [3]. According to [4], companies require an activity system designed and implemented to meet the needs identified in the market and to create value in the chain, which includes interconnected and potentially interdependent activities. In this regard, [5] presents innovation as a systemic phenomenon that causes the creation of knowledge focused on achieving economically viable and sustainable objectives in the long term. From this point of view, mastery of technological skills allows the development of innovation. These competencies are a set of variables grouped into four dimensions: i) human; ii) technological; iii) organizational; iv) strategic [6–12]. In this respect, [13] pointed out that the definition of a company's competency profile should focus on the identification of four types of skills or abilities and resources, namely: facilities and equipment, personal skills, organizational skills and management skills. This approach was later strengthened by the studies of [14], who indicated that the competitive advantages of the company are linked to the quality of its endogenous factors and to the control of the portfolio of resources and capabilities.

2.2 The Component Elements of the Technological Competences

In reference to technology, [15] point out that it was little considered in models and methods of strategic management, because it was considered something external to the company. Then, in a study applied to Japanese companies, they identified that these companies reached their own technological capacity through the absorption of technologies developed in various parts of their environment, as confirmed by [16] with the concept of "open innovation". Thus, the decision to compete in one market or another must be the consequence of the analysis of the possibilities of innovation that a company has on the basis of the skills, technologies and knowledge that it masters [17]. With these studies, the conceptual basis of contemporary approaches was strengthened, which establish that organizations are capable of deploying their operations according to their knowledge base [12], from which they can master and control a set of technological skills to develop their competitive strategy [6, 18–21]. The fact is that innovation is the product of a process that considers organizational learning focused on mastering technological skills as the triggering determinant for developing solutions [11, 12]. A chronological bibliographical analysis of the doctrine that studies the determinants of innovation is presented in Fig. 1.

Author - year: Doctrinal contribution (summary)

Schumpeter, J., 1942 [22]: "Creative destruction" results from industrial change that continually revolutionizes economic structure due to organizational abilities.
Robinson, J., 1946 [23]: Consumers take into account the quality, the marketing method, the design of the point of sale and not only the price. That is the capacity to create strategies.
Selznick, P., 1957 [24]: The distinctive skills of an organization allow it to establish a competitive advantage and enhance its effectiveness and efficiency.
Dosi, G., 1982 [25]: Competitiveness is the result of science and technology. The "technological paradigm" explains change as a systemic description of innovation coming from learning.
Mintzberg, H., 1983 [26]: Cooperation takes advantage of the knowledge, ideas, skills and abilities of others; behavior patterns are also imitated and organizational learning is promoted.
Wernerfelt, B., 1984 [14]: Proper management allows the company to master distinctive skills through which it is possible to find the optimal product-market activities.
Teece, D., 1984 [27]: Business efforts must stimulate the integration and use of the dynamic capabilities of the organization in order to create innovation.
Giget, M., 1984 [18]: The integration and combination of the distinctive competencies of the company allow it to conceive, produce and sell new products in a competitive way.
Porter, M., 1985 [19]: Intensity of applied knowledge can be deployed from the mastery and management of a set of technological skills necessary to trigger technological innovation.
Dussauge, P.; Ramanantsoa, B., 1986 [15]: Competitive advantage can only grow because of the value that a company is capable of generating.
Kline, S.; Rosenberg, N., 1986 [28]: Innovation is not a linear process; it requires a strong connection between technical knowledge and business management.
Teece, D., 1986 [29]: Based on knowledge and operational structure, organizations must seek the development of innovations in processes and in organizational and commercial methods.
Dierickx, I.; Cool, K., 1989 [30]: The generation of competencies for innovation management is understood as a process that depends on time and the trajectory of the company.
Roling, N., 1990 [31]: Innovation is a systemic process in which knowledge is generated from organizational learning, by taking advantage of information systems.
Cohen, W.; Levinthal, D., 1990 [32]: Capacity depends on 3 factors: i. relationship between outside and inside; ii. relationship between internal units; iii. relationship between individuals.
Amit, R.; Shoemaker, P., 1993 [33]: Competitive advantage is the capacity to manage the company and its available resources, favoring the complementarity between the tangible and the intangible.
Bell, M.; Pavitt, K., 1993 [34]: Competencies reflect the level of development of activities of the technological type of a company over time.
Peteraf, M., 1993 [35]: Competencies involve collective learning and are based on knowledge, and improve as they are applied in the same business management.
Prahalad, C. K.; Hamel, G., 1994 [36]: Competencies are a set of capabilities and technologies that favor the development of an offer to a group of clients.
Quinn, J.; Hilmer, F., 1994 [37]: Distinctive competencies are made up of the domain of skills and knowledge, technology, strategic management, the value chain and organizational systems.
Freeman, C., 1995 [38]: Key in the management of intellectual capital focused on technological learning is the mastery of technological skills.
Barney, J., 1995 [39]: Resources that can be a source of competitive advantage are those defined in the "VRIO" scheme, for its acronym: value, rareness, imitability, and organization.
Bower, J.; Christensen, C., 1995 [40]: Organizations must develop adequate skills to compete, and even more so, to promote a technological gap with respect to their competitors.
Nonaka, I.; Takeuchi, H., 1995 [41]: It is essential that there is a context, routines, systems and technologies that facilitate, encourage and stimulate the development of knowledge creation processes.
Grant, R., 1996 [42]: The company development approach based on competencies is based on the business development approach based on knowledge and learning.
Knudsen, C., 1996 [43]: Development of competitive advantages based on competencies comes from the management of resources and knowledge.
Teece, D.; Pisano, G.; Shuen, A., 1997 [44]: Dynamic capabilities are essential competencies of the company, because they allow defining the core business of the company.
Morcillo, P., 1997 [45]: Business competences are the integration of four aspects or dimensions: i) the human aspect, as a set of qualifications; ii) the technological aspect, in reference to developed and implemented technologies; iii) the organizational aspect, understood as the intertwining of human qualifications and technologies; iv) the strategic aspect, which is the way to offer a benefit to customers.
Giget, M., 1998 [6]: Business skills are achieved as a result of adequate integration around human capital plus other aspects of technological origin.
Coombs, R.; Hull, R.; Peltu, M., 1998 [46]: Innovation can be understood as knowledge management and the establishment of routines that allow competitive environments.
Yeoh, P.; Roth, K., 1999 [47]: The mastery of distinctive competencies includes processes and activities that allow companies to achieve better results in the long term.
Dogson, M., 2000 [21]: The way to maintain competitive advantage is not only to continuously improve, but also to keep innovation processes current.
Kaufmann, A.; Tödtling, F., 2000 [48]: Execution of R&D activities enhances the capacity to absorb, assimilate and apply useful information from abroad. In this practice companies develop internal skills.
Quinn, J., 2000 [49]: Innovation could be achieved faster and at a lower cost if companies develop their specific skills and outsource support management.
Hlupic, V., 2002 [50]: Capabilities and resources of the company must be managed with the strategic use of existing tacit knowledge, combined with express knowledge.
Viotti, E., 2002 [51]: Innovation not only refers to new developments; it is also the absorption capacity of innovations and technologies that already exist but that could be new for a company.
Christensen, C.; Raynor, M., 2003 [52]: The definition of a portfolio of competencies with a solid technological base would favor the renewal of the markets.
Winter, S., 2003 [53]: The dynamic capabilities of the company allow it to enrich, modify or create essential skills to strengthen its organizational management and survive in the market.
Chesbrough, H., 2003 [54]: Organizational learning with a focus on innovation, but with a strong incidence of resource and capacity management, helps to develop links in the innovation system.
Alegre, J.; Lapiedra, R., 2005 [55]: The company has two types of skills: i. component: mastery of skills; ii. architectural: skills that allow it to rethink and re-design the component competencies.
Bueno, E.; Morcillo, P.; Salmador, M., 2006 [17]: Competition includes, implicitly or explicitly, the personal, technological, organizational and strategic aspects of the company.
European Patent Office, 2007 [56]: Companies improve through learning and innovation, in a virtuous circle of better business performance.
OECD, 2008 [57]: Two modes of open innovation can be pointed out: i. "inbound": companies take advantage of an external network; ii. "outbound": generation of open collaborative platforms.
Pisano, G.; Verganti, R., 2008 [58]: A collaborative innovation process requires that the organization be competent to manage and take advantage of new technologies.
Fagerberg, J.; Shrolec, M., 2008 [59]: The role of existing technological and social capabilities is key to promote economic development.
Fagerberg, J.; Srholec, M., 2009 [60]: Technological competencies show a positive correlation with innovative performance, to a greater degree than other determinants that also favor innovation.
Castellanos, O.; Jiménez, C.; Domínguez, K., 2009 [61]: Distinctive competencies are those that are developed on the basis of the company's strategies, capabilities and resources, and that are different from other companies.
Morcillo, P., 2011 [8]: Distinctive competencies are achieved with the mastery of four aspects of the organization: i. personal; ii. technological; iii. organizational; iv. strategic.
Hidalgo, A.; León, G.; Pavón, J., 2011 [62]: Competitiveness can increase if the resolution of problems based on organizational learning produces changes and improvements in the operational structure.
Arcos, C., 2012 [9]: Competitiveness leads to the improvement of quality. In this way companies are capable of managing their resources, capacities and competitive advantage.
Martín-Rojas, R.; García-Morales, V.; Bolívar-Ramos, M., 2013 [63]: Certain technological variables are key for achieving a competitive advantage in technologically intensive industries: i. top management support for technology (TMS); ii. technological skills; iii. technological distinctive competencies (TDC).
Nylén, D.; Holmströn, J., 2015 [2]: Organizations need to develop at least three dimensions of competencies: i. the company's products; ii. the digital environment; iii. the properties of the organization.
Kuo-Feng, H.; Cheng, T., 2015 [64]: Determinants of innovation are a set of factors such as organizational capacity, company size, core technology, main business, R&D, human capital, creativity.
Arcos, C., 2018 [12]: Four aspects must intervene, combine and converge in order to generate technological competencies: i. personal; ii. technological; iii. organizational; iv. strategic.
Asnidatul Adilah, I.; Razali, H., 2019 [65]: Three critical sets of technical skills are required in the context of the fourth industrial revolution: i. analyze, interpret and document data; ii. process understanding and optimization; iii. implementation, troubleshooting and maintenance of devices.
Popkova, E.; Zmiyak, K., 2019 [66]: Communication processes must promote the development of social skills in combination with technological skills, in order to adapt society to the technological mode.
Pérez, C., 2020 [67]: A technological revolution is the interconnection and interdependence of the actors with their technology and market, and their ability to change the economy and society.
Álvarez, I.; Marín, R.; Albis, N., 2020 [68]: Technological change happens when capacities are developed due to the advantages of ownership, location and internalization, as defining elements of competitiveness.
Castellacci, F., 2020 [69]: Advanced knowledge is characterized by a high technological capacity and a significant ability to manage and create complex technological knowledge.
Jackson, N.; Dunn-Jensen, L., 2021 [71]: Digital transformation is rapidly changing the competitive landscape. Organizations and their human talent must reassess the structures and drive the core competencies.
Moldoveanu, M.; Frey, K.; Moritz, B., 2022 [72]: Digitization favors the development and mastery of new skills and abilities necessary to be able to innovate.
Sadun, R.; Fuller, J.; Hansen, S.; Neal, P., 2022 [73]: Business operations are becoming more complex and technology-centric, and are facing greater public scrutiny. This requires the construction and mastery of skills, abilities and competencies that are developed internally in the organization.
Arcos, C., 2022 [74]: Companies must combine a set of productive factors, understood as their distinctive competencies, to produce and develop organizational competitiveness.

Fig. 1. Milestones in the study of innovation: technological skills. Source: Direct Research.

2.3 Mastery of Technological Competences

Companies must enhance their technology management to achieve better performance, building on their skills to configure a competitive advantage. Based on this reference framework, the quantitative research method of this study is designed using the database of the II National Economic Census of Ecuador (CENEC) carried out in 2010 [75]. The study uses the presence or absence of R&D as a dichotomous proxy variable for identifying innovative and non-innovative enterprises. The variables were selected to study the four areas of technological competences (see Fig. 2); for their definition, similar studies that applied econometric models were considered [76–80].
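As a small illustration of how such a proxy can be derived (a sketch only; the column name below is hypothetical, not the CENEC coding), the dichotomous indicator simply flags establishments that report any R&D expenditure:

```python
import pandas as pd

# Hypothetical extract of census microdata: one row per establishment, with the
# reported R&D expenditure in US dollars (the column name is illustrative).
df = pd.DataFrame({"rd_expenditure": [0.0, 1500.0, 0.0, 320.0]})

# Dichotomous proxy: 1 = innovative (reports some R&D spending), 0 = non-innovative.
df["rd_dummy"] = (df["rd_expenditure"] > 0).astype(int)
print(df)
```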

Model of Technological Competencies as Determinants of Innovation Label S6P9 G_ANOINI CPC_MP4D WEB

Variable R&D spending Years of start of the main activity Handles raw material to 4 Digits

TRA_ING S6P3

Requires financing

S6P12

Affiliation to a guild

TRA_PER

Description Dependent variable

Technological aspects

Has a website Expenditure on training and education Company size according to the number of people Revenue received from sales

S6P10

S4P7C4

Main client abroad

CPC_PE4D

It has a 4-digit elaborated product

NAT_JUR

Legal nature

567

Personal aspects

Organizational aspects

Strategic aspects

Used as a proxy measure to identify innovative and non-innovative companies Used to identify more or less years of experience in your sector Used to identify the level of technological mastery in raw material handling Used to identify the use of digital transformation tools Used to identify the existence of investment in knowledge Used as control variable Used as control variable Used to know the needs of external resources Used to find out if there is cooperation with other agents in the sector Used to know the approach to the international market Used to identify the technological level of the product they manufacture Used to know the strategy in the civil and commercial field

Fig. 2. Representative variables of the 4 technological competences. Source: Direct Research.

3 Methodology

3.1 Database and Variables

The Oslo Manual [81] describes R&D activity as involving the basic and applied research and experimental development that an enterprise can carry out to acquire new knowledge and to produce specific inventions or technical modifications. In Ecuador there are critical points in the production systems, such as the precariousness of governance and management capacity [82], which is why it is essential to analyze the components of organizations as determining factors of innovation. In this sense, it is understood that technological competences act as catalysts of the accumulation of assets and contribute to entrepreneurial innovation [63]; thus, the following hypothesis is defined:

H1: The domain of technological competences does favor the execution of technological R&D activities in organizations.

3.2 Statistical Modelling

In order to use standard notation for the studied variables and productive sectors, the International Standard Industrial Classification (ISIC) was considered. The productive sectors considered in the study are: i. A01. Agriculture, livestock, hunting and services; ii. C10. Food products manufacturing; iii. C11. Beverages manufacturing; iv. C14. Garment manufacturing; v. C20. Manufacture of chemical substances


and products; vi. C21. Manufacture of pharmaceutical products and chemical substances; vii. C29. Manufacture of motor vehicles, trailers and semi-trailers; viii. CPC 0196. Live plants, cut flowers, buds and seeds.

Likewise, this study considers the classification of companies according to the regulations established by the Andean Community of Nations (CAN) in its Resolution 1260 [82], which determines the following: micro-enterprise (1 to 9 employees); small enterprise (10 to 49 employees); medium enterprise (50 to 199 employees); big enterprise (200 or more employees).

Methodology and Empirical Results: Technological Competences Aspects as Determinants of Innovation. In the following, a logistic regression model is presented for studying the reasons that drive enterprises of the above-mentioned sectors to engage in innovative activities. To this end, the categorical indicator of expenditure on R&D activities is used as a proxy dependent variable for identifying innovative and non-innovative enterprises. In this regard, [83] indicates that modelling in the analysis of the determinant factors of innovation can be done with this logistic structure of econometric models, in which the explanatory variables are business characteristics such as technological intensity, novelty in innovation, localization, impact of governmental support programs, competitive environment and company size.

Quantitative Analysis Methodology. A series of logistic regression models was conducted with the variables appearing in Fig. 2, in accordance with the discussion made about their selection. This model allows a measurement of the significance of the impact of the chosen predictors. In general, a model of this type is defined by the equation

y = 1 / (1 + e^(−βX)) + ε

where y is a dichotomous dependent variable, ε an error term, and the estimation depends on the linear combination of the predictors

βX = β0 + β1x1 + · · · + βnxn

with β1, . . . , βn the coefficients of the predictors x1, . . . , xn, corresponding to the indicator functions of the categories in Fig. 2. The coefficients were estimated under a Bayesian framework to avoid separation phenomena among the predictors, which allowed a significant analysis of the income variable. If an estimated coefficient is positive, the associated predictor has a positive influence on the probability of the dependent variable, P(y = 1). Our approach is to fit a general model including all data in the CENEC 2010 database and then several logistic regression models restricted to each productive sector, as presented above. The subsets of the main database are labelled according to the ISIC codification. This scheme helps to study whether the impact on the degree of innovation varies horizontally across sectors. Furthermore, the significance of the estimation of each coefficient is obtained from Student's t-tests. With this method, the null hypotheses (that each coefficient βi = 0) are tested, and the significance of their rejection is reported in terms of the p-values of each estimation. If the p-value (the smallest significance level at which the null hypothesis can be rejected) is small enough, the significance of the estimation is higher, as is the precision of the confidence interval. In this report, the significance of the estimation is labelled according to the notation indicated below in Fig. 3.
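As a rough sketch of this kind of specification (not the authors' exact estimation; it uses synthetic data and illustrative column names instead of the CENEC microdata), an L2-penalised logistic regression can stand in for the shrinkage used to avoid separation:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "web": rng.integers(0, 2, n),                                # has a website
    "training": rng.integers(0, 2, n),                           # spends on training
    "union_member": rng.integers(0, 2, n),                       # affiliated to a guild
    "size": rng.choice(["micro", "small", "medium", "big"], n),  # firm size class
})
df["size"] = pd.Categorical(df["size"], categories=["micro", "small", "medium", "big"])

# Synthetic outcome: the probability of reporting R&D spending rises with training and web.
logit = -2.0 + 1.5 * df["training"] + 0.8 * df["web"]
df["rd_spending"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Dummy-code the size classes (micro-enterprise is the reference category).
X = pd.get_dummies(df[["web", "training", "union_member", "size"]],
                   columns=["size"], drop_first=True)
y = df["rd_spending"]

# L2-penalised logit, a rough stand-in for the Bayesian shrinkage used to avoid separation.
model = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X, y)
print(dict(zip(X.columns, model.coef_[0].round(3))))
```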

[Fig. 3. Logistic regression coefficients for different groups of enterprises according to their ISIC code. Source: Direct Research. The table reports, for the general model (all enterprises) and for each sector model (A01 Agriculture, livestock, hunting and services; C10 Food products manufacturing; C11 Beverages manufacturing; C14 Garment manufacturing; C20 and C21 Manufacture of chemical and pharmaceutical products; C29 Manufacture of motor vehicles, trailers and semi-trailers; CPC 0196 Live plants, cut flowers, buds and seeds), the sample size and the estimated coefficient and significance level (asterisk codes) of each predictor: intercept, year of start, raw material handling, training expenditure, company size (small, medium, big), income brackets, funding need, union membership, foreign customer, 4-digit product, webpage and legal entity.]

4 Discussion and Conclusions

In all cases, the reference category was chosen as a micro-enterprise that reports its starting year between 2006 and 2010, does not report the use of raw material, has no expenses in formation and training, receives income of up to $10,000, does not need external funding, is not affiliated to any union, does not report foreign customers, does not report manufactured product codes, does not report a webpage and is not constituted as a legal entity.

Firstly, the results obtained indicate that models with a lower sample size have lower levels of significance among the variables, mainly in the income variable. The common significant variable among all cases is the amount of expenses in formation and training, which has coefficients with the highest absolute values with respect to those of the other variables. This can be interpreted as the variable which contributes most to the existence of investment in R&D activities. Similarly, it can be noted that the variables corresponding to the use of raw material and to manufactured products do not have a significant impact in almost any model; the exception is the raw material variable, which has a significant negative impact in the model for the motor vehicle manufacturing sector. Furthermore, there is a high and significant positive impact on R&D expenses for those enterprises that belong to union associations in the beverages, pharmaceutical and chemical products manufacturing sectors.

In general, most coefficients in the general model have a positive impact on the probability of having expenses in R&D activities. However, for many variables in this schema there are variations and changes in the direction of the impact, particularly among the size and training variables. For instance, there is a significant positive impact if an enterprise in the beverages manufacturing sector reports a webpage, but a negative impact for one in the garment industry. The legal entity variable proved to be a booster for R&D in the food manufacturing sector. Among all variables, the greatest impact is due to a better score in the income and training variables. In the plants and flowers sector, the foreign customer and legal entity variables contributed highly and positively to the impact on R&D.

The results provide empirical evidence of the positive and significant effect that the mastery of technological skills has as a determinant of R&D activities; therefore, the hypothesis that mastery of technological skills does favor the execution of R&D activities in organizations is accepted and, mainly, it is accepted that it is possible to take advantage of technologies when they are managed by people who are competent in terms of their ability and knowledge to do so. Further studies including continuous variables, in more specific productive sectors and according to other relevant variables, such as source of funding, can be performed to calibrate the degree of impact of each group of competences. On the other hand, although the Census data were not specifically gathered through an innovation survey, the economic variables they collect are sufficiently useful for the purposes of the investigation. In fact, although it is a 2010 census, the results are relevant for a developing economy such as Ecuador's, which has not shown major technological changes or changes in its productive matrix. Finally, a new line of research is opened to determine appropriate mechanisms to expand the knowledge base of the actors involved in the innovative development of Ecuador, so that they can improve their innovation management capacity, depending on what the country is already competent to make and produce in industrial terms.
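To read the magnitude of such effects, a logit coefficient can be converted into an odds ratio by exponentiation; the sketch below uses a purely hypothetical coefficient for the training dummy rather than a value taken from Fig. 3.

```python
import numpy as np

# Hypothetical logit coefficient for the training-expenditure dummy (illustrative only).
beta_training = 2.8
odds_ratio = np.exp(beta_training)

# A positive coefficient multiplies the odds of reporting R&D spending;
# here the odds would be multiplied by roughly exp(2.8) ≈ 16 when training is present.
print(f"odds ratio for training ≈ {odds_ratio:.1f}")
```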

References 1. Molero, J.: Innovación Tecnológica y Competitividad en Europa. Editorial Síntesis, Madrid (2001) 2. Nylén, D., Holmströn, J.: Digital innovation strategy: a framework for diagnosing and improving digital product and service innovation. Bus. Horiz. 58(1), 57–67 (2015) 3. Cantner, U., Guerzoni, M.: Innovation and the evolution of industries: a tale of incentives, knowledge and needs. In: Handbook of Research on Innovation and Entrepreneurship, Cheltenham, UK Northampton, MA, USA, Edward Elgar, pp. 382–402 (2011) 4. Zott, C., Amit, R.: Business model innovation: toward a process perspective. In: The Oxford Handbook of Creativity, Innovation, and Entrepreneurship. Oxford University Press, Oxford, pp. 395–406 (2015) 5. Scheel Mayenberger, C.: El enfoque sistémico de la innovación: ventaja competitiva de las regiones. Estud. Gerenciales 28, 27–39 (2012) 6. Giget, M.: La dynamique stratégique de l’entreprise: Innovation, croissance et redéploiement à partir de l’arbre de compétences. Dunod, Paris (1998) 7. Chaminade, C., Lundvall, B.Å., Vang, J., Joseph, K.J.: Designing innovation policies for development: towards a systemic experimentation-based approach. In: Handbook of innovation Systems and Developing Countries, Cheltenham, Edward Elgar Publishing Limited, pp. 360-379 (2009) 8. Morcillo, P.: Innovando por naturales: El pase lo dice todo. Editorial Visión Libros, Madrid (2011) 9. Arcos, C.: Una Aproximación a la Dimensión Estratégica de las Competencias Tecnológicas para la Generación de Innovaciones: Análisis del Sector Empresarial Químico-Farmacéutico del Ecuador, Madrid: UAM-Accenture Chair on the Economics and Management of Innovation, Autonomous University of Madrid, Faculty of Economics (2012) 10. Morcillo, P.: Procesos de innovación, Documento interno. Universidad Autónoma de Madrid, Madrid (2014) 11. Arcos, C.: Fomento de la innovación a partir del aprendizaje organizativo enfocado al dominio de las competencias tecnológicas de la empresa. Universidad Andina Simón Bolívar - Comité de Investigaciones, Quito (2015) 12. Arcos, C.: Gestión unificada de recursos para la innovación sistémica. San Gregorio, 78–85 (2018) 13. Ansoff, I.: Corporate Strategy. Mc Graw-Hill, Nueva York (1965) 14. Wernerfelt, B.: A resource-based view of the firm. Strateg. Manag. J. 5(2), 171–180 (1984) 15. Dussauge, P., Ramanantsoa, B.: Evolution Technologique et Politique d’Entreprise, Les Cahiers de Recherche, Centre HEC-ISA, Institut Superieur des Affaires, Paris: Chambre de Commerce el d’Industrie de Paris, CR 271, Jouy en Josas, France (1986) 16. Chesbrough, H., Vanhaverbeke, J.: West Open Innovation: Researching a New Paradigm. Oxford University Press, Oxford (2005) 17. Bueno, E., Morcillo, P., Salmador, M.: Dirección estratégica. Nuevas perspectivas teóricas. Ediciones Pirámide, Madrid (2006) 18. Giget, M.: Les Bonsaïs de l´industrie Japonaise. GEST, París (1984) 19. Porter, M.: Competitive Advantage. The Free Press, New York (1985) 20. Argyris, C.: Overcoming Organizational Defenses, Facilitating Organizational Learning. Allyn and Bacon, Boston (1990)


21. Dogson, M.: The Management of Technological Innovation. Oxford University Press, Oxford (2000) 22. Schumpeter, J.: Capitalism, Socialism, and Democracy. Harper & Brothers, New York (1942) 23. Robinson, J.: La Economía de Competencia Imperfecta. Aguilar, Madrid (1946) 24. Selznick, P.: Leadership in Administration. Harper Business, New York (1957) 25. Dosi, G.: Technological paradigms and technological trajectories. a suggested interpretation of the determinants and directions of technical change. Res. Policy 11(3), 147–162 (1982) 26. Mintzberg, H.: Power In and Around Organizations, Englewood Cliffs. Prentice-Hall, New Jersey (1983) 27. Teece, D.: Economic analysis and strategic management. Calif. Manag. Rev. 26(3), 87 (1984) 28. Kline, S., Rosenberg, N.: An overview of innovation. In: The Positive Sum Strategy: Harnessing Technology for Economic Growth. National Academy Press, Washington DC, pp. 275–305 (1986) 29. Teece, D.: Profiting from technological innovation. Res. Policy 15(6) (1986) 30. Dierickx, I., Cool, K.: Asset stock accumulation and sustainability of competitive advantage. Manag. Sci. 35(12), 1504–1513 (1989) 31. Roling, N.: The agricultural research-technology transfer interface: a knowledge systems perspective. In: Kaimowitz, D. (Ed.), Making the Link: Agricultural Research and Technology Transfer in Developing Countries. Westview Press, Boulder (1990) 32. Cohen, W., Levinthal, D.: Absorptive capacity: a new perspective on learning and innovation. Adm. Sci. Q. 35(1), 128–152 (1990) 33. Amit, R., Shoemaker, P.J.: Startegic assets and organizational rent. Strateg. Manag. J. pp. 33– 46 (1993) 34. Bell, M., Pavitt, K.: Accumulating technological capability in developing countries. In: Proceedings of the World Bank Annual Conference on Development Economics, Washington, D. C. (1993) 35. Peteraf, M.: The cornerstones of competitive advantage: a resource-based view. Strateg. Manag. J. 14(3), 179–191 (1993) 36. Prahalad, C.K., Hamel, G.: Competing for the Future. Harvard Business School Press, Boston (1994) 37. Quinn, J.B., Hilmer, F.G.: Strategic outsourcing. Sloan Manag. Rev. 35, 43–55 (1994) 38. Freeman, C.: The ‘National system of innovation’ in historical perspective. Camb. J. Econ. 19, 5–24 (1995) 39. Barney, J.B.: Looking inside for competitive advantage. Acad. Manag. 9(41), 49–61 (1995) 40. Bower, J., Christensen, C.: Disruptive technologies: catching the wave. Harvard Bus. Rev. (1995) 41. Nonaka, I., Takeuchi, H.: The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press, New York (1995) 42. Grant, R.M.: Toward a knowledge-based theory of the firm. Strateg. Manag. J. 17, 109–122 (1996) 43. Knudsen, C.: The competence perspective: a historical view. In: Towards a Competence Theory of the Firm. Routledge, London, pp. 13-37 (1996) 44. Teece, D., Pisano, G., Shuen, A.: Dynamic capabilities and strategic management. Strateg. Manag. J. 18, 509–533 (1997) 45. Morcillo, P.: Dirección estratégica de la tecnología e innovación: un enfoque de competencias. Editorial Civitas, Madrid (1997) 46. Coombs, R., Hull, R., Peltu, M.: Knowledge management practices for innovation: an audit tool for improvement. CRIC, working paper 6, The University of Manchester, pp. 1–31 (1998) 47. Yeoh, P.L., Roth, K.: An empirical analysis of sustained advantage in the U.S. pharmaceutical industry: impact of firm resources and capabilities. Strateg. Manag. J. 20, 637–653 (1999)

Model of Technological Competencies as Determinants of Innovation

573

48. Kaufmann, A., Tödtling, F.: Systems of innovation in traditional industrial regions: the case of Styria in a comparative perspective. Reg. Stud. 34(1), 29–40 (2000) 49. Quinn, J.: Outsourcing innovation: the new engine of growth. Sloan Management Review; Summer; 41, 4; ABI/INFORM Global, pp. 13–28 (2000) 50. Hlupic, V.: Knowledge and Business Process Management. Idea Group Inc., Hershey (2002) 51. Viotti, E.B.: National learning systems: a new approach on technological change in late industrializaing economies and evidences from the cases of Brazil and South Korea. Technol. Forecast. Soc. Change 69(7), 653–680 (2002) 52. Christensen, C.M., Raynor, M.E.: The Innovator‘s Solution: Creating and Sustaining Succesful Growth. Harvard Business School Publishing Corporation, Boston (2003) 53. Winter, S.G.: Understanding dynamic capabilities. Strateg. Manag. J. 24, 991–995 (2003) 54. Chesbrough, H.: Open innovation. The New Imperative for Creating and Profiting from Tecnology. Harvard Business School Publishing, Boston (2003) 55. Alegre, J., Lapiedra, R.: Gestión del conocimiento y desempeño innovador: un estudio delpapel mediador del repertorio de competencias distintivas. Cuadernos de Economía y Dirección de la Empresa 23 117–138 (2005) 56. Elahi, S., Carmichael, G., Karachalios, K., Müller, M., Rutz, B., (Eds.) Scenarios for the future: how might IP regimes evolve by 2025? what global legitimacy might such regimes have? European Patent Office, Munich (2007) 57. OECD, Open Innovation in Global Networks, OECD Paris (2008) 58. Pisano, G., Verganti, R.: Which kind of collaboration is right for you? Harvard Bus. Rev. 86(12), 78–86 (2008) 59. Fagerberg, J., Shrolec, M.: National innovation systems, capabilities and economic development. Res. Policy 37(9), 1417–1435 (2008) 60. Fagerberg, J., Srholec, M.: Innovation systems, technology and development: unpacking the relationships. In: Handbook of Innovation Systems and Developing Countries, Cheltenham, Edward Elgar Publishing Limited, pp. 83–115 (2009) 61. Castellanos, O., Jiménez, C., Domínguez, K.: Competencias tecnológicas: bases conceptuales para el desarrollo tecnológico en Colombia. Rev. Ing. Inv. 29, 133–139 (2009) 62. Hidalgo, A., León, G., Pavón, J.: La gestión de la innovación y la tecnología en las organizaciones, Madrid: Ediciones Pirámide (2011) 63. Martín-Rojas, R., García-Morales, V., Bolívar-Ramos, M.: Influence of technological support, skills and competencies, and learning on corporate entrepreneurship in European technology firms. Technovation 33(12), 417–430 (2013) 64. Kuo-Feng, H., Cheng, T.-C.: Determinants of firms’ patenting or not patenting behaviors. J. Eng. Technol. Manag. 36, 52–77 (2015) 65. Ismail, A.A., Hassan, R.: Technical competencies in digital technology towards Industrial Revolution 4.0. J. Tech. Educ. Training 11(3), 055–062 (2019) 66. Popkova, E., Zmiyak, K.: Priorities of training of digital personnel. On Horiz. 27(3/4), 138– 144 (2019) 67. Pérez, C.: Revoluciones tecnológicas y paradigmas tecnoeconómicos. de Teoría de la innovación: evolución, tendencias y desafíos. Herramientas conceptuales para la enseñanza y el aprendizaje, Madrid, Ediciones Complutense, pp. 133–159 (2020) 68. Álvarez, I., Marín, R., Albis, N.:Innovación, internacionalización y cadenas globales de valor. In: Teoría de la innovación: evolución, tendencias y desafíos, Madrid, Ediciones Complutense, pp. 403–444 (2020) 69. 
Castellacci, F.: Paradigmas tecnológicos, regímenes y trayectorias: industria manufacturera y de servicios en una nueva taxonomía de patrones sectoriales de innovación. In: Teoría de la innovación: evolución, tendencias y desafíos. Herramientas conceptuales para la enseñanza y el aprendizaje, Madrid, Ediciones Complutense, pp. 303–340 (2020)

574

C. Arcos and A. Padilla

70. Miles, I., et al.: Knowledge-intensive business services: their roles as users, carriers and sources of innovation. The University of Manchester, Manchester (1995) 71. Jackson, N., Dunn-Jensen, L.: Leadership succession planning for today’s digital transformation economy: key factors to build for competency and innovation. Harvard Bus. Rev. 64(2), 273–284 (2021) 72. Moldoveanu, M., Frey, K., Moritz, B.: 4 ways to bridge the global skills gap. Harvard Bus. Rev. (2022) 73. Sadun, R., Fuller, J., Hansen, S., Neal, P.: The c-suite skills that matter most. Harvard Bus. Rev. (2022) 74. Arcos, C.: Strategies to overcome obstacles to innovative activities: case of ecuador’s floricultor sector. Technology, Business, Innovation, and Entrepreneurship in Industry 4.0, EAI/Springer Innovations in Communications and Computing Series (2022) 75. INEC (2011). http://www.inec.gov.ec/cenec/ 76. Baldwin, J., Lin, Z.: Impediments to advanced technology adoption for Canadian manufacturers. Analytical Studies Branch, Research Paper Series, Statistics Canada no. 173, pp. 1–27 (2002) 77. Galia, F., Legros, D.: Complementarities between obstacles to innovation: evidence from France. Res. Policy 33, 1185–1199 (2004) 78. Bayona Sáez, C., García, M., Huerta, A.: ¿Cooperar en I+D? con quién y para qué. Revista de Economía Aplicada, XI, 103–134 (2003) 79. Arundel, A.: The relative effectiveness of patents and secrecy for appropriation. Res. Policy 30, 611–624 (2001) 80. Crepon, B., Duguet, E., Mairesse, J.: Research, innovation and productivity: an econometric analysis at the firm level. Econ. Innov. New Technol. 7, 115–158 (1998) 81. OECD/Eurostat, Oslo Manual: Guidelines for Collecting, Reporting and Using Data on Innovation, 4th Edition, The Measurement of Scientific, Technological and Innovation Activities. OECD Publishing, Luxembourg (2018) 82. Heredia-R, M., Falconí, V., H-Silva, J., Amores, K., Endara, C.A., F-Ausay, K.: Technological innovation for the sustainability of knowledge and natural resources: case of the choco andino biosphere reserve. In: Botto-Tobar, M., Zambrano Vizuete, M., Díaz Cadena, A. (eds.) CI3 2020. AISC, vol. 1277, pp. 464–476. Springer, Cham (2021). https://doi.org/10.1007/978-3030-60467-7_38 83. CAN. Resolución 1260 - comunidad andina de naciones. In: Disposición Técnica para la Transmisión de Datos de Estadísticas de PYME de los Países Miembros de la Comunidad Andina, Lima (2009) 84. Tourigny, D.: Impediments to innovation faced by Canadian manufacturing firms. Econ. Innov. New Technol. 13(3), 217–250 (2004)

Micro-enterprise Management Towards Scenario Building for Decision Making Paula Flores(B) , Estefani Segura , Rubén Jaramillo , Luis Ulcuango , and Lizbeth Suárez Instituto Tecnológico Vida Nueva, Quito 170126, Ecuador [email protected]

Abstract. The purpose of this article is to identify the theoretical variables, focused on the various factors of micro-enterprise management, that influence decision-making, in order to establish favorable scenarios for proper decision-making, through a qualitative approach based on the information provided by the Scopus database. The information was processed in the VOSviewer software. In addition, data were collected through a survey applied to 242 micro-enterprises in the southern sector of Quito in order to contrast the theoretical information on the study variables with the reality of the micro-enterprise environment. The co-occurrence analysis identifies the joint appearances of the keywords in the articles analyzed; in this case the words “microfinance”, “decision making”, “microenterprises” and “social capital” stand out. Factors have been identified for the construction of micro-enterprise management scenarios with a focus on sustainability, corporate social responsibility, knowledge management and micro-enterprise development. In this sense, micro-enterprises are the central axis of the country's growth and economic development, hence the importance of establishing management scenarios, through the construction of simulators, to achieve the objectives set out. Keywords: Micro-enterprise management · Decision making · Strategic scenarios · Administrative functions · Micro-enterprises

1 Introduction
Today, Ecuador's productive base depends to a large extent on the economic activities carried out by micro-enterprises, which means that the country's wealth is driven by sectors such as tourism, footwear, tailoring, food services, construction materials, vehicle maintenance centers, bars, and professional services, to mention but a few. According to INEC statistics updated in 2019, micro-enterprises represent around 90.9% of businesses, which means that economic activity depends largely on the productivity they generate for the country. Even so, there is no substantial government support for sustainable growth and durability over time, which causes micro-enterprises to disappear; during the pandemic, for example, a range of micro-enterprises emerged that adapted to virtual work, but their permanence in the market was fleeting.



Micro-enterprises also receive little support from the different autonomous governments, which to date have imposed tax regulations that hinder their gradual work [1]. It can be said that the lack of stability of these businesses is due to the empirical practices used in managing them, which in some cases are run purely on intuition. They work with very striking and interesting ideas, but they do not manage to achieve a market position due to a lack of knowledge of strategy, advertising management, cost and price setting, and resource management, among other factors that affect their permanence in their sector, including the adoption of new technologies in the business.

2 Theoretical Reference

2.1 Microenterprise Management
Microenterprise management is oriented towards the improvement, competitiveness, profitability, and productivity of economic units, focusing on building a business image that promotes a competitive position and consolidates the work environment and reciprocity with the surroundings [2]. The efficient development of micro-enterprises is based on making decisions that establish actions to manage human capital and resources; therefore, the priority is the growth of the business line [3]. Growth is understood as a process of accumulation of tangibles and intangibles in microenterprise management that combines the different elements with which entrepreneurs direct the organization towards the achievement and efficient fulfillment of their objectives [4].

2.2 Functions of the Microenterprise
Nowadays, administration is a social science that aims to maximize the resources of a company, as well as the techniques and procedures available to manage the administrative process; however, micro-companies, due to the nature of the economic unit and its activities, apply empirical knowledge and intuition according to their needs. This process has undergone an important evolution over time, expanding its theories and approaches in the different administrative schools: the scientific, classical, humanistic, bureaucratic, contingency, and modern. Management applies to any type of economic activity carried out by the micro-enterprise, such as production, services, and commerce. Microenterprise management rests on four fundamental pillars: planning, organization, management, and control, which sustain the business and give it significant growth [5]. Planning is the administrative function that establishes the analysis of the current situation of a company, the definition of objectives, the formulation of strategies, and the design of long-term action plans. It identifies what is to be done, how, where and when it will be done and how much it will cost, as well as the development and implementation of the different strategies. The next important pillar in this process



is organisation, which identifies methods and procedures to order, control and direct a company through its departments, resources, and processes, in order to achieve its predefined goals or objectives. The management process comprises the activities aimed at the leadership, motivation, conduction, and control of the efforts of the group of individuals that make up the business, towards the fulfilment of certain objectives [6]. Finally, there is control, aimed at evaluating and measuring the execution of the different plans, where deviations are detected and foreseen in order to establish the necessary corrective measures. Social capital, seen from a management perspective, is the building of relationships of trust and joint work, in which the microenterprise interacts at the local level with its customers, suppliers, society and even competitors. Microenterprises seek to generate stronger interaction and communication links with their customers, thus allowing them to cover their working capital and a personal profit margin [7]. Knowledge capital has become the most important intangible asset of a business from the standpoint of economic management. The degree of knowledge that an organisation possesses gives it a privileged position to compete in the market. An effective knowledge management strategy gives rise to innovative actions to generate products, services, processes and systems that optimise the company's resources and capabilities; however, [9] the administrative process of microenterprises is empirical, tied to the capacity of the entrepreneurs to set objectives and work to achieve them, without theoretical support or administrative bases to back up their decision-making, thus generating instability or failure of the businesses in the market [8].

2.3 From Empiricism to Strategy in Micro-enterprise
The main purpose pursued by any economic unit, regardless of its size, type, sector, or activity, is economic performance. In this context, micro-enterprises in the Ecuadorian economic structure, according to statistical data from INEC, represent a high percentage of national economic activity. Micro-enterprises, in particular, are managed empirically, as the knowledge of their managers does not provide the business capability that micro-enterprises need to be efficient. Mintzberg proposes ten types of strategies for business management, which can be adapted to the reality of the micro-enterprise to counteract the lack of knowledge of its founders. Due to its nature, the microenterprise executes strategies derived from the strategic school of learning, as its actions are empirical; it is therefore necessary to promote cognitive strategic thinking, which consists in developing management knowledge, capacities and skills for the efficient performance of the micro-enterprise [9]. On the other hand, the commercial function in micro-enterprises is the activity that is visible and sensitive to the consumer market; in this sense, diversification strategies can be implemented to identify new business opportunities. One can think of a modern marketing strategy with new product lines, involving technology, design, and production, in order to serve a market that is changing the way it consumes [3].



2.4 Scenarios for Micro-enterprise Management
Scenarios are management tools that allow a business to position itself in the market. They make it possible to build possible desired futures in a simulated space of analysis and projection of reality in a short time, generating in managers the skills and abilities needed for the efficient management of their resources. Scenarios are desirable because they rest on an open and interactive process that generates strategic and flexible behaviors in managers, thus ensuring a link between foresight and micro-enterprise management [10]. A micro-enterprise that does not sustain or know the reality of its business in the market does not achieve a position within its industry, because it does not clearly identify the factors of business success or failure; for this reason micro-enterprises cannot consider future scenarios aligned with the achievement of the desired objectives, since they remain subject to basic production and regulatory scenarios.

2.5 Decision-Making in Micro-enterprise
Decision-making is the process by which an alternative is selected from among several when solving a problem, or the identification of actions that affect the company as a whole. There are different types of decision-making within a company, often depending on its size, as the process is not the same in large companies, where delegation is involved to help manage them [11]. Decisions should be based on desired objectives, expected results and performance analysis which, in turn, is linked to market behavior and objectives. Running a micro-enterprise helps one understand the importance of the different functions and their impact on each other. Making decisions repeatedly while looking at all facets of the business, backed by analysis, helps to improve decision-making skills [11]. Decision-making is strategic in nature, as it will generate possible positive or negative scenarios for a desired future; in order to make an effective decision, the manager must be aware of the resources and processes available at present and the objectives they wish to achieve.

3 Method
This article follows a qualitative approach and the research is exploratory, which fits the nature of the object of study. The research techniques are bibliographic analysis based on documentary review and field data collection, using a questionnaire administered as a survey to 242 micro-enterprises.

3.1 Bibliometric Analysis
After the bibliographic review, and supported by data analytics instruments related to the object of study, a bibliometric analysis was carried out to identify the relevant fields of knowledge, theories, models, and approaches. For this purpose, the Bibliometrix software was



used to import bibliographic data from the Scopus, Web of Science (Clarivate Analytics), PubMed, and Cochrane databases, thus constructing data matrices of citations, couplings, and correlated-word analyses. The proposed search equation is presented in Table 1:

Table 1. Search equation

Equation: ALL ("small business") AND ("decision-making")
Result (documents): 575
Database: Scopus

Bibliometric analysis on micro-entrepreneurs and decision making.

3.2 Co-occurrence Analysis
Considering the information provided by the Scopus database, the records were processed in the VOSviewer software and a co-occurrence analysis was performed, identifying the joint appearances of keywords across the articles analyzed; in this case the words “micro-finance”, “decision making”, “micro-enterprise” and “social capital” stand out. This can be seen in the following image (Fig. 1):

Fig. 1. Co-occurrence analysis from Scopus database.
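To make the counting step concrete, the following is a minimal sketch of the keyword co-occurrence tally that VOSviewer computes from exported Scopus records; the keyword lists below are illustrative placeholders, not the study's data.

```python
# Hypothetical sketch of keyword co-occurrence counting on exported records;
# the article keyword lists are illustrative placeholders only.
from itertools import combinations
from collections import Counter

article_keywords = [
    ["micro-finance", "decision making", "social capital"],
    ["micro-enterprise", "decision making"],
    ["micro-finance", "micro-enterprise", "social capital"],
]

cooccurrence = Counter()
for keywords in article_keywords:
    # each unordered keyword pair counts once per article in which both appear
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for (a, b), count in cooccurrence.most_common():
    print(f"{a} - {b}: {count}")
```

In VOSviewer, the link strength between two keywords corresponds to joint-appearance counts of this kind.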

3.3 Bibliographic Coupling
Another result obtained by processing the information in VOSviewer is the bibliographic coupling, where the documents are grouped by subject cluster and, depending



Fig. 2. Information processing in VOSviewer

on how many times the articles have been cited, the program places a larger or smaller circle; for this case the result can be seen in the following image (Fig. 2). The bibliographic documentary analysis was carried out with a database of more than twenty indexed articles on the topic of micro-enterprise management towards the construction of scenarios for decision-making. Finally, from the documents analysed, Bibliometrix produces a word cloud with the key terms most relevant to the query; in this case the keywords of the articles were used, and the result can be seen in the following image (Fig. 3):

Fig. 3. Bibliometric keyword analysis
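Returning to the coupling step shown in Fig. 2, a minimal sketch follows of how bibliographic coupling strength can be counted (two documents are coupled when they cite the same references); the cited-reference sets are illustrative placeholders, not the study's corpus.

```python
# Hypothetical sketch of bibliographic coupling strength between documents;
# the cited-reference sets below are illustrative placeholders only.
from itertools import combinations

cited_refs = {
    "article_1": {"ref_a", "ref_b", "ref_c"},
    "article_2": {"ref_b", "ref_c", "ref_d"},
    "article_3": {"ref_e"},
}

coupling = {
    (a, b): len(cited_refs[a] & cited_refs[b])   # number of shared references
    for a, b in combinations(sorted(cited_refs), 2)
}
for (a, b), shared in coupling.items():
    print(f"{a} - {b}: {shared} shared references")
```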

3.4 Survey Instrument
The instrument was constructed following the author Luis Ledezma, who applied an exploratory documentary-review and field methodology to identify the research



elements referring to the theory of social capital, prioritizing the relationship between the company and its clients, and the social cognitive theory, focused on the knowledge acquired from interaction with the community, in which he applied a self-administered survey to 33 people from ININ management. Following the methodology mentioned above, an instrument of sixteen items was elaborated for the present research, focused on the study variable Micro-entrepreneurial Management, divided into the dimensions of social capital and cognitive capital (Table 2).

Table 2. Rating of items according to dimension.

Social capital                   Environment factor    Cognitive capital             Environment factor
Identification                   Yes                   Sector                        Yes
Workers                          Yes                   Economic activity             Yes
Employment relationship          No                    Tax regime                    Yes
Affiliation to the IESS          No                    Product marketing             No
Number of affiliated partners    No                    Merchandise turnover          No
Sources of funding               No                    Organisation of activities    No
Operation of the building        No                    Working capital               Yes
Empirical knowledge/intuition    Yes                   Technical training            Yes

4 Results
The survey was applied to 242 micro-enterprises in the southern sector of Quito, from which the following data were obtained. On the “Social Capital Dimension”, the majority of micro-entrepreneurs recognize themselves as micro-enterprises (87%); the number of employees fluctuates between 2 and 3 workers (57.9% and 21.9%, respectively), with 20.2% employing 4 or more; most workers were reported to be employees (61.6%), family members (31.8%) or acquaintances (6.6%); a large percentage of the employees are multifunctional (59.1%) and the rest work in specific roles such as salespersons and cashiers (40.9%). The employment relationship is mostly informal (83%), equivalent to not being affiliated to the IESS; their source of financing is their own (60%); and they operate in a property they do not own (85.1%) or under other forms of use of physical space (14.8%). In the “Cognitive Capital Dimension”, 92% of the micro-enterprises are engaged in marketing activities, 73% are under the simplified tax regime, they do not generate promotions, do not keep control of merchandise inventories,



do not organize their activities and therefore do not keep control of their working capital. Finally, 55.8% of the micro-enterprises develop their economic activities through empirical knowledge and 44.2% through technical training.

5 Discussion
According to the results obtained in the analysis presented in the bibliography section, large and interrelated knowledge clusters have been identified, such as micro-enterprise management, micro-enterprises, knowledge management, social management, scenarios, decision making and capital management. On the other hand, micro-enterprises today have shown essential growth in the Ecuadorian market, representing 90.9% of the country's enterprises; however, micro-enterprise management is conducted in an intuitive manner, and certain factors impede their positioning:

• They do not correctly define their economic activity.
• They do not know the current market situation of the business.
• They are unable to adequately recover working capital.
• Management and administration are empirical.
• Management is oriented to the Simplified Regime for Entrepreneurs and Popular Businesses.
• They have no access to funding sources.
• They make little use of technology and digital tools for the development of their activities.
• Most workers are multi-functional.
• No short-, medium- and long-term goals and objectives are set.
• They do not have the necessary resources to carry out their activities properly.

This information is based on data collected from a sample of 242 micro-enterprises in the Quito sector. In this sense, micro-enterprise management is affected by the incorrect decisions made by their administrators: their knowledge of business management is only theoretical, and they cannot apply it due to the lack of various resources, which means that the businesses are unable to establish future scenarios and are prevented from notable growth and sustainability over time [12]. Poor decisions in micro-enterprises occur because the manager does not plan for future scenarios that would allow him/her to anticipate various events. With all the information identified above, it can be proposed that the manager or owner be trained in micro-enterprise management for strategic decision making, which will help him/her to identify possible strategic scenarios, such as:

• Sustainability
• Corporate social responsibility
• Knowledge management
• Economic and business development

These main scenarios, if well identified, open the way for micro-enterprises to have higher productivity, a good management of new technologies, innovation capacity and



a high degree of competitiveness. All of this is combined with the creation of microenterprise management simulators that will allow the identification of future scenarios where they will be able to develop efficiently in the market to which they belong [10] (Fig. 4).

Fig. 4. Factors for the construction of microenterprise management scenarios for decision-making

6 Conclusions
It is difficult to define a micro-enterprise because there is no clear concept, but it can be described as a small-scale social and economic unit that, according to the industry in which it operates, carries out different activities including production, services, and marketing processes. Proper micro-enterprise management through scenario building will generate a policy of permanent innovation, the application of strategic marketing, a human resources policy, management by objectives, total quality, and process re-engineering. Depending on the simulation, individuals may have different decision-making models, adopt different change management strategies, and react differently to group dynamics, leadership styles, sales techniques, or team decision-making practices. Simulators are tools used in various fields; simulation is not limited to digital tools, since it has been practiced since ancient times with paper and other concrete materials. A simulator approximates the development of usual activities without putting resources at risk; in the words of Gary Summers, computer-based simulations can be seen as business games of companies or industries.

References 1. Agilar Castro, A., Puerto Becerrra, D.P.: Crecimiento empresarial basado en la responsabilidad social (2012) 2. Artega Coello, H.: La ciencia de la administración de empresas. Portoviejo (2016)



3. Garzón Castrillón, M.A., Ibarra Mares, A.: Innovación empresarial, difusión, definiciones y tipología: Una revisión de literatura (2018) 4. Seclén Luna, J.P.: Gestión de la innovación empresarial: un enfoque multinivel, Revista de las ciencias de la Gestión (2016) 5. Riemenschneider, C., Harrison, D., Mykytyn, P., Jr.: Understanding it adoption decisions in small business: integrating current theories. Inf. Manag. 40(4), 269–285 (2003) 6. Velte, P., Stawinoga, M.: Integrated reporting: the current state of empirical research, limitations and future research implications. J. Manag. Control 28(3), 275–320 (2017) 7. Shevchenko, A., Pan, X., Calic, G.: Exploring the effect of environmental orientation on financial decisions of businesses at the bottom of the pyramid: Evidence from the microlending context. Bus. Strat. Env. 29(5), 1876–1886 (2020) 8. Malhotra, R., Temponi, C.: Critical decisions for ERP integration: small business issues. Int. J. Inf. Manag. 30(1), 28–37 (2010) 9. Jennings, P., Graham, B.: The performance and competitive advantage of small firms: a management perspective. Int. Small Bus. J. 15(2), 63–75 (1997) 10. Tello Rios, J.P.: Desarrollo de un simulador para la valoración de empresas en Santander. Santander (2022) 11. Khanzode, A.G., Sarma, P.R.S., Mangla, S.K., Yuan, H.: Modeling the Industry 4.0 adoption for sustainable production in Micro, Small & Medium Enterprises. J. Clean. Prod. 279, 123489 (2021) 12. Fassin, Y., Van Rossem, A., Buelens, M.: Small-business owner-managers’ perceptions of business ethics and CSR-related concepts. J. Bus. Ethics 98, 425–453 (2011) 13. Rodrigues Olalla, A., Aviles Palacios, C.: Integrating sustainability in organisations: an activity-based sustainability model. Sustainability 9(6), 1072 (2017)

Analysis of Business Efficiency Considering the Influence of the Particular Events on Sales Increase Period 2016–2020 Ximena Elizabeth Cayambe Badillo(B) , Willman Leonel Bravo Espinoza , Luis Alberto Carrera Toro , and Hilberth Alexis Villalba Bejarano Instituto Tecnológico Universitario Rumiñahui, Av. Atahualpa 1701 y 8 de Febrero, Sangolquí, Ecuador [email protected]

Abstract. The behavior of the efficiency scores of 26 companies during the period 2016–2020 was analyzed, considering the records of the Superintendencia de Compañias y Seguros de Ecuador. The problem studied is the influence of particular events on the increase in sales and, as a consequence, on the movement of efficiency scores driven by the rise in demand. The objective is to determine the movement of efficiency scores between 2016 and 2020 caused by this increase in demand, considering the economic scenario before and during the pandemic. The DEA (Data Envelopment Analysis) methodology was used to obtain the efficiency levels resulting from the use of assets and equity to achieve profits; in addition, the evolution of the scores was analyzed with bar charts and percentage calculations. The result is a cycle that starts with a bullish phase, in which the efficiency scores grow because sales increase due to specific temporary events, followed by a bearish phase, in which sales decrease because these events disappear, with an oscillation of great intensity between 2019 and 2020. It was concluded that the efficiency scores of 77% of the companies grew between 2016 and 2017, decreased in 92% of the companies in 2017–2018 and 2018–2019, and, finally, grew in 100% of the companies between 2019 and 2020. Keywords: MSMEs · Finance · Efficiency · DEA · Commerce · Evolution · Cycle · Demand · COVID-19

1 Introduction
Globalization [1] has resulted in the dependence of less developed countries on those with better economic conditions at the macro and micro levels, considering the availability of access to education, technology and sources of financing [2]. In this context, it is important to review the evolution of efficiency levels in companies, given the importance of achieving a competitive advantage [3] in the country by taking advantage of periods of prosperity that allow companies to achieve sustainability, rather than only momentary increases in sales resulting from the impact of particular demands [4], for example pandemics or a specific political environment [5].



In our country, considering the data recorded in the Directorio de Compañias y Establecimientos (DIEE) 2020 published by the Instituto Ecuatoriano de Estadisticas y Censos (INEC), trade and services are the main economic sectors, generating 38% and 26% of sales nationwide, respectively; followed by manufacturing industry with 21%, mining and quarrying with 6%, agriculture, livestock, forestry and fishing with 6.99% and construction with 2.4% [6]. The companies reviewed belong to the segment of micro, small and medium-sized companies, i.e., those with revenues between $0 and $5,000,000 [7], and are also under the control of the Superintendencia de Compañias y Seguros del Ecuador. According to information from this entity (SCVS), within commerce the sector of “Wholesale and retail trade; repair of motor vehicles and motorcycles” has the largest share in the Ecuadorian economy [8], with an average share of 41% between 2016 and 2020; within it, the subsector of wholesale of pharmaceutical products, including veterinary products (CIUU code G4649.22), has the third highest share in revenues (on average 6% between 2016 and 2020) and the third highest number of companies. For this reason, the study focused on the analysis of companies belonging to that subsector. At the international level, one of the sectors less negatively affected by the pandemic was the pharmaceutical sector. Filiberto Enriquez [9] mentions that analyses of stock market movements caused by pandemic diseases and their effects on specific industries highlight the defensive character of health equipment, as well as of the pharmaceutical and biotechnology industries [9]. Why is it necessary to review the evolution of efficiency scores? To identify the behavior of efficiency levels over a 5-year period, i.e., the use of the companies' resources, since this directly affects the prices of the products and/or services of the companies in the subsector studied [10]. The objective is to determine the impact of the increase in particular demand on the movement of efficiency scores between 2016 and 2020, based on the review of information from the Superintendencia de Compañias as of April 2022. The articles reviewed are presented below (see Table 1):



Table 1. Articles reviewed

Research topic: Data envelopment analysis (DEA) for the study of technical efficiency in health systems: a literature and methodological review in the Ecuadorian context [11]
Objective: To conduct an exhaustive review of the methodology used to measure technical efficiency in the health sector.
Conclusion: The best methodology for measuring efficiency in health systems would be the DEA, since it is flexible in the use of both input and output variables, making simultaneous comparisons and calculating technical and allocative efficiency in terms of resources saved and output achieved.

Research topic: Analysis of SMEs (small and medium-sized enterprises) in Ecuador, their importance within the country's economy [12]
Objective: To review the participation of SMEs in the Ecuadorian business environment, in order to know their economic situation and their contribution to the creation of employment and wealth.
Conclusion: SMEs are a very important part of the country's economy, but they are also a very fragile segment because they do not have sufficient financial support to face the uncertainty of their environment.

Research topic: Cluster method-discriminant analysis-data envelopment analysis to classify and evaluate business efficiency [10]
Objective: Identify characteristic profiles of small and medium-sized exporting companies and evaluate their business efficiency, in order to support improvement processes in their results.
Conclusion: The need to improve decision-making in the operating conditions of the different profiles, in such a way that technical efficiency is improved and better results can be reflected in the financial statements of the companies under study.

The DEA (Data Envelopment Analysis) methodology [13] is used to establish the efficiency scores in the use of resources of the selected companies. Three variables were selected: two inputs (assets and equity) and one output (net income), which are used to determine the efficiency scores based on the use of assets and equity to obtain profits; to show the evolution of the efficiency scores, bar charts and percentage calculations were used. The result is a cycle [14] with a bullish phase, where the efficiency scores grow due to the increase in sales caused by specific temporary events, followed by a bearish phase in which sales decrease as these events end, with a great peak between 2019 and 2020 that could nevertheless cause an economic problem for the companies of the sector studied. It is concluded that between 2016 and 2017, 77% of the companies grew in their efficiency scores, while between 2017–2018 and 2018–2019, 92% of the companies reviewed had a decrease in their efficiency scores; between 2019 and 2020, by contrast, 100% of the companies increased their efficiency scores due to a particular event, the pandemic [15].



which 100% of the companies increased their efficiency scores due to the particular event such as the pandemic [15]. The present research is intended for researchers, public and private institutions interested in knowing the movement of the efficiency scores of several periods in the companies of the subsector G4649.22 - wholesale of pharmaceutical products, including veterinary products; considering the efficient use of assets and equity (variables chosen as inputs). This document is divided into 5 parts: 1. Introduction, 2. Methodology and Methods, 3. Results, 4. Discussion, 5. Future Work and Conclusions, and References.

2 Methodology and Methods
The methodology used for the analysis of the efficient use of resources of small and medium-sized companies, as well as micro businesses, as registered on the web page of the Superintendence of Companies of Ecuador, is DEA data envelopment analysis with emphasis on the CCR-I formulation, that is, input-oriented, whose mathematical expression is given below (see Table 2 and Fig. 1):

Table 2. DEA-CCR envelopment model (primal–dual correspondence)

Primal constraint (model 2.3)      Dual variable (model 2.5)
δ^T x_0 = 1                        θ
μ^T Y − δ^T X ≤ 0                  λ ≥ 0

Primal variable (model 2.4)        Dual constraint (model 2.5)
μ^T ≥ 0                            Y λ ≥ y_0
δ^T ≥ 0                            θ x_0 − X λ ≥ 0

Fig. 1. DEA-CCR envelopment model
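As an illustration of how the input-oriented CCR envelopment model above can be solved per DMU with a generic linear-programming solver, a minimal sketch follows; SciPy and the toy data are assumptions made for this example, not the DEA software the authors report using.

```python
# Hypothetical sketch: input-oriented CCR efficiency (envelopment form).
# X holds inputs (rows: assets, equity), Y holds the output (net profit);
# columns are DMUs. The numbers are illustrative only.
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y):
    m, n = X.shape          # m inputs, n DMUs
    s = Y.shape[0]          # s outputs
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # minimise theta; vars = [theta, lambda_1..n]
        A_out = np.hstack([np.zeros((s, 1)), -Y])   # enforces Y @ lam >= y_o
        b_out = -Y[:, o]
        A_in = np.hstack([-X[:, [o]], X])           # enforces theta * x_o - X @ lam >= 0
        b_in = np.zeros(m)
        res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.r_[b_out, b_in],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

X = np.array([[100.0, 80.0, 120.0, 90.0],   # assets per DMU
              [ 60.0, 50.0,  70.0, 40.0]])  # equity per DMU
Y = np.array([[ 20.0, 18.0,  15.0, 22.0]])  # net profit per DMU
print(ccr_input_efficiency(X, Y))           # 1.0 marks an efficient DMU
```

An efficient DMU obtains θ = 1; inefficient units obtain θ < 1, the proportional input reduction that would project them onto the frontier.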

To explain the methodology, it is necessary to consider what is indicated by Farrell [16] in relation to the following:



Farrell's proposal is to visualize efficiency from a real, not ideal, perspective, where each production unit is evaluated in relation to others taken from a representative and comparable group. Thus, efficiency measures would be relative and not absolute, where the value reached by a given production unit corresponds to an expression of the deviation observed with respect to those considered as more efficient given the available information. In this sense, the methodology proposed by Farrell is a technique based on the “benchmark” concept (p. 12). In this context, the analysis was carried out considering a set of productive units (DMUs, decision-making units), comparable among themselves in terms of data availability and number of years elapsed since their incorporation within the same economic subsector (CIUU code G4649.22). The DEA software was used for the analysis and the following variables were identified (see Fig. 2):

[Diagram: Input 1 – Assets and Input 2 – Equity enter each DMU (decision-making unit, i.e., the sector companies), which produces the output Net Profit (net income).]

Fig. 2. INPUT and OUTPUT variables

Figure 2 shows the variables used to compute the efficiency scores based on the use of assets and equity to achieve profits in the companies. The information used for the DEA analysis came from small and medium-sized companies, as well as micro businesses, taken from the web page of the Superintendence of Companies for the years 2016, 2017, 2018, 2019 and 2020, with a cut-off as of April 2022. The total number of companies registered under CIUU code G4649.22 - Wholesale of pharmaceutical products, including veterinary products, was 704 across all years. However, it was necessary to exclude companies for the reasons indicated in Table 3:


Table 3. Reasons for excluding companies from the analysis

Reasons for exclusion                      2016   2017   2018   2019   2020
Companies with negative or zero equity       77     65     73     83     93
Companies with no income                    103    130    137    163    146
Companies reporting losses                   20     85     24      0    142
Total companies excluded                    200    280    234    246    381
Total companies to be reviewed              263    222    310    324    200

Subsequently, 460 companies were identified as having been in operation in all the years under analysis; however, information was not available for all periods, so it was necessary to consider only those with complete information. The companies without data in all periods numbered 434, i.e., the number of companies to be analyzed was reduced to 26.
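A minimal sketch of how the Table 3 exclusion rules can be applied to a table of SCVS records follows; pandas, the column names and the values are illustrative assumptions, not the authors' tooling or data.

```python
# Hypothetical sketch of the exclusion rules in Table 3; column names and
# values are illustrative assumptions, not the actual SCVS data.
import pandas as pd

records = pd.DataFrame({
    "company": ["A", "B", "C", "D"],
    "equity":  [1000.0, -50.0, 300.0, 120.0],
    "income":  [500.0, 200.0, 0.0, 80.0],
    "profit":  [40.0, 10.0, 5.0, -15.0],
})

excluded = (
    (records["equity"] <= 0)    # negative or zero equity
    | (records["income"] == 0)  # no income reported
    | (records["profit"] < 0)   # reporting losses
)
reviewable = records[~excluded]
print(f"{excluded.sum()} companies excluded; {len(reviewable)} to be reviewed")
```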

3 Results To begin with, the evolution of DMUs (decision making units) was analyzed according to the records of the Superintendencia de Compañias, considering those of CIU code G4649.22 - Wholesale of Pharmaceutical Products, Including Veterinary Products, in the analysis, a sustained increase in number of companies could be verified between 2016 and 2020 (see Fig. 3).

Fig. 3. DMU evolution 2016–2020. [Bar chart: evolution of the number of companies in subsector G4649.22 (wholesale of pharmaceutical products, including veterinary products): 463 in 2016, 502 in 2017, 544 in 2018, 570 in 2019 and 581 in 2020.]

At the same time, the evolution was analyzed considering the size of the companies, and a sustained growth of micro businesses was noted, according to the Superintendence of Companies, as shown in Fig. 4 below.



[Bar chart: evolution of DMUs 2016–2020 by company size (micro businesses, small businesses and medium-sized companies).]

Fig. 4. Classification of companies by size 2016–2020

Along with the analysis of the evolution of the DMU, the total number of companies was reviewed, according to the classification by company size, and it was found that 54% are micro businesses, 30% are small businesses, and 16% are medium-sized companies. An efficiency analysis of the companies in the economic subsector wholesale of pharmaceutical products, including veterinary, was carried out, applying the DEA data envelopment analysis considering the data corresponding to the year 2020, based on which the most efficient DMUs were: the Pharmaceutical Distributor El Oro Orofarmacias S.A.; Pharmaceutical Distributor Farmactiva Cia.Ltda.; F&F Groupmedical Cia.Ltda.; Pharmaceuticals, Supplies and Medicines - Farinmedsa S.A.; Multiservices Rojas Cia.Ltda. Multired; Valne S.A, however, could not verify the evolution of its efficiency scores in the years: 2016, 2017, 2018, 2019, because the information has not been fully registered with the Superintendence of Companies, therefore, the inclusion of these companies in this analysis is not possible. It should be remembered that of the companies analyzed as efficient in 2020, 50% of the companies were micro-enterprises and the other 50% were small companies. To show the evolution of the efficiency levels in the use of their assets and equity, to obtain profits during the years 2016, 2017, 2018, 2019, and, 2020, we used the scores resulting from the analysis performed with the DEA CCR INPUT model, of those companies that register all the data required for the review in the aforementioned study years. The companies selected for the analysis are (see Table 4):



Table 4. Companies with information registered in the years 2016–2020 within CIUU code G4649.22 - Wholesale of pharmaceutical products, including veterinary products.

Bivalvia S.A; Consorcio De Servicios Industriales Comerciales Consei S.A; Cristalia Del Ecuador S.A; Dermosalud S.A; Equifarm S.A; Especialidades Farmaco Veterinarias Llaguno C Ltda; Faes Farma Del Ecuador S.A; Farmacos Basicos (Basicfarm) S.A; Farmarapid Cia. Ltda; Faxare S.A; Garcia Vintimilla E Hijos Cia. Ltda; Genecom Cia. Ltda; Glucosamina S.A; Imnac Importadora Nacional Cia. Ltda; Jusanco Cia Ltda; Laboratorios Santa Rita Labsantarita S.A; Mercanser S.A; Moleculas Biologicas Biomolec Cia. Ltda; Natural Herbs S.A. Natuherb; Pharmavaccine S.A; Promotora Farmaceutica Ecuatoriana Profarmec Cia. Ltda; Provenco C.Ltda; Sinmedic S.A; Teco-Gram S.A; Totalcorp S.A; Victalabsa Laboratorios Del Ecuador S.A

From the 26 companies selected for the analysis, 58% were located in the Costa region and 42% in the Sierra, 54% in Guayas, 38% in Pichincha and 8% in Ibarra and Manabí. 54% are small companies; 38% being medium-sized companies and 8% are micro businesses, see Fig. 5.

[Bar chart: of the 26 companies selected, 14 are small businesses, 10 are medium-sized companies and 2 are micro businesses.]

Fig. 5. Ranking of the 26 companies analyzed by size 2016–2020.

For the analysis of the evolution of the scores, the data were subjected to the normality test using the SPSS system, according to the following data (see Table 5 and Table 6).



Table 5. Normality test for the statistical analysis

Statistical test: Shapiro-Wilk (sample smaller than 50 observations)
Hypotheses:
H0: the efficiency scores follow a normal distribution
H1: the efficiency scores do not follow a normal distribution
Significance: α = 5% = 0.05
p-value: practical p
Decision rule: if p > α, the null hypothesis is retained; if p < α, the alternative hypothesis is accepted
Conclusion: the efficiency scores do not follow a normal distribution, i.e., a descriptive (parametric) analysis cannot be performed

Table 6. Shapiro-Wilk test (SPSS)

            Statistic   gl   Sig.
Year 2016   .623        25   .000
Year 2017   .704        25   .000
Year 2018   .802        25   .000
Year 2019   .768        25   .000
Year 2020   .874        25   .005
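The same check can be reproduced outside SPSS; a minimal sketch with SciPy follows, where the score vectors are illustrative placeholders rather than the study's data.

```python
# Hypothetical sketch of the Shapiro-Wilk normality check; the efficiency
# score vectors below are illustrative placeholders, not the study's data.
from scipy import stats

scores_by_year = {
    2016: [0.12, 0.34, 0.08, 1.00, 0.21, 0.05],
    2020: [0.55, 0.80, 1.00, 0.62, 0.91, 0.47],
}

alpha = 0.05
for year, scores in scores_by_year.items():
    statistic, p_value = stats.shapiro(scores)
    decision = "normal" if p_value > alpha else "not normal"
    print(f"{year}: W={statistic:.3f}, p={p_value:.3f} -> {decision}")
```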

The result of the analysis shows that the alternative hypothesis is accepted, i.e., the efficiency scores do not follow a normal distribution [17]; therefore, a descriptive (parametric) analysis cannot be performed. For the analysis of the evolution of the efficiency scores obtained with the DEA methodology, bar charts and percentage references were prepared, and the data were grouped by the number of years the companies have been in the market, according to the following detail (see Table 7).

Table 7. Number of companies grouped by years since their creation

Group (years since establishment)   Number of companies
7–11 years                          9
12–17 years                         8
18–22 years                         3
23–27 years                         2
28–32 years                         1
33 years and up                     3
Total                               26



Table 7 shows the companies grouped according to the number of years of operation, calculated from their date of establishment.

[Bar chart: efficiency scores of the nine companies in the 7–11 year group for each year 2016–2020.]

Fig. 6. Scores analysis (Companies 7–11 years old) period 2016–2020.

Figure 6 shows the behavior of the following companies: 1 Faes Farma Del Ecuador S.A., 2 Farmarapid Cia. Ltda., 3 Garcia Vintimilla E Hijos Cia. Ltda., 4 Imnac Importadora Nacional Cia. Ltda., 5 Jusanco Cia Ltda., 6 Laboratorios Santa Rita Labsantarita S.A., 7 Moleculas Biologicas Biomolec Cia. Ltda., 8 Pharmavaccine S.A., 8 Pharmavaccine S.A. 9 Promotora Farmacéutica Ecuatoriana Profarmec Cia. Ltda.; the information reviewed shows an average growth of efficiency scores of 111% between 2016 and 2017, while between 2017 and 2018 the average decrease is 73%, in the same way, between 2018 and 2019 there was a decrease of 20%, not so between 2019 and 2020 year affected by a global crisis in which a growth of 3280% is shown. With respect to Fig. 7, the companies analyzed are: 1 Cristalia Del Ecuador S.A., 2 Dermosalud S.A., 3 Equifarm S.A., 4 Faxare S.A., 5 Glucosamine S.A., 6 Mercanser S.A., 7 Natural Herbs S.A. Natuherb, 8 Victalabsa Laboratorios Del Ecuador S.A., which presented an average growth of Scores between 2016 and 2017 of 198%, between 2017 and 2018 there is a decrease of 75%, between 2018 and 2019 an average decrease of 16%, and, as a consequence of the pandemic between 2019 and 2020, there was an average growth of 1,199% In Fig. 8 the efficiency Scores of the following companies were analyzed: 1 Genecom Cia. Ltda., 2 Teco-Gram S.A., 3 Totalcorp S.A., in the analysis it was possible to determine an average growth of 18% between 2016 and 2017, not so in the following period that an average decrease of 76% was obtained, in the same way in the period 2018– 2019, an average decrease of 62% was obtained, and finally, in the period 2019–2020, an average growth of 7.275% was shown. In relation to Fig. 9, the efficiency scores analyzed were for the companies: 1 Bivalvia S.A., and, 2 Consorcio De Servicios Industriales Comerciales Consei S.A., in which an average growth of 119% was found in the period 2016–2017, but not in the periods



[Bar chart: efficiency scores of the eight companies in the 12–17 year group for each year 2016–2020.]

Efficiency Scores

Score Analysis (Companies 18-22 years old)

1.50

2020

1.00

2019

0.50

2018 2017

1

2

3

2016

Companies

Fig. 8. Score analysis (Companies 18–22 years old) period 2016–2020

2017–2018 and 2018–2019 in which there was an average decrease of 90% and 46% respectively, finally in the period 2019–2020 there was an average growth of 3,217%. Figure 10 presents the analysis of the efficiency scores of the company FARMACOS BASICOS (BASICFARM) S.A., which between 2016 and 2017 grew by 435%, while in the period 2017–2018 it decreased by 87%, and in the period 2018–2019 it also decreased by 79%, and, finally, in the period 2019–2020 it grew by 908%. In Fig. 11 we reviewed the results of efficiency score of the companies: 1 Especialidades Farmaco Veterinarias Llaguno C Ltda, 2 Provenco C.Ltda., 3 Sinmedic S.A., which presents an average growth of 623% in the period 2016–2017, later in the period 2017–2018 there is an average growth of 209%, not so in the period 2018–2019 that shows a decreasing of 48%, meanwhile in the period 2019–2020 there is a growth of 2,589%.

596

X. E. Cayambe Badillo et al.

Score Analysis (Companies 23-27 years old)

Efficiency Score

0.50 0.40 0.30 0.20 0.10 -

2020 2019 2018 2017 1

2016

2 Companies

Fig. 9. Score Analysis (Companies 23–27 years) period 2016–2020

0.25000

Score Analysis (Companies 28-32 years old) FARMACOS BASICOS (BASICFARM) S.A. 0.23480

0.20000 0.15000 0.10000 0.05000

0.06831

0.04388

0.03170

0.00678

2016

2017

2018

2019

2020

escores Fig. 10. Score analysis (Companies 28–32 years) period 2016–2020

4 Discussion This analysis shows the evolution of the efficiency scores of the companies in the subsector of wholesale of pharmaceutical products, including veterinary products (CIUU code G4649.22), evidencing a cycle in which variations are shown due to the occurrence of the particulars events, and as a result it was found: A growth of the efficiency score between the year 2016 and 2017, for the 6 groups of companies, mainly due to the increasing sales, due to the growth of fiscal spending aimed at boosting economic activities in the health sector, since in 2017 the INEC records a gross formation of public fixed capital in health of 780 (millions of dollars).

Analysis of Business Efficiency Considering the Influence of the Particular Events

597

Efficiency Score

Score Analysis (Companies 33 years and older)

0.70 0.60 0.50 0.40 0.30 0.20 0.10 -

2020 2019 2018 2017 2016 1.00

2.00

3.00

Companies Fig. 11. Score analysis (Companies 33 years and older) period 2016–2020

A drop in the efficiency score between 2017–2018 and 2018–2019 in the companies analyzed and classified in 6 groups, due to the change of economic policies in relation to the health sector, due a new Government. A growth in the efficiency score in the period 2019 and 2020 as a result of the increase in demand, due to the appearance of a global pandemic, which increased out-of-pocket spending on health as a percentage of total health spending, from 30 .9% in 2018, and 31.5% in 2019, to 40% in 2020. The two events mentioned created the need for the use of items, supplies and medicines distributed by the companies in the analyzed subsector. In short, an economic cycle was evidenced that shows an upward phase, where the efficiency scores grow due to the increase in sales due to specific temporary events, followed by a downward phase in which there is a decrease in sales, due to the disappearance of these events. Companies must take advantage of high-intensity oscillations to invest and thereby generate sustainability, since the analysis shows an imminent fall after growth, which could affect the future stability of companies in the subsector studied, if not taken timely decisions.

5 Future Work and Conclusions As future work, it is expected to carry out a similar analysis post pandemic and for the same period (from 2021 to 2025), in order to determine the evolution of the efficiency scores. It was concluded that companies in the wholesale of pharmaceutical products, including veterinary products subsector (CIUU code G4649.22), it was not negatively affected by the pandemic event as occurred with other sectors considered non-essential, but also came out of it stronger, since their scores grew exponentially. Additionally, in 2016 and 2017, 77% of the companies increased their efficiency scores, while between 2017–2018 and 2018–2019, 92% of the companies analyzed had a decreasing efficiency score, but not between 2019 and 2020, in which 100% of the companies increased their efficiency scores.



The growth in the two periods mentioned was caused by the appearance of two particular events, among them: the increase in the gross formation of public fixed capital in health, as a consequence of the increase in fiscal spending in health, and the appearance of a global pandemic, which increased the demand in the wholesale subsector of pharmaceutical products, including veterinary products (CIUU code G4649.22), which directly affected the increase in the efficiency scores of the companies analyzed. Finally, it became evident that it is necessary that the companies comply with the timely submission of their financial information to the corresponding supervisory body (SCVS), as this will allow timely monitoring, the respective corporate segment.

References 1. De Loja-ecuador, P.: Globalización post Covid-19: efectos sociopolíticos y económicos del fenómeno 7, 74–88 (2020) 2. Sorroza, N., Jinez, H., Jinez, L., Jinez, B.: Impacto de las pandemias en el comercio internacional y Ecuador. Reciamuc 1–3 (2020). https://www.reciamuc.com/index.php/RECIAMUC/ article/view/474/716. Accessed 06 July 2022 3. Luciani, L., Morales, A., González, A.: Mipymes ecuatorianas: Una visión de su emprendimiento, productividad y competitividad en aras de mejora continua. Coodes 7(3), 2–19 (2019). http://scielo.sld.cu/scielo.php?script=sci_arttext&pid=S2310-340X20190003 00313. Accessed 22 Aug 2022 4. INEC: Cuentas Satélite de Salud Boletín técnico, 1–13 (2021) 5. Lucio, R., López, R., Leines, N., Terán, J.A.: El Financiamiento de la Salud en Ecuador. Revistapuce (2019). https://doi.org/10.26807/revpuce.v0i108.215 6. INEC: Boletin_Tecnico_DIEE_2020. Inec (2021) 7. Presidencia de la Republica: Registro Oficial Suplemento 450 de 17-may (2017). www.lexis. com.ec. Accessed 06 July 2022 8. Superintendencia de Compañias Valores y Seguros: Portal de Información. Información de las empresas que están bajo la vigilancia de la Superintendencia de Compañías Valores y Seguros, 06 July 2022. https://appscvsmovil.supercias.gob.ec/PortalInformacion/sector_soc ietario.html. Accessed 06 July 2022 9. Valdes-Medina, F.E., Saavedra-García, M.L., Gutiérrez-Navarro, A.A.: Los comunicados de la Organización Mundial de la Salud relativos a las pandemias y su impacto en farmacéuticas que integran el índice Standard & Poor’s 500. Estud. Gerenciales, 3–16 (2021). https://doi. org/10.18046/j.estger.2021.158.4162 10. Fontalvo-Herrera, T.J., De La Hoz-Granadillo, E.: Método conglomerado-análisis discriminante-análisis envolvente de datos para clasificar y evaluar eficiencia empresarial. Entramado 16(2), 46–55 (2020). https://doi.org/10.18041/1900-3803/entramado.2.6437 11. Suin Guaraca, L.H., Duque Rodríguez, M.A., Aguirre Quezada, J.C.: Análisis Envolvente de Datos (DEA) para el estudio de la Eficiencia Técnica en los Sistemas de Salud: Una revisión bibliográfica y metodológica en el contexto ecuatoriano. Rev. la Fac. Ciencias Médicas la Univ. Cuenca 38(03), 97–108 (2021). https://doi.org/10.18537/rfcm.38.03.10 12. Rodríguez-Mendoza, R., Aviles-Sotomayor, V.: Las PYMES en Ecuador. Un análisis necesario. 593 Digit. Publ. CEIT 5-1(5), 191–200 (2020). https://doi.org/10.33386/593dp.2020. 5-1.337 13. Pino-Mejías, J.-L., Solís-Cabrera, F.M., Delgado-Fernández, M., Barea-Barrera, R.: Evaluación de la eficiencia de grupos de investigación mediante análisis envolvente de datos (DEA). El Prof. la Inf. 19(2), 160–167 (2010). https://doi.org/10.3145/epi.2010.mar.06

Analysis of Business Efficiency Considering the Influence of the Particular Events

599

14. Pagan, A.: El ciclo económico: Algunas reflexiones sobre la literatura. Papeles Econ. española (1923), 2–15 (2020). https://dialnet.utadeoproxy.elogim.com/servlet/articulo?cod igo=7689675 15. Blanco, A.: América Latina post COVID-19: riesgos y oportunidades del nuevo ciclo económico. Real Inst. Elcano, 1–10 (2021). http://www.realinstitutoelcano.org/wps/portal/ rielcano_es/contenido?WCM_GLOBAL_CONTEXT=/elcano/elcano_es/zonas_es/ari652021-blanco-america-latina-post-covid-19-riesgos-y-oportunidades-nuevo-ciclo-economico 16. Schuschny, A.R.: Método DEA y su aplicación al estudio del sector energético y las emisiones de CO2 en América Latina y el Caribe (46), (2007). http://www.cepal.org/publicaciones/xml/ 8/28668/LCL2657e.pdf 17. Martínez, E.: Estadística. Universidad Abierta para Adultos (UAPA) (2020)

Income from Ordinary Activities and Its Tax Impact on Companies in the Automotive Sector in Ecuador

Aníbal Altamirano Salazar1(B), Carla Valdiviezo Morales2, Ramiro Pastás Gutiérrez1, and Lenin Altamirano Gallegos3

1 Instituto Tecnológico Superior Rumiñahui, Av. Atahualpa 1701 and 8 de febrero, Sangolquí, Ecuador
[email protected]
2 Universidad Técnica de Machala, Av. 24 de Julio and Loja, Machala, Ecuador
3 Politécnico do Porto, Roberto Frias, 4200-465 Porto, Portugal

Abstract. The objective of the International Financial Reporting Standards (IFRS) is to provide guidelines for the preparation of financial statements. The automotive industry seeks to establish parameters for recognizing income from ordinary activities arising from contracts and presenting it correctly in the financial statements. However, many companies in this sector still record revenue according to billing, disregarding the mandatory application of the new revenue recognition model and thereby assuming significant tax risks. To address this, a descriptive and correlational investigation was carried out to measure the degree of relationship between the accounting, financial and tax aspects of revenue, and their subsequent control through auditing, in companies of the Ecuadorian automotive sector over the period 2018 to 2021. The results show that recognizing income in accordance with IFRS has a significant direct positive impact on tax aspects, with an indicator of 0.35. The alternative hypothesis regarding the incidence of the income audit on the financial, accounting and tax impact of automotive companies is also supported, since the global correlation among all components and dimensions of the variables is 0.38. As a final contribution, the hypothesis was verified with a practical case.

Keywords: International Financial Reporting Standards (IFRS) · Income from ordinary activities · Income Tax

1 Introduction

Companies in the automotive sector carry out an activity aimed at satisfying society's mobility needs, boosting the national economy with an annual sales volume of approximately 11 million dollars, and their purpose is to contribute to the country's development through technology transfer and mobility solutions [1]. It should be noted that the main line of business in this industry is the sale of vehicles.


However, complementary activities are also carried out, such as the sale of spare parts, maintenance services, warranties and direct credit. Over the last five years, this sector has seen a considerable increase in dealerships dedicated to selling vehicles of different brands, both new and used. Against this background, automotive companies are obliged to apply the current regulations so that income-generating events are recognized adequately.

In this context, one of the main problems lies in the erroneous recording of accounting transactions within a fiscal year, generated by sales of goods that are recorded immediately without considering the date of delivery. Traditionally, Ecuadorian companies have registered their sales by associating them with an invoice; that is, issuing an invoice has been taken to mean that income already exists, which is a misreading of the tax regulations. The automotive sector is no stranger to this problem, which becomes more acute at year end, when sales are invoiced immediately even though the transfer of control of the acquired good remains pending for the following year for reasons ranging from delivery complications to inventory availability.

Another problem is pointed out by Arroyo and Buenaño [2], who state that companies in the automotive sector contribute significantly to the country's economy through the collection of direct or indirect income. Regarding indirect income, a large share of sales is made through financing programs that embed implicit interest, which is not considered in the accounting of this type of contract, producing imperfect financial information due to the inadequate recording of interest on credit sales.

These companies also offer additional services, such as vehicle maintenance and warranties, in order to preserve the useful life of the asset and prevent further damage that could entail additional costs or expenses. It is therefore important to note that some companies do not record this additional performance obligation at the exact time the service is provided, but rather when the customer purchased the good. In other words, performance obligations are not billed on time, causing an undervaluation of income, so the value presented in the financial statements is lower than what was actually earned from maintenance services.

In addition, this industry provides warranty coverage that undertakes to repair, replace and correct defective parts or components under conditions established in the contract. The problem with post-sale warranties lies in determining whether they constitute an independent performance obligation or are part of the benefits associated with the delivery of the good. IFRS 15 requires evaluating how closely this service is connected to the sale of the product: if the warranty is provided separately, the company accounts for it as a performance obligation; if, on the contrary, it is part of the contract, it must be accounted for in accordance with IAS 37 Provisions and Contingent Liabilities, and the company must estimate the warranty provision over the overall portfolio at that time.
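Purely as an illustration of this decision rule, and not as part of the paper's own method, the following hedged Python sketch separates a service-type warranty, which IFRS 15 treats as a distinct performance obligation, from an assurance-type warranty, which is provisioned under IAS 37. The dataclass fields and the simplified threshold logic are assumptions made for the example; real classification involves broader professional judgement.

```python
from dataclasses import dataclass

@dataclass
class Warranty:
    # Hypothetical attributes used only for this illustration.
    sold_separately: bool   # the customer can buy the coverage on its own
    extra_services: bool    # coverage goes beyond assuring agreed-upon specifications
    coverage_months: int

def classify_warranty(w: Warranty) -> str:
    """Return the simplified accounting treatment suggested by the guidance above."""
    if w.sold_separately or w.extra_services:
        # Service-type warranty: a distinct performance obligation, so part of the
        # transaction price is allocated to it and recognized as the service is rendered.
        return "separate performance obligation (IFRS 15)"
    # Assurance-type warranty: not a distinct obligation; recognize a provision
    # for the expected repair costs instead.
    return "provision for expected costs (IAS 37)"

# Example: a 36-month extended coverage sold as an optional add-on.
print(classify_warranty(Warranty(sold_separately=True, extra_services=True, coverage_months=36)))
```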
On the other hand, another circumstance in this industry deserves attention: the costs associated with the transfer of goods or services, which derive from the signing of a contract with customers. Companies usually fail to distinguish the date on which the contract is obtained from the date on which it is fulfilled, so these amounts are not always recorded in the account corresponding to the generating event, causing deficiencies in the accounting treatment of income. A clear example is sales commissions, items that are subject to accrual as long as the company expects to recover the disbursement in exchange for the consideration received. In fact, depending on whether the cost is incurred independently of obtaining the contract, it will be recorded as an expense; otherwise it will become part of an asset.

A further problem is the lack of a thorough evaluation by independent auditors of the moment of transfer of control of the vehicle sold and of the separation of the performance obligations embedded in the sale, because a series of promotions are offered as complementary services, such as scheduled future maintenance and warranties, as indicated in IFRS 15. Vehicle dealers must recognize income when they transfer control of the vehicle to the customer, that is, when the customer can use and obtain the benefits of the new vehicle, not at the time of billing as has been the practice for several years. This is especially relevant for transactions carried out at the end of one year and the beginning of the next, since they can affect the company's taxation.

To this end, the International Accounting Standards Board (IASB), successor to the International Accounting Standards Committee founded with the participation of professional bodies such as the American Institute of Certified Public Accountants (AICPA), issued the International Financial Reporting Standards [3]. This group of standards has become the conceptual framework for the preparation of financial statements; in this investigation, IFRS 15 Revenue from Contracts with Customers is analyzed, which establishes the accounting treatment of income through the transfer-of-control model: income is recognized once control of the performance obligations constituted by goods or services has been transferred, through five fundamental steps: identification of the contract, separation of the performance obligations, determination of the price, allocation of the price to the performance obligations, and accounting as the performance obligations are satisfied [4]. The standard states that the fundamental principle of the model is the fulfillment of performance obligations to customers, and that the company recognizes income representing the transfer of the goods or services involved to users "of an amount that reflects the consideration that the company has as an expectation" [5]. It also establishes the accounting of the incremental costs of obtaining a contract and of the costs directly related to fulfilling a contract, provided they are recognized in the periods to which they correspond without failing to observe the tax regulations [6].

In this sense, Castro, Melinc & Zegarra [7] mention the most important aspects to be considered when applying IFRS 15. In the first place, the ordinary revenue of the contract comprises the amount agreed in the initial contract together with variations in the contract work, claims and incentive payments, to the extent that they are likely to result in revenue and can be measured reliably. Finally, contract costs are those directly related to the specific contract and attributable to the general activity of the contract, in accordance with its terms.
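As a numerical illustration of the five-step model described above, and not of the paper's own case study, the hedged Python sketch below allocates the transaction price of a hypothetical bundled vehicle sale to each performance obligation in proportion to assumed standalone selling prices, which is the allocation principle of step four. All figures and names are invented for the example.

```python
# Hedged illustration of IFRS 15 steps 3-4: determine the transaction price and
# allocate it to the identified performance obligations in proportion to their
# standalone selling prices. All amounts are hypothetical.

def allocate_transaction_price(transaction_price: float,
                               standalone_prices: dict[str, float]) -> dict[str, float]:
    """Allocate the contract price across performance obligations
    proportionally to their standalone selling prices."""
    total_standalone = sum(standalone_prices.values())
    return {
        obligation: round(transaction_price * price / total_standalone, 2)
        for obligation, price in standalone_prices.items()
    }

# Steps 1-2 (assumed): one contract with two performance obligations,
# the vehicle itself and a two-year scheduled-maintenance plan.
standalone = {"vehicle": 24_000.00, "maintenance_plan": 1_500.00}

# Step 3 (assumed): the bundled contract is invoiced at a discount.
contract_price = 24_500.00

# Step 4: allocation in proportion to standalone selling prices.
print(allocate_transaction_price(contract_price, standalone))
# -> {'vehicle': 23058.82, 'maintenance_plan': 1441.18}

# Step 5 (narrative): revenue for "vehicle" is recognized when control is
# transferred; revenue for "maintenance_plan" as the services are rendered.
```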
Once income has been analyzed from the accounting and financial point of view, it is also important to consider the fiscal or tax point of view; for this purpose, the tax administration classifies income into taxable income and exempt income. Taxable income refers to income obtained in Ecuadorian territory by natural persons or by resident or non-resident companies; for tax purposes it is defined in article 8 of the Internal Tax Regime Law (LRTI) [8]. Exempt income, in turn, is that which is exhaustively listed in article 9 of the same law.

Alarcón & Martínez [10] determined the main differences and impacts, at a general level, between the local accounting standard, the tax standard and IFRS 15, drawing on the experience of Colombian companies. Among the most important results of this comparison, the three norms coincide in recognizing that income arises from an increase in equity unrelated to contributions from the partners. However, IFRS 15 extends recognition to the transfer of control, which has an impact on Income Tax.

The research most closely related to this topic is that of Cano & Gutiérrez [11], which collects information through a survey of international auditing firms and managers of automotive companies and presents a practical case applied to companies in the automotive sector using IFRS 15. Their results establish that the automotive sector will be affected financially in relation to services. Regarding the tax aspect, they point out that the picture is not clear, because the control body has not ruled on how to pay taxes under a lower tax base, given the differences between accounting and tax regulations.

For all the above considerations, this research aims to establish the correct recognition of ordinary income from contracts for the sale of vehicles and related services in accordance with IFRS 15, and its subsequent control through auditing, measuring the main changes in financial aspects and especially the tax impact that the standard will have on companies in the automotive sector. Thus, the hypotheses consist of determining whether the correct recognition of ordinary income from contracts for the sale of vehicles and related services, based on IFRS 15, affects the tax impact on companies in the Ecuadorian automotive sector, and whether the audit of income affects the financial, accounting and tax impact on these companies. To fulfill this purpose, the paper is divided into three sections: the first covers the methodology used, the types of research, instruments, population, sample and the methods used for data processing; the results obtained are presented in the second section; and the discussion and conclusions are detailed in the last section.

2 Methodology

This investigation follows a non-experimental design, since it is a study in which no intentional variations of the variables are made; that is, existing realities are taken as they are, without direct interaction that could alter the object of the research [12]. A mixed approach is used, since the data collected are measured through numbers, figures, indicators and perceptions, applied to a sample of 17 companies in the automotive sector, 61 company officials and 62 external auditors related to the sector. Two types of research are employed: descriptive research, to obtain information on the current state of the phenomena through observation and description in a particular field [13], from which the accounting behavior in the recognition, measurement and presentation of income in automotive companies is determined; and correlational research, whose purpose is to measure the degree of relationship between the different variables in order to understand their behavior [14], which allows the stated hypotheses to be verified through confirmatory factor analysis supported by a series of statistics.

To measure the independent variable, a survey designed on the basis of IFRS 15, Revenue from Contracts with Customers, and validated by two experts on the subject is used. It consists of three dimensions, Compliance, Financial and Tax, measured on a five-level Likert scale: 1 never, 2 sometimes, 3 frequently, 4 almost always and 5 always. With respect to the dependent variable, Tax Impact, secondary data on Taxable Income, Taxable Income and Caused Income Tax are compiled from the financial statements of the selected companies published on the portal of the Superintendence of Companies for the years 2018, 2019, 2020 and 2021. The data collected were analyzed using SPSS (Statistical Package for the Social Sciences) and AMOS version 24, through confirmatory factor analysis and path diagrams.
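To make the analysis pipeline concrete, the following is a minimal, hedged sketch of a confirmatory factor analysis in Python using the open-source semopy package rather than AMOS, the tool actually used by the authors. The item names, the three-factor measurement model and the input CSV file are assumptions introduced only for illustration, and the calls shown reflect semopy's core documented usage as understood here, not the authors' exact procedure.

```python
import pandas as pd
from semopy import Model

# Measurement model in lavaan-style syntax: each latent dimension of the
# IFRS 15 survey is reflected by its Likert items (item names are placeholders).
MODEL_DESC = """
Compliance =~ c1 + c2 + c3
Financial  =~ f1 + f2 + f3
Tax        =~ t1 + t2 + t3
"""

# Assumed input: one row per respondent, one column per survey item.
survey = pd.read_csv("ifrs15_survey.csv")

cfa = Model(MODEL_DESC)
cfa.fit(survey)

# Parameter estimates; standardized loadings comparable to AMOS's standardized
# regression weights can be derived from these, depending on the installed version.
print(cfa.inspect())
```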

3 Results

Regarding the measurement of the independent variable, revenue from ordinary activities from IFRS 15 contracts, a correlational analysis was carried out to verify the validity and reliability of the data collected with the instrument used, in this case the survey based on IFRS 15. For this purpose, Cronbach's Alpha was computed for each dimension and for the survey as a whole. The overall result exceeds the acceptable value of 0.70 established by Luque [15], which evidences the reflective character of the items and establishes their validity and statistical reliability; the results are presented in Table 1.

Table 1. Cronbach's Alpha

Dimensions     Cronbach's Alpha   No. of elements
Compliance     0.779              21
Financial      0.670              21
Tax            0.627              21
Full survey    0.768              21
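As a reproducibility aid only, and not something taken from the paper, the short sketch below shows how a Cronbach's Alpha of this kind can be computed from a matrix of Likert responses, following the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The response matrix is randomly generated and therefore hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's Alpha for a (respondents x items) matrix of Likert scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 61 respondents answering the 21 items of one dimension
# on the five-level Likert scale used in the survey. Purely random responses
# will give an alpha near zero; real survey data are needed for meaningful values.
rng = np.random.default_rng(seed=0)
responses = rng.integers(1, 6, size=(61, 21))
print(round(cronbach_alpha(responses), 3))
```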

Once the validity of the instrument had been verified, confirmatory factor analysis was applied to each of the dimensions using AMOS version 24 in order to establish the factor loadings, that is, the weight that each dimension has in the survey. As can be seen in Table 2, the three dimensions have significant standardized regression weights, since they exceed the threshold of 0.40 established by Hair, Anderson, Tatham, &


Table 2. Standardized regression weights

Standardized regression weights   Estimate
Compliance dimension