Engineering Agile Big-Data Systems (ISBN 8770220166, 9788770220163)

To be effective, data-intensive systems require extensive ongoing customisation to reflect changing user requirements, …


Language: English · Pages: 250 [436] · Year: 2018




Table of contents :
Front Cover
Half Title Page
Title Page
Copyright Page
List of Contributors
List of Figures
List of Tables
List of Abbreviations
Chapter 1 - Introduction
1.1 State of the Art in Engineering Data-Intensive Systems
1.1.1 The Challenge
1.2 State of the Art in Semantics-Driven Software Engineering
1.2.1 The Challenge
1.3 State of the Art in Data Quality Engineering
1.3.1 The Challenge
1.4 About ALIGNED
1.5 ALIGNED Partners
1.5.1 Trinity College Dublin
1.5.2 Oxford University – Department of Computer Science
1.5.3 Oxford University – School of Anthropology and Museum Ethnography
1.5.4 University of Leipzig – Agile Knowledge Engineering and Semantic Web (AKSW)
1.5.5 Semantic Web Company
1.5.6 Wolters Kluwer Germany
1.5.7 Adam Mickiewicz University in Poznań
1.5.8 Wolters Kluwer Poland
1.6 Structure
Chapter 2 - ALIGNED Use Cases – Data and Software Engineering Challenges
2.1 Introduction
2.2 The ALIGNED Use Cases
2.2.1 Seshat: Global History Databank
2.2.2 PoolParty Enterprise Application Demonstrator System
2.2.3 DBpedia
2.2.4 Jurion and Jurion IPG
2.2.5 Health Data Management
2.3 The ALIGNED Use Cases and Data Life Cycle. Major Challenges and Offered Solutions
2.4 The ALIGNED Use Cases and Software Life Cycle. Major Challenges and Offered Solutions
2.5 Conclusions
Chapter 3 - Methodology
3.1 Introduction
3.2 Software and Data Engineering Life Cycles
3.2.1 Software Engineering Life Cycle
3.2.2 Data Engineering Life Cycle
3.3 Software Development Processes
3.3.1 Model-Driven Approaches
3.3.2 Formal Techniques
3.3.3 Test-Driven Development
3.4 Integration Points and Harmonisation
3.4.1 Integration Points
3.4.2 Barriers to Harmonisation
3.4.3 Methodology Requirements
3.5 An ALIGNED Methodology
3.5.1 A General Framework for Process Management
3.5.2 An Iterative Methodology and Illustration
3.6 Recommendations
3.6.1 Sample Methodology
3.7 Sample Synchronisation Point Activities
3.7.1 Model Catalogue: Analysis and Search/Browse/Explore
3.7.2 Model Catalogue: Design and Classify/Enrich
3.7.3 Semantic Booster: Implementation and Store/Query
3.7.4 Semantic Booster: Maintenance and Search/Browse/Explore
3.8 Summary
3.8.1 Related Work
3.9 Conclusions
Chapter 4 - ALIGNED MetaModel Overview
4.1 Generic Metamodel
4.1.1 Basic Approach
4.1.2 Namespaces and URIs
4.1.3 Expressivity of Vocabularies
4.1.4 Reference Style for External Terms
4.1.5 Links with W3C PROV
4.2 ALIGNED Generic Metamodel
4.2.1 Design Intent Ontology (DIO)
4.3 Software Engineering
4.3.1 Software Life Cycle Ontology
4.3.2 Software Implementation Process Ontology (SIP)
4.4 Data Engineering
4.4.1 Data Life Cycle Ontology
4.5 DBpedia DataID (DataID)
4.6 Unified Quality Reports
4.6.1 Reasoning Violation Ontology (RVO) Overview
4.6.2 W3C SHACL Reporting Vocabulary
4.6.3 Data Quality Vocabulary
4.6.4 Test-Driven RDF Validation Ontology (RUT)
4.6.5 Enterprise Software Development (DIOPP)
4.6.6 Unified Governance Domain Ontologies
4.6.7 Semantic Booster and Model Catalogue Domain Ontology
  Model catalogue
  Booster
4.6.8 PROV
4.6.9 SKOS
4.6.10 OWL
4.6.11 RDFS
4.6.12 RDF
Chapter 5 - Tools
5.1 Model Catalogue
5.1.1 Introduction
5.1.2 Model Catalogue Architecture
  Searching and browsing the catalogue
  Editing the catalogue contents
  Administration
  Eclipse integration and model-driven development
  Semantic reasoning
  Automation and search
5.1.3 Semantic Booster
  Introduction
  Semantic Booster
5.2 RDFUnit
5.2.1 RDFUnit Integration
  JUnit XML report-based integration
  Custom Apache Maven-based integration
  The Shapes Constraint Language (SHACL)
  Comparison of SHACL to schema definition using RDFUnit test patterns
  Comparison of SHACL to auto-generated RDFUnit tests from RDFS/OWL axioms
  Progress on the SHACL specification and standardisation process
  SHACL support in RDFUnit
5.3 Expert Curation Tools and Workflows
5.3.1 Requirements
  Graduated application of semantics
  Graph–object mapping
  Object/document level state management and versioning
  Object-based workflow interfaces
  Integrated, automated constraint validation
  Result interpretation
  Deferred updates
5.3.2 Workflow/Process Models
  Process model 1 – linked data object creation
  Process model 2 – linked data object updates
  Process model 3 – updates to deferred updates
  Process model 4 – schema updates
  Process model 5 – validating schema updates
  Process model 6 – named graph creation
  Process model 7 – instance data updates and named graphs
5.4 Dacura Approval Queue Manager
5.5 Dacura Linked Data Object Viewer
5.5.1 CSP Design of Seshat Workflow Use Case
5.5.2 Specification
5.6 Dacura Quality Service
5.6.1 Technical Overview of Dacura Quality Service
5.6.2 Dacura Quality Service API
  Resource and interchange format
  URI
  Literals
  Literal types
  Quads
  POST variables
  Tests
  Required schema tests
  Schema tests
  Errors
  Endpoints
5.7 Linked Data Model Mapping
5.7.1 Interlink Validation Tool
  Interlink validation
  Technical overview
  Configuration via iv config.txt
  Configuration via external datasets.txt
  Execute the interlink validator tool
5.7.2 Dacura Linked Model Mapper
5.7.3 Model Mapper Service
  Modelling tool – creating mappings
  Importing semi-structured data with data harvesting tool
5.8 Model-Driven Data Curation
5.8.1 Dacura Quality Service Frame Generation
5.8.2 Frames for User Interface Design
5.8.3 Semi-Formal Frame Specification
5.8.4 Frame API Endpoints
Chapter 6 - Use Cases
6.1 Wolters Kluwer – Re-Engineering a Complex Relational Database Application
6.1.1 Introduction
6.1.2 Problem Statement
6.1.3 Actors
6.1.4 Implementation
  PoolParty notification extension
  rsine notification extension
  Results
  RDFUnit for data transformation
  PoolParty external link validity
  Statistical overview
6.1.5 Evaluation
  Productivity
  Quality
  Agility
  Measuring overall value
  Data quality dimensions and thresholds
  Model agility
  Data agility
6.1.6 JURION IPG
  Introduction
  Architecture
  Tools and features
  Implementation
  Evaluation
  Experimental evaluation
6.2 Seshat – Collecting and Curating High-Value Datasets with the Dacura Platform
6.2.1 Use Case
  Problem statement
6.2.2 Architecture
  Tools and features
6.2.3 Implementation
  Dacura data curation platform
  General description
  Detailed process
6.2.4 Overview of the Model Catalogue
  Model catalogue in the demonstrator system
6.2.5 Seshat Trial Platform Evaluation
  Measuring overall value
  Data quality dimensions and thresholds
6.3 Managing Data for the NHS
6.3.1 Introduction
6.3.2 Use Case
  Quality
  Agility
6.3.3 Architecture
6.3.4 Implementation
  Model catalogue
  NIHR health informatics collaborative
6.3.5 Evaluation
  Productivity
  Quality
  Agility
6.4 Integrating Semantic Datasets into Enterprise Information Systems with PoolParty
6.4.1 Introduction
6.4.2 Problem Statement
  Actors
6.4.3 Architecture
6.4.4 Implementation
  Consistency violation detector
  RDFUnit test generator
  PoolParty integration
  Notification adaptations
  RDFUnit validation on import
6.4.5 Results
  RDF constraints check
  RDF validation
  Improved notifications
  Unified governance
6.4.6 Evaluation
  Measuring overall value
  Data quality dimensions and thresholds
  Evaluation tasks
6.5 Data Validation at DBpedia
6.5.1 Introduction
6.5.2 Problem Statement
  Actors
6.5.3 Architecture
6.5.4 Tools and Features
6.5.5 Implementation
6.5.6 Evaluation
  Productivity
  Quality
  Agility
Chapter 7 - Evaluation
7.1 Key Metrics for Evaluation
7.1.1 Productivity
7.1.2 Quality
7.1.3 Agility
7.1.4 Usability
7.2 ALIGNED Ethics Processes
7.3 Common Evaluation Framework
7.3.1 Productivity
7.3.2 Quality
7.3.3 Agility
7.4 ALIGNED Evaluation Ontology
Appendix A – Requirements
About the Editors
Back Cover