Kotlin Design Patterns and Best Practices: Elevate your Kotlin skills with classical and modern design patterns, coroutines, and microservices (ISBN 9781805127765)

Dive deep into Kotlin design patterns, explore idiomatic functional programming, and master microservices with frameworks


English | 475 pages | 2024


Table of contents :
Cover
Copyright
Contributors
Table of Contents
Preface
Section 1: Classical Patterns
Chapter 1: Getting Started with Kotlin
Technical requirements
Basic language syntax and features
Multi-paradigm language
Understanding Kotlin code structure
Naming conventions
Packages
Comments
Hello Kotlin
No wrapping class
No arguments
No static modifier
A less verbose print function
No semicolons
Understanding types
Basic types
Type inference
Values
Comparison and equality
Declaring functions
Null safety
Reviewing Kotlin data structures
Lists
Sets
Maps
Mutability
Alternative implementations for collections
Arrays
Control flow
The if expression
The when expression
Working with text
String interpolation
Loops
The for-each loop
The for loop
The while loop
Classes and inheritance
Classes
Properties
Custom setters and getters
Interfaces
Abstract classes
Visibility modifiers
Inheritance
Data classes
Kotlin data classes versus Java records
Extension functions
Introduction to design patterns
What are design patterns?
Design patterns in real life
Design process
Using design patterns in Kotlin
Bringing it all together
Exercise
Example
Challenge
Summary
Questions
Chapter 2: Working with Creational Patterns
Technical requirements
Singleton
Factory Method
Static Factory Method
Abstract Factory
Casts
Subclassing
Smart casts
Variable shadowing
Collection of Factory Methods
Builder
Fluent setters
Default arguments
Prototype
Starting from a prototype
Summary
Questions
Chapter 3: Understanding Structural Patterns
Technical requirements
Decorator
Enhancing a class
The Elvis operator
The inheritance problem
Operator overloading
Caveats of the Decorator design pattern
Adapter
Adapting existing code
Adapters in the real world
Caveats of using adapters
Bridge
Bridging changes
Type aliasing
Constants
Composite
Secondary constructors
The varargs keyword
Nesting composites
Facade
Flyweight
Saving memory
Caveats of the Flyweight design pattern
Proxy
Lazy delegation
Summary
Questions
Chapter 4: Getting Familiar with Behavioral Patterns
Technical requirements
Strategy
Functions as first-class citizens
Iterator
State
Fifty shades of state
State of the nation
Command
Undoing commands
Chain of Responsibility
Interpreter
A language of your own
Call suffix
DSL Marker
Mediator
The middleman
Mediator caveats
Memento
Visitor
Writing a crawler
Template Method
Observer
Animal choir example
Summary
Questions
Section 2: Reactive and Concurrent Patterns
Chapter 5: Introducing Functional Programming
Technical requirements
Reasoning behind the functional approach
Immutability
Immutable collections
The pitfalls of a shared mutable state
Tuples
Functions as values
Higher-order functions in the standard library
The “it” notation
Closures
Pure functions
Currying
Memoization
Using expressions instead of statements
Pattern matching
Recursion
Summary
Questions
Chapter 6: Threads and Coroutines
Technical requirements
Looking deeper into threads
Thread safety
Thread synchronization mechanisms in Kotlin
Why are threads expensive?
Introducing coroutines
Starting coroutines
Jobs
Coroutines under the hood
Dispatchers
Switching dispatchers
Structured concurrency
The coroutineScope builder
Canceling a coroutine
Setting timeouts
Summary
Questions
Chapter 7: Controlling the Data Flow
Technical requirements
Reactive principles
The responsive principle
The resilient principle
The elastic principle
The message-driven principle
Higher-order functions on collections
Mapping elements
Filtering elements
Finding elements
Executing code for each element
Summing up elements
Getting rid of nesting
Exploring concurrent data structures
Sequences
Channels
Producers
Actors
Buffered channels
Flows
Buffering flows
Flow exceptions and error handling
Catching exceptions
Handling completion
Retrying
Optional retrying
Flow sharing
shareIn
stateIn
Cancellation
Flow builders
Conflating flows
Rate-limiting
Combining flows
Summary
Questions
Chapter 8: Designing for Concurrency
Technical requirements
Deferred Value
Barrier
Scheduler
Pipeline
Fan-Out
Fan-In
Racing
Unbiased Select
Mutex
Deadlocks
Sidekick
Summary
Questions
Section 3: Practical Application of Design Patterns
Chapter 9: Idioms and Anti-Patterns
Technical requirements
Scope functions
let function
apply function
also function
run function
with function
Type checks and casts
An alternative to the try-with-resources statement
Inline functions
Algebraic data types
Recursive functions
Reified generics
Using constants efficiently
Constructor overload
Dealing with nulls
Making asynchronicity explicit
Validating input
Sealed hierarchies versus enums
Context receivers
Summary
Questions
Chapter 10: Practical Functional Programming with Arrow
Technical requirements
Getting started with Arrow
Typed errors
Raise
Collecting failures
Smart constructors
Alternatives to Either and Raise
Result
Optional
Ior
Advantages of typed errors
High-level concurrency
Parallel operations
CyclicBarrier
Racing
Resource
Software transactional memory
Resilience
Retry and repeat
Circuit Breaker
Saga
Immutable data
Summary
Questions
Chapter 11: Concurrent Microservices with Ktor
Technical requirements
Getting started with Ktor
Routing requests
Testing the service
Connecting to other HTTP services
Connecting to a database
Configuration management in Ktor
Defining tables with Exposed
Creating new entities
Making the tests consistent
Fetching all entities
Fetching a single entity
Organizing routes in Ktor
Deleting an entity
Updating an entity
Achieving concurrency in Ktor
Summary
Questions
Chapter 12: Reactive Microservices with Vert.x
Technical requirements
Getting started with Vert.x
Routing requests
Verticles
Handling requests
Subrouting the requests
Testing Vert.x applications
Working with databases
Understanding Event Loop
Communicating with Event Bus
Sending JSON over Event Bus
Summary
Questions
Assessments
Other Books You May Enjoy
Index


Kotlin Design Patterns and Best Practices

Third Edition

Elevate your Kotlin skills with classical and modern design patterns, coroutines, and microservices

Alexey Soshin


Copyright © 2024 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Senior Publishing Product Manager: Denim Pinto
Acquisition Editor – Peer Reviews: Swaroop Singh
Project Editor: Janice Gonsalves
Senior Development Editor: Elliot Dallow
Copy Editor: Safis Editing
Technical Editor: Tejas Mhasvekar
Proofreader: Safis Editing
Indexer: Hemangini Bari
Presentation Designer: Pranit Padwal
Developer Relations Marketing Executive: Vipanshu Parashar

First published: June 2018
Second edition: December 2021
Third edition: April 2024

Production reference: 1230424

Published by Packt Publishing Ltd.
Grosvenor House
11 St Paul’s Square
Birmingham
B3 1RB, UK.

ISBN 978-1-80512-776-5

www.packt.com

Contributors

About the author

Alexey Soshin is a software architect who has worked in the industry for over 18 years. He started exploring Kotlin when the language was still in beta and has since become a passionate advocate of it. In addition to being a conference speaker and published writer, he is also the author of the Pragmatic System Design video course.

To Lula Leus, my constant source of inspiration. To my mentor, Lior Bar On. Without you, I would have never started writing. To Rick Houghton, for teaching me true empathy. – Alexey Soshin

About the reviewers

Lee Turner is an experienced software engineer with over 20 years’ experience working on large-scale software implementations. He has a passion for well-crafted, pragmatically tested software. He works as a senior software engineer at WireMock and has significant experience using Kotlin as a backend developer building microservices in the financial and developer tooling sectors. Lee is the founder of the Brighton Kotlin meetup group.

I would like to thank my partner Juliet and my son Oliver who always support me in my seemingly never-ending side projects.

Matthias Schenk is a passionate software engineer with more than 10 years’ experience in developing with Java and Kotlin, mainly in the Spring ecosystem. His focus is on writing idiomatic and clean code that is easy to understand and maintain. He previously worked as a technical reviewer on Kotlin Essentials and Advanced Kotlin, by Marcin Moskala, and now writes a blog covering topics surrounding the Kotlin ecosystem at https://medium.com/@inzuael.

I want to thank Alexey and Packt for giving me the opportunity to take part in the process of reviewing this new edition of Kotlin Design Patterns and Best Practices. The previous version of the book was one of the first Kotlin books I ever bought.

Learn more on Discord

Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

Table of Contents Preface 

xix

Section 1: Classical Patterns 

1

Chapter 1: Getting Started with Kotlin 

3

Technical requirements ������������������������������������������������������������������������������������������������������ 4 Basic language syntax and features ������������������������������������������������������������������������������������� 4 Multi-paradigm language • 5 Understanding Kotlin code structure ���������������������������������������������������������������������������������� 5 Naming conventions • 5 Packages • 6 Comments • 7 Hello Kotlin • 8 No wrapping class • 8 No arguments • 9 No static modifier • 9 A less verbose print function • 9 No semicolons • 10 Understanding types ��������������������������������������������������������������������������������������������������������� 10 Basic types • 10 Type inference • 11 Values • 12

viii

Table of Contents

Comparison and equality • 13 Declaring functions • 14 Null safety • 15 Reviewing Kotlin data structures �������������������������������������������������������������������������������������� 16 Lists • 16 Sets • 17 Maps • 18 Mutability • 18 Alternative implementations for collections • 19 Arrays • 20 Control flow ����������������������������������������������������������������������������������������������������������������������� 21 The if expression • 21 The when expression • 22 Working with text ������������������������������������������������������������������������������������������������������������� 23 String interpolation • 23 Loops �������������������������������������������������������������������������������������������������������������������������������� 25 The for-each loop • 25 The for loop • 26 The while loop • 27 Classes and inheritance ���������������������������������������������������������������������������������������������������� 28 Classes • 28 Properties • 29 Custom setters and getters • 30 Interfaces • 32 Abstract classes • 33 Visibility modifiers • 34 Inheritance ����������������������������������������������������������������������������������������������������������������������� 35 Data classes • 36 Kotlin data classes versus Java records • 37 Extension functions ���������������������������������������������������������������������������������������������������������� 37

Table of Contents

ix

Introduction to design patterns ���������������������������������������������������������������������������������������� 39 What are design patterns? • 39 Design patterns in real life • 39 Design process • 40 Using design patterns in Kotlin • 41 Bringing it all together ����������������������������������������������������������������������������������������������������� 41 Exercise • 41 Example • 42 Challenge • 42 Summary �������������������������������������������������������������������������������������������������������������������������� 42 Questions �������������������������������������������������������������������������������������������������������������������������� 42

Chapter 2: Working with Creational Patterns 

45

Technical requirements ���������������������������������������������������������������������������������������������������� 46 Singleton �������������������������������������������������������������������������������������������������������������������������� 46 Factory Method ������������������������������������������������������������������������������������������������������������������ 51 Static Factory Method • 53 Abstract Factory ���������������������������������������������������������������������������������������������������������������� 56 Casts • 58 Subclassing • 59 Smart casts • 60 Variable shadowing • 61 Collection of Factory Methods • 61 Builder ������������������������������������������������������������������������������������������������������������������������������ 63 Fluent setters • 67 Default arguments • 69 Prototype �������������������������������������������������������������������������������������������������������������������������� 70 Starting from a prototype • 72 Summary �������������������������������������������������������������������������������������������������������������������������� 73 Questions �������������������������������������������������������������������������������������������������������������������������� 74

Table of Contents

x

Chapter 3: Understanding Structural Patterns 

75

Technical requirements ���������������������������������������������������������������������������������������������������� 76 Decorator �������������������������������������������������������������������������������������������������������������������������� 76 Enhancing a class • 76 The Elvis operator • 77 The inheritance problem • 78 Operator overloading • 82 Caveats of the Decorator design pattern • 84 Adapter ����������������������������������������������������������������������������������������������������������������������������� 85 Adapting existing code • 88 Adapters in the real world • 89 Caveats of using adapters • 90 Bridge ������������������������������������������������������������������������������������������������������������������������������� 91 Bridging changes • 93 Type aliasing • 94 Constants • 94 Composite ������������������������������������������������������������������������������������������������������������������������� 96 Secondary constructors • 98 The varargs keyword • 98 Nesting composites • 99 Facade ����������������������������������������������������������������������������������������������������������������������������� 100 Flyweight ������������������������������������������������������������������������������������������������������������������������ 102 Saving memory • 104 Caveats of the Flyweight design pattern • 105 Proxy ������������������������������������������������������������������������������������������������������������������������������� 106 Lazy delegation • 107 Summary ������������������������������������������������������������������������������������������������������������������������ 108 Questions ������������������������������������������������������������������������������������������������������������������������ 108

Table of Contents

Chapter 4: Getting Familiar with Behavioral Patterns 

xi

111

Technical requirements ��������������������������������������������������������������������������������������������������� 112 Strategy ��������������������������������������������������������������������������������������������������������������������������� 112 Functions as first-class citizens • 114 Iterator ���������������������������������������������������������������������������������������������������������������������������� 117 State �������������������������������������������������������������������������������������������������������������������������������� 120 Fifty shades of state • 120 State of the nation • 123 Command ������������������������������������������������������������������������������������������������������������������������ 125 Undoing commands • 130 Chain of Responsibility ���������������������������������������������������������������������������������������������������� 131 Interpreter ����������������������������������������������������������������������������������������������������������������������� 135 A language of your own • 136 Call suffix • 140 DSL Marker • 140 Mediator �������������������������������������������������������������������������������������������������������������������������� 141 The middleman • 144 Mediator caveats • 146 Memento ������������������������������������������������������������������������������������������������������������������������ 146 Visitor ������������������������������������������������������������������������������������������������������������������������������ 149 Writing a crawler • 150 Template Method ������������������������������������������������������������������������������������������������������������� 153 Observer �������������������������������������������������������������������������������������������������������������������������� 157 Animal choir example • 158 Summary 
������������������������������������������������������������������������������������������������������������������������� 163 Questions ������������������������������������������������������������������������������������������������������������������������� 163

xii

Section 2: Reactive and Concurrent Patterns  Chapter 5: Introducing Functional Programming 

Table of Contents

165 167

Technical requirements ��������������������������������������������������������������������������������������������������� 167 Reasoning behind the functional approach �������������������������������������������������������������������� 168 Immutability ������������������������������������������������������������������������������������������������������������������� 169 Immutable collections • 169 The pitfalls of a shared mutable state • 170 Tuples • 172 Functions as values ���������������������������������������������������������������������������������������������������������� 173 Higher-order functions in the standard library • 175 The “it” notation • 176 Closures • 176 Pure functions • 177 Currying • 179 Memoization • 181 Using expressions instead of statements ������������������������������������������������������������������������� 183 Pattern matching • 183 Recursion ������������������������������������������������������������������������������������������������������������������������� 185 Summary ������������������������������������������������������������������������������������������������������������������������ 186 Questions ������������������������������������������������������������������������������������������������������������������������� 187

Chapter 6: Threads and Coroutines 

189

Technical requirements �������������������������������������������������������������������������������������������������� 190 Looking deeper into threads ������������������������������������������������������������������������������������������� 190 Thread safety • 192 Thread synchronization mechanisms in Kotlin • 195 Why are threads expensive? • 196 Introducing coroutines ��������������������������������������������������������������������������������������������������� 198 Starting coroutines • 199

Table of Contents

xiii

Jobs ��������������������������������������������������������������������������������������������������������������������������������� 201 Coroutines under the hood ��������������������������������������������������������������������������������������������� 203 Dispatchers �������������������������������������������������������������������������������������������������������������������� 208 Switching dispatchers • 210 Structured concurrency ��������������������������������������������������������������������������������������������������� 211 The coroutineScope builder • 214 Canceling a coroutine • 215 Setting timeouts • 218 Summary ������������������������������������������������������������������������������������������������������������������������� 219 Questions ������������������������������������������������������������������������������������������������������������������������ 220

Chapter 7: Controlling the Data Flow 

221

Technical requirements ��������������������������������������������������������������������������������������������������� 221 Reactive principles ���������������������������������������������������������������������������������������������������������� 222 The responsive principle • 222 The resilient principle • 223 The elastic principle • 223 The message-driven principle • 224 Higher-order functions on collections ���������������������������������������������������������������������������� 225 Mapping elements • 225 Filtering elements • 226 Finding elements • 226 Executing code for each element • 227 Summing up elements • 228 Getting rid of nesting • 229 Exploring concurrent data structures ����������������������������������������������������������������������������� 230 Sequences ����������������������������������������������������������������������������������������������������������������������� 230 Channels ������������������������������������������������������������������������������������������������������������������������� 232 Producers • 234 Actors • 235 Buffered channels • 235

Table of Contents

xiv

Flows ������������������������������������������������������������������������������������������������������������������������������ 237 Buffering flows • 241 Flow exceptions and error handling • 241 Catching exceptions • 242 Handling completion • 242 Retrying • 243 Optional retrying • 244 Flow sharing • 245 shareIn • 245 stateIn • 246 Cancellation • 248 Flow builders • 249 Conflating flows • 251 Rate-limiting • 252 Combining flows • 253 Summary ������������������������������������������������������������������������������������������������������������������������ 256 Questions ������������������������������������������������������������������������������������������������������������������������ 257

Chapter 8: Designing for Concurrency 

259

Technical requirements �������������������������������������������������������������������������������������������������� 260 Deferred Value ���������������������������������������������������������������������������������������������������������������� 260 Barrier ���������������������������������������������������������������������������������������������������������������������������� 262 Scheduler ������������������������������������������������������������������������������������������������������������������������ 264 Pipeline ��������������������������������������������������������������������������������������������������������������������������� 266 Fan-Out �������������������������������������������������������������������������������������������������������������������������� 268 Fan-In ����������������������������������������������������������������������������������������������������������������������������� 270 Racing ����������������������������������������������������������������������������������������������������������������������������� 272 Unbiased Select • 273 Mutex ����������������������������������������������������������������������������������������������������������������������������� 274 Deadlocks • 276

Table of Contents

xv

Sidekick ������������������������������������������������������������������������������������������������������������������������� 280 Summary ������������������������������������������������������������������������������������������������������������������������ 281 Questions ������������������������������������������������������������������������������������������������������������������������ 282

Section 3: Practical Application of Design Patterns  Chapter 9: Idioms and Anti-Patterns 

283 285

Technical requirements �������������������������������������������������������������������������������������������������� 286 Scope functions �������������������������������������������������������������������������������������������������������������� 286 let function • 286 apply function • 287 also function • 287 run function • 288 with function • 288 Type checks and casts ����������������������������������������������������������������������������������������������������� 289 An alternative to the try-with-resources statement �������������������������������������������������������� 290 Inline functions ��������������������������������������������������������������������������������������������������������������� 291 Algebraic data types �������������������������������������������������������������������������������������������������������� 292 Recursive functions �������������������������������������������������������������������������������������������������������� 295 Reified generics ��������������������������������������������������������������������������������������������������������������� 297 Using constants efficiently ���������������������������������������������������������������������������������������������� 299 Constructor overload ����������������������������������������������������������������������������������������������������� 300 Dealing with nulls ���������������������������������������������������������������������������������������������������������� 302 Making asynchronicity explicit ��������������������������������������������������������������������������������������� 304 Validating input �������������������������������������������������������������������������������������������������������������� 305 Sealed hierarchies versus enums ������������������������������������������������������������������������������������� 308 Context receivers 
������������������������������������������������������������������������������������������������������������ 310 Summary ������������������������������������������������������������������������������������������������������������������������� 314 Questions ������������������������������������������������������������������������������������������������������������������������� 315

Table of Contents

xvi

Chapter 10: Practical Functional Programming with Arrow 

317

Technical requirements �������������������������������������������������������������������������������������������������� 318 Getting started with Arrow ��������������������������������������������������������������������������������������������� 318 Typed errors �������������������������������������������������������������������������������������������������������������������� 318 Raise • 323 Collecting failures • 326 Smart constructors • 328 Alternatives to Either and Raise • 330 Result • 330 Optional • 331 Ior • 331 Advantages of typed errors • 333 High-level concurrency ��������������������������������������������������������������������������������������������������� 334 Parallel operations • 335 CyclicBarrier • 337 Racing • 338 Resource • 339 Software transactional memory ��������������������������������������������������������������������������������������� 341 Resilience ����������������������������������������������������������������������������������������������������������������������� 345 Retry and repeat • 346 Circuit Breaker ���������������������������������������������������������������������������������������������������������������� 348 Saga ��������������������������������������������������������������������������������������������������������������������������������� 351 Immutable data ������������������������������������������������������������������������������������������������������������� 353 Summary ������������������������������������������������������������������������������������������������������������������������� 357 Questions ������������������������������������������������������������������������������������������������������������������������ 359

Chapter 11: Concurrent Microservices with Ktor 

361

Technical requirements �������������������������������������������������������������������������������������������������� 362 Getting started with Ktor ������������������������������������������������������������������������������������������������ 362

Table of Contents

xvii

Routing requests ������������������������������������������������������������������������������������������������������������� 367 Testing the service • 368 Connecting to other HTTP services • 370 Connecting to a database ������������������������������������������������������������������������������������������������� 371 Configuration management in Ktor �������������������������������������������������������������������������������� 372 Defining tables with Exposed • 374 Creating new entities • 375 Making the tests consistent • 377 Fetching all entities • 378 Fetching a single entity • 381 Organizing routes in Ktor ������������������������������������������������������������������������������������������������ 383 Deleting an entity • 387 Updating an entity • 388 Achieving concurrency in Ktor ���������������������������������������������������������������������������������������� 390 Summary ������������������������������������������������������������������������������������������������������������������������ 390 Questions ������������������������������������������������������������������������������������������������������������������������� 391

Chapter 12: Reactive Microservices with Vert.x 

393

Technical requirements �������������������������������������������������������������������������������������������������� 394 Getting started with Vert.x ��������������������������������������������������������������������������������������������� 394 Routing requests ������������������������������������������������������������������������������������������������������������� 396 Verticles �������������������������������������������������������������������������������������������������������������������������� 397 Handling requests ����������������������������������������������������������������������������������������������������������� 398 Subrouting the requests • 399 Testing Vert.x applications �������������������������������������������������������������������������������������������� 400 Working with databases ������������������������������������������������������������������������������������������������� 403 Understanding Event Loop ��������������������������������������������������������������������������������������������� 407 Communicating with Event Bus ������������������������������������������������������������������������������������� 410 Sending JSON over Event Bus • 411 Summary ������������������������������������������������������������������������������������������������������������������������� 413 Questions ������������������������������������������������������������������������������������������������������������������������ 414

xviii

Table of Contents

Assessments 415

Other Books You May Enjoy 437

Index 439

Preface

Design patterns represent a compendium of best practices and replicable solutions for frequently encountered software development challenges. These patterns, forged through the collective experience of seasoned developers, offer proven solutions for specific, recurring issues in software design, adaptable to a range of situations. They furnish developers with a shared vocabulary to facilitate communication, collaboration, and code maintenance. Fundamentally, design patterns empower developers to write superior, more efficient, and maintainable code by diminishing the time spent devising solutions for common problems from the ground up.

Kotlin is a versatile programming language that embraces multiple programming paradigms and was crafted by JetBrains, who are renowned for creating widely-used integrated development environments, including IntelliJ IDEA.

The primary goal of this book is to introduce you to classical design patterns, whether you’re unfamiliar with them entirely or are seeking to implement them in Kotlin after using them with other languages. Kotlin, a contemporary language, naturally integrates many essential design patterns within its own syntax and core libraries, often eliminating the need to implement these patterns manually. Nonetheless, recognizing the design patterns embodied by specific language features remains valuable.

This updated edition focuses on the advancements in Kotlin up to Kotlin 2.0. With the ongoing evolution of the Kotlin ecosystem, this edition highlights some of the most thrilling features, such as context receivers, and significant libraries like Arrow, justifying the need for this new edition.

Who this book is for

This book is intended for developers who want to utilize design patterns they’ve learned from other programming languages in Kotlin to create robust, scalable, and easily maintainable applications.


What this book covers

Chapter 1, Getting Started with Kotlin, introduces you to the basic syntax of Kotlin and explains how design patterns can be applied effectively in Kotlin. The focus is not to cover the entire language vocabulary, but rather to provide a clear understanding of the fundamental concepts and idioms. Subsequent chapters will gradually introduce additional language features that are relevant to the design patterns being discussed. By the end of this chapter, you’ll have a strong foundation in the language and be ready to explore more advanced topics.

Chapter 2, Working with Creational Patterns, teaches you about classical creational patterns that are already embedded in the Kotlin language, as well as how to implement those that are not. These patterns focus on how and when to create objects. By mastering these patterns, you will be able to manage your objects more effectively, adapt well to changes, and write more maintainable code. The chapter covers various patterns, including Singleton and Builder.

Chapter 3, Understanding Structural Patterns, introduces classical structural design patterns that can be used to extend the functionality of objects and adapt them to changes. By learning these patterns, you will be able to write more robust and adaptable code. The chapter covers several patterns, including the widely used Decorator and Adapter patterns, which are essential for achieving greater flexibility and maintainability in software development.

Chapter 4, Getting Familiar with Behavioral Patterns, focuses on behavioral patterns in Kotlin, which deal with how objects interact with each other. You will learn how an object can exhibit different behaviors depending on the situation, how objects can communicate without direct knowledge of one another, and how to iterate over complex structures easily. By understanding these patterns, you can write more flexible and reusable code that is easier to maintain over time.
Chapter 5, Introducing Functional Programming, presents the fundamental principles of functional programming and their connection to Kotlin. Without delving into much new syntax, we will address key concepts like data immutability and treating functions as first-class values. While these principles were crucial for grasping Kotlin’s advantages in the earlier chapters, here, we examine their significance within the realm of functional programming. Instead of examining their application in implementing design patterns, our focus will be on their essential role in crafting code that is more concise, modular, and maintainable.

Chapter 6, Threads and Coroutines, centers on efficiently managing a multitude of requests in our application. Threads are traditionally the go-to for concurrency in contemporary applications; however, Kotlin offers coroutines as a superior, more efficient option.


We’ll examine the advantages of utilizing coroutines and demonstrate their implementation for processing a high rate of requests. Furthermore, we will explore structured concurrency in Kotlin, a feature that enhances the safety and efficiency of concurrent code. A comprehensive explanation of this concept will be provided, along with insights into its application for boosting the performance of our applications.

Chapter 7, Controlling the Data Flow, covers higher-order functions that can be used with collections and concurrent data structures. We’ll also introduce Channels and Flows, which provide concurrent and reactive solutions that leverage these higher-order functions.

Chapter 8, Designing for Concurrency, explores the most widely utilized concurrency design patterns, particularly those implemented using coroutines. These patterns enable the simultaneous management of multiple tasks. We will also discuss how coroutines synchronize their execution to avert race conditions and guarantee thread safety.

Chapter 9, Idioms and Anti-patterns, is dedicated to discussing the optimal and less optimal practices of coding in Kotlin. We’ll go over what the preferred coding style for Kotlin is and identify certain coding patterns that are not recommended. Once you have completed this chapter, you should be able to produce Kotlin code that is easier to read and maintain, as well as steer clear of typical coding mistakes.

Chapter 10, Practical Functional Programming with Arrow, leverages our knowledge of Functional Programming and Coroutines from the previous parts and puts it into action using the Arrow framework. Arrow aims to provide a uniform and idiomatic functional programming experience for Kotlin developers, making it accessible to all. We’ll explore the power of Arrow in writing concise, expressive, and maintainable code.
Throughout this chapter, we’ll highlight the key features and benefits of the Arrow framework, showcasing how it enhances development and enables the adoption of functional programming principles in Kotlin. With Arrow, developers can unlock the full potential of functional programming while capitalizing on the flexibility and robustness of Kotlin.

Chapter 11, Concurrent Microservices with Ktor, demonstrates how to put the knowledge learned in previous chapters into practice by building a microservice using Kotlin. We will use the Ktor framework, developed by JetBrains, the creators of the Kotlin programming language. By following the examples in this chapter, you will be able to create your own microservices using Kotlin and Ktor.


Chapter 12, Reactive Microservices with Vert.x, explores an alternative method for building microservices with Kotlin by utilizing the Vert.x framework. Vert.x is based on reactive design patterns. We will examine the advantages and disadvantages of each approach and analyze real code examples to determine when to use each one.

Assessments provides the answers to the questions that follow each chapter throughout the book.

To get the most out of this book

Before you dive into this book, it’s important to have a solid understanding of at least one programming language. Knowledge of classical design patterns in your language of choice will be beneficial. Although the book is comprehensible to programmers versed in different languages, some familiarity with Java would be an added advantage.

Download the example code files

The code bundle for the book is hosted on GitHub at https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. For example: “The let() function is useful for operating on nullable objects, executing code only if the object is non-null.”

A block of code is set as follows:

```kotlin
val clintEastwoodQuotes = mapOf(
    "The Good, The Bad, The Ugly" to "Every gun makes its own tune.",
    "A Fistful Of Dollars" to "My mistake: four coffins."
)
```


When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

```kotlin
data class JamesBondMovie(
    var actorName: String = "Sean Connery",
    var movieName: String = "From Russia with Love"
)
```

Any command-line input or output is written as follows:

```
> Node(value=1, left=Empty, right=Node(value=2, left=Node(value=3,
left=Empty, right=Empty), right=Empty))
```

Bold: Indicates a new term, an important word, or words that you see on the screen. For instance, words in menus or dialog boxes appear in the text like this. For example: “The State pattern extends this idea by allowing objects to transition seamlessly between various states.”

Warnings or important notes appear like this.

Tips and tricks appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: Email [email protected] and mention the book’s title in the subject of your message. If you have questions about any aspect of this book, please email us at questions@packtpub.com.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you reported this to us. Please visit http://www.packtpub.com/submit-errata, click Submit Errata, and fill in the form.


Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit http://authors.packtpub.com.

Share your thoughts

Once you’ve read Kotlin Design Patterns and Best Practices, Third Edition, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application. The perks don’t stop there, you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below:

https://packt.link/free-ebook/9781805127765

2. Submit your proof of purchase.
3. That’s it! We’ll send your free PDF and other benefits to your email directly.

Section 1
Classical Patterns

This section focuses on introducing the fundamental syntax of Kotlin and exploring the application of all traditional design patterns within Kotlin. Traditional design patterns address three primary challenges in system design: efficient object creation, effective encapsulation of object hierarchies, and enhancing the dynamism in object behavior. We will explore the design patterns that are inherently part of Kotlin and provide guidance on implementing those that are not.

This section comprises the following chapters:

•	Chapter 1, Getting Started with Kotlin
•	Chapter 2, Working with Creational Patterns
•	Chapter 3, Understanding Structural Patterns
•	Chapter 4, Getting Familiar with Behavioral Patterns

By the end of this section, you will have gained a solid understanding of Kotlin’s basics and the know-how to implement all the classical design patterns in Kotlin.

1

Getting Started with Kotlin

This chapter will primarily focus on the fundamentals of Kotlin syntax. It is crucial to have a strong understanding of the language before diving into the implementation of design patterns. We will also briefly explore the problems that design patterns aim to solve and explain why they should be used in Kotlin. This will be beneficial for those who are less familiar with the concept of design patterns. Even experienced engineers can gain interesting insights from this discussion.

It’s important to note that this chapter doesn’t aim to cover the entire range of the language’s vocabulary. Instead, its purpose is to introduce you to fundamental concepts and idioms. In the following chapters, we will gradually introduce more language features as they become relevant to the design patterns we examine.

The main topics covered in this chapter include:

•	Basic language syntax and features
•	Understanding Kotlin code structure
•	Understanding types
•	Reviewing Kotlin data structures
•	Control flow
•	Working with text blocks
•	Iterating using loops
•	Classes and inheritance
•	Extension functions
•	Introduction to design patterns


By the end of this chapter, you will have a solid understanding of the basics of Kotlin, which will serve as the foundation for the subsequent chapters.

Technical requirements

To follow the instructions in this chapter, you’ll need the following:

•	IntelliJ IDEA Community Edition (https://www.jetbrains.com/idea/download/)
•	OpenJDK 19 or higher (https://jdk.java.net/19/)

The code files for this chapter are available at https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter01.

IMPORTANT NOTE: For simple code snippets, there’s no requirement to write them in a file. You have the option to explore the language online, using platforms like https://play.kotlinlang.org/, or leverage a REPL and an interactive shell after installing Kotlin and executing the kotlinc command.

Basic language syntax and features

If you’re familiar with Java, C#, Scala, or other similar programming languages, you’ll find Kotlin’s syntax quite familiar. This is by design, as Kotlin aims to facilitate a smooth transition for those with experience in other languages. Beyond solving real-world problems with features like improved type safety, Kotlin also addresses shortcomings inherent in other languages, such as Java’s notorious Null Pointer Exception (NPE) issue or the absence of top-level functions. The language maintains a practical approach that is consistently applied throughout its design.

One of the biggest advantages of Kotlin is its ability to work seamlessly with Java. You can use both Java and Kotlin classes together in the same project and freely use any Java library. However, it’s worth noting that this interoperability isn’t without its challenges, such as dealing with nullable types. So, while the integration is robust, some effort is required to address these nuances.

In summary, Kotlin aims to achieve the following goals:

•	Pragmatic: It simplifies common tasks.
•	Readable: It finds a balance between being concise and making the code clear.
•	Easy to reuse: It helps adapt code to different scenarios.
•	Safe: It discourages writing error-prone code.
•	Interoperable: It allows the use of existing libraries and frameworks that are already very popular with Java developers, such as the Spring Framework, Hibernate, and jOOQ, to name just a few.

This chapter will explain how Kotlin accomplishes these goals.

Multi-paradigm language

Procedural, object-oriented, and functional paradigms are among the major paradigms in programming languages. Kotlin, being pragmatic, accommodates all of these paradigms. You are not forced to adhere to a single paradigm, as some other languages do. Kotlin incorporates classes and inheritance from the object-oriented approach and embraces higher-order functions from functional programming. However, Kotlin does not impose the necessity of encapsulating everything within classes. If desired, you can structure your code purely as a collection of procedures and structs.

Throughout the examples, you will witness the combination of these different paradigms to solve the discussed problems. Instead of providing comprehensive coverage of each topic from beginning to end, we will gradually build knowledge as we progress.
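As a small, self-contained sketch of this flexibility (the names here are our own, not from the book), the same greeting can be expressed in all three paradigms side by side:

```kotlin
// Object-oriented: state and behavior bundled in a class
class Greeter(private val name: String) {
    fun greet() = "Hello, $name"
}

// Procedural: a top-level function, no wrapping class required
fun greet(name: String) = "Hello, $name"

// Functional: a higher-order value built from the function above
val greetAll = { names: List<String> -> names.map(::greet) }

fun main() {
    println(Greeter("Kotlin").greet())       // Hello, Kotlin
    println(greetAll(listOf("Ada", "Alan"))) // [Hello, Ada, Hello, Alan]
}
```

None of the three styles is the blessed one; later chapters mix them freely depending on the pattern being implemented.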

Understanding Kotlin code structure

When you start programming in Kotlin, your first step is typically to create a new file, which usually has the .kt extension. In contrast to Java, Kotlin doesn’t strictly enforce a one-to-one relationship between the filename and the class name. While you have the flexibility to include multiple public classes within a single Kotlin file, it’s generally considered best practice to group only logically related classes together in one file. Keep in mind that doing so should not make the file excessively long or difficult to read. Additionally, packing multiple public classes into a single file can make it harder to search for specific functionality, as the project overview may not display all available classes.

Naming conventions

As a convention, if your file consists of a single class, it is recommended to name your file the same as your class.


When your file contains multiple classes, the filename should describe the shared purpose or theme of those classes. It is advisable to use UpperCamelCase when naming your files, in accordance with the Kotlin coding conventions (refer to https://kotlinlang.org/docs/coding-conventions.html).

In a Kotlin project, the main file is commonly named Main.kt. This file typically serves as the entry point or starting point of your application.

Packages

A package in Kotlin is a collection of files and classes that share a common purpose or domain. Packages provide a convenient way to organize your classes and functions under a unified namespace, often residing in the same folder. This concept is prevalent in Kotlin, as well as many other programming languages. To declare the package that a file belongs to, you use the package keyword—for example:

```kotlin
package me.soshin
```

When working with a mix of Java and Kotlin files, it’s important to ensure that Kotlin files adhere to the Java package naming rules, which are specified at https://docs.oracle.com/javase/tutorial/java/package/namingpkgs.html.

While Kotlin provides flexibility in organizing packages within directories and files, it is highly recommended to align with Java’s package naming rules for consistency and interoperability. Additionally, many IDEs will display warnings or suggestions to place a file in the correct directory according to the package declaration. This feature can be particularly helpful in maintaining proper organization when working with Kotlin and Java files together.

In purely Kotlin projects, it is permissible to omit common package prefixes from the folder structure. For instance, if all your Kotlin projects reside under the me.soshin package, and a specific part of your application deals with mortgages, you can directly place your files in the /mortgages folder, without following the nested structure me/soshin/mortgages, as required in Java. This is shown in the following table:


```
Java                          Kotlin

me                            mortgages
└── soshin                    └── Main.kt
    └── mortgages
        └── Main.java
```

Table 1.1: For the Main.kt file, there is no need to explicitly declare a package

IMPORTANT NOTE: From this point forward, we will utilize ellipsis notation (three dots) to signify that certain parts of the code have been omitted in order to emphasize the key aspects. However, you can always access the complete code examples by referring to the GitHub link provided for this chapter.

Comments

Moving forward, we will document parts of the code using Kotlin comments. Kotlin employs the use of // for single-line comments and /* */ for multiline comments, much like many other programming languages. Comments serve as a valuable means to provide additional context to fellow developers and yourself in the future.

With that in mind, let’s proceed to write our first Kotlin program and examine how Kotlin’s guiding principles are applied to it.

IMPORTANT NOTE: The Java examples are for familiarity and not to prove that Kotlin is superior to Java in any way.


Hello Kotlin

Every programming language book typically includes the famous Hello World example, and we won’t deviate from that tradition. To start exploring how Kotlin works, let’s put the following code in our Main.kt file and run it:

```kotlin
fun main() {
    println("Hello Kotlin")
}
```

When you run this code, for example, by clicking the Run button in IntelliJ IDEA, it will display the following output:

```
> Hello Kotlin
```

There are some notable differences in this code compared to the equivalent Java code that achieves the same result:

```java
class Main {
    public static void main(String[] args) {
        System.out.println("Hello Java");
    }
}
```

In the following sections, we will explore these distinctive features in more detail.

IMPORTANT NOTE: Many examples in this book assume that the code we provide is wrapped within the main function. If you don’t see the signature of the main function, it is likely that the code should be part of the main function. Alternatively, you can also run the examples in an IntelliJ scratch file, which provides a convenient way to execute code snippets.

No wrapping class

In languages like Java, C#, Scala, and many others, it is mandatory to encapsulate every function within a class for it to be executable. However, Kotlin introduced the concept of package-level functions. If a function does not require access to class properties, there is no need to wrap it within a class. It can be defined directly at the package level. It’s as straightforward as that.


We will explore package-level functions in more depth in the upcoming chapters, providing a comprehensive understanding of their usage and benefits.
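As a quick preview (the function name below is our own, not an example from the book), a package-level function sits directly in the file and is called without instantiating any class:

```kotlin
// A package-level function: declared directly in the file, no class needed
fun add(a: Int, b: Int) = a + b

fun main() {
    println(add(2, 3)) // 5
}
```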

No arguments

In command-line applications, arguments are provided as an array of strings to configure the application. In Java, it’s mandatory for the main() function to accept this array of arguments, at least up until Java 21:

```java
public static void main(String[] args) {
    ...
}
```

However, in Kotlin, the array of arguments is entirely optional. You can define the main() function with it:

```kotlin
fun main(args: Array<String>) {
    ...
}
```

But you can also omit it completely:

```kotlin
fun main() {
    ...
}
```

This flexibility in Kotlin allows you to create a main() function that doesn’t require any arguments.

No static modifier

In some languages, the static keyword is used to indicate functions within a class that can be called without creating an instance of that class. The main() function is a classic example of such usage.

In contrast, Kotlin gives you more flexibility. If a function doesn’t rely on any internal state or class properties, you can define it outside of any class. Kotlin doesn’t use the static keyword for functions, and this has been the case from Kotlin 1.0 to at least Kotlin 2.0.

It’s worth mentioning that while this approach enhances modularity and code reusability, there can be downsides. For example, when a function is separated from its logically related class, it might be harder to follow the code’s structure or maintain encapsulation.

A less verbose print function

Rather than using the verbose System.out.println() method in Java, which prints a string to the standard output, Kotlin offers a convenient alias called println(). This alias functions identically to System.out.println(), allowing you to output strings to the standard output in a concise manner.

It’s important to note that println() is an inline function, which means it gets replaced by its content (System.out.println()) in the generated code. This inline behavior makes it clear why we refer to it as an alias.


No semicolons

In languages like Java and many others, every statement or expression must be terminated with a semicolon—for instance:

```java
System.out.println("Semicolon =>");
```

However, Kotlin takes a pragmatic approach. It automatically infers during compilation where semicolons should be placed, sparing you the need to explicitly include them—for instance:

```kotlin
println("No semicolons! =>")
```

In Kotlin, most of the time, you won’t need to use semicolons in your code as they are considered optional, aligning with Kotlin’s emphasis on pragmatism and conciseness. This flexibility eliminates unnecessary clutter, allowing you to focus on the essential aspects of your code, which contributes to its readability and maintainability.

However, like many conveniences, there can be some downsides to this flexibility. While it makes coding easier, especially for experienced developers, it’s essential to consider readability for yourself and others who may work on your code. While Kotlin’s flexibility allows for concise code, it’s crucial to adhere to best practices and coding standards. For instance, even though semicolons are optional, using them in specific situations can enhance code readability, particularly when separating multiple statements on a single line or in enum declarations.

Ultimately, Kotlin provides the flexibility to write code efficiently, but it’s important to balance this with writing code that is clear and understandable, following industry best practices.
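To make those exceptions concrete (the enum below is our own illustration, not an example from the book): a semicolon is required to separate an enum’s constants from its members, and to separate two statements written on one line:

```kotlin
enum class Direction {
    NORTH, SOUTH, EAST, WEST; // this semicolon is mandatory

    fun opposite() = when (this) {
        NORTH -> SOUTH
        SOUTH -> NORTH
        EAST -> WEST
        WEST -> EAST
    }
}

fun main() {
    // Two statements on a single line also need a semicolon between them
    val d = Direction.NORTH; println(d.opposite()) // SOUTH
}
```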

Understanding types

Previously, we stated that Kotlin is a type-safe language. Now, let’s delve into Kotlin’s type system and compare it to what Java offers.

Basic types

In some languages, a distinction is made between primitive types and objects. Java, for instance, has the int type for primitive values and Integer for objects. The former is more memory-efficient, while the latter is more expressive due to its support for null values and additional methods.

However, Kotlin does not make such a distinction between primitives and objects as Java does. From a developer’s perspective, all types in Kotlin are treated equally, and you typically do not deal with primitives directly, which is a significant departure from Java. In Java, you often need to consider whether you are working with primitives or objects when writing code.
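A minimal sketch of this point, assuming we compile for the JVM: both declarations below use the same Int type, and the compiler alone decides on the representation (nullable numbers are the typical case where a boxed object is used behind the scenes):

```kotlin
fun main() {
    val distance: Int = 42        // typically compiled down to a primitive int
    val nullable: Int? = distance // nullable numbers are boxed behind the scenes
    println(distance + 1)         // 43
    println(nullable ?: 0)        // 42
}
```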


Nonetheless, this difference does not imply that Kotlin is less efficient than Java in this regard. The Kotlin compiler optimizes types behind the scenes, ensuring that performance is not compromised. Therefore, while you may not directly deal with primitives in Kotlin, there’s no need to be overly concerned about it from an efficiency perspective.

Most of the Kotlin types have similar names to their Java counterparts. The exceptions include Kotlin’s Int replacing Java’s Integer and Kotlin’s Unit replacing Java’s void. Listing all the types would be cumbersome, but here are some examples:

```
Type family    Example types        Example values
Numbers        Int, Long, Double    42, 6_000_000L, 3.14
Strings        String               "C-3 PO"
Booleans       Boolean              true, false
Characters     Char                 z, \n, \u263A
```

Table 1.2: Kotlin types

Type inference

Let’s declare our first Kotlin variable by extracting the string from our Hello Kotlin example:

```kotlin
fun main() {
    var greeting = "Hello Kotlin"
    println(greeting)
}
```

It’s important to note that nowhere in our code do we explicitly state that greeting is of the type String. Instead, the compiler determines the variable’s type. Unlike interpreted languages such as JavaScript, Python, or Ruby, the type of a variable in Kotlin is inferred only once during compilation. The Kotlin compiler infers the most specific type for a variable, so in this case, it infers that greeting is of type String, not Any. However, attempting to assign a value of a different type will result in an error in Kotlin, as shown in the following example:

```kotlin
fun main() {
    var greeting = "Hello Kotlin"
    greeting = 1 // Type mismatch: this line won't compile
}
```

However, this approach does not completely solve the problem. If a function receives an Optional as an argument, it is still possible to pass null and crash the program at runtime:

```java
void printLength(Optional<String> optional) {
    if (optional.isPresent()) {
        // ...
    }
}
```

```kotlin
fun subtract(x: Int): (Int) -> Int {
    return fun(y: Int): Int {
        return x - y
    }
}
```

Introducing Functional Programming

180

Here is the shorter form of the preceding code:

```kotlin
fun subtract(x: Int) = fun(y: Int): Int {
    return x - y
}
```

In the preceding example, we use single-expression syntax to return an anonymous function without the need to declare the return type or use the return keyword. And here it is in an even shorter form:

```kotlin
fun subtract(x: Int) = { y: Int -> x - y }
```

Now, an anonymous function is translated to a lambda, with the return type of the lambda inferred as well. We can invoke the curried function as follows:

```kotlin
println(subtract(50)(8)) // 42

val subtractFrom50 = subtract(50)
println(subtractFrom50(8)) // 42
```

Although not very useful by itself, it’s still an interesting concept to grasp. And if you’re a JavaScript developer looking for a new job, make sure you understand it fully, since it’s asked about in nearly every interview.

One real-world scenario where you might want to use currying is logging. A log function usually looks something like this:

```kotlin
enum class LogLevel {
    ERROR, WARNING, INFO
}

fun log(level: LogLevel, message: String) =
    println("$level: $message")
```

We could fix the log level by storing the function in a variable:

```kotlin
val errorLog = fun(message: String) {
    log(LogLevel.ERROR, message)
}
```


Notice that the errorLog function is easier to use than the regular log function because it accepts one argument instead of two. However, this raises a question: What if we don’t want to create all of the possible loggers ahead of time? In this case, we can use currying. The curried version of this code would look like this:

```kotlin
fun createLogger(level: LogLevel): (String) -> Unit {
    return { message: String ->
        log(level, message)
    }
}
```

Now, it’s up to whoever uses our code to create the logger they want:

```kotlin
val infoLogger = createLogger(LogLevel.INFO)
infoLogger("Log something")
```

Interestingly, this approach bears a strong resemblance to the Factory design pattern we discussed in Chapter 2, Working with Creational Patterns. The capabilities of a modern language like Kotlin reduce the need for creating multiple custom classes to achieve similar functionality. Now, let’s move on to another useful technique that can prevent us from repeatedly performing the same calculations.

Memoization

If our function always returns the same output for the same input, we can easily map its input to the output, caching the results in the process. This technique is called memoization. A common task when developing different types of systems or solving problems is finding a way to avoid repeating the same computation multiple times.

Let’s assume we receive multiple sets of integers, and for each set, we would like to print its sum:

```kotlin
val input = listOf(
    setOf(1, 2, 3),
    setOf(3, 1, 2),
    setOf(2, 3, 1),
    setOf(4, 5, 6)
)
```

Looking at the input, you can see that the first three sets are in fact equal – the difference is only in the order of the elements, so calculating the sum three times would be wasteful.

Introducing Functional Programming

The sum calculation can be easily described as a pure function:

fun sum(numbers: Set<Int>): Double {
    return numbers.sumOf { it.toDouble() }
}

This function is pure as it neither relies on nor modifies any external state. Consequently, it is entirely safe to substitute a call to this function with its previously returned value for the same input. We could store the results of a previous computation for the same set in a mutable map:

val resultsCache = mutableMapOf<Set<Int>, Double>()

To avoid creating too many classes, we could use a higher-order function that would wrap the result in the cache that we created earlier:

fun summarizer(): (Set<Int>) -> Double {
    val resultsCache = mutableMapOf<Set<Int>, Double>()
    return { numbers: Set<Int> ->
        resultsCache.computeIfAbsent(numbers, ::sum)
    }
}

Here, we use the method reference operator (::) to tell computeIfAbsent to call the sum() method whenever the input hasn’t been cached yet. Note that sum() is a pure function, while the function returned by summarizer() is not: it mutates its internal cache on the first call for each new input. But that’s exactly what we want in this case. Running the following code on the preceding input will invoke the sum function only twice:

val summarizer = summarizer()
input.forEach {
    println(summarizer(it))
}
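The caching pattern in summarizer() generalizes to any pure one-argument function. Here is a sketch of a generic `memoize` helper; the name and signature are our own illustration, not a stdlib function, and it assumes the wrapped function is pure:

```kotlin
// Wraps a pure one-argument function with a cache.
// `memoize` is an illustrative helper, not part of the stdlib.
fun <T, R> memoize(fn: (T) -> R): (T) -> R {
    val cache = mutableMapOf<T, R>()
    return { input -> cache.getOrPut(input) { fn(input) } }
}

fun main() {
    var calls = 0
    val square = memoize { n: Int ->
        calls++ // count how often the real computation runs
        n * n
    }
    println(square(4)) // 16, computed
    println(square(4)) // 16, served from the cache
    println(calls)     // 1
}
```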

The combination of immutable objects, pure functions, and closures provides us with a powerful tool for performance optimization. Just remember: nothing is free. We trade one resource, CPU time, for another resource, which is memory. And it’s up to you to decide which resource is more expensive in each case. Not only is the trade-off between CPU and memory important, but also the better readability/understandability for the reader of the code. The next topic we’ll discuss should help us with that further.


Using expressions instead of statements

A statement is a block of code that doesn’t return anything. An expression, on the other hand, returns a new value. Since statements produce no results, the only way for them to be useful is to mutate the state, whether that’s changing a variable, changing a data structure, or performing some kind of IO.

Functional programming tries to avoid mutating the state as much as possible. Theoretically, the more we rely on expressions, the more our functions will be pure, with all the benefits of functional purity. This also improves testability. We’ve used the if expression many times already, so one of its benefits should be clear: it’s less verbose and, for that reason, less error-prone than the if statement from other languages. Let’s now see an alternative to if statements, called pattern matching.

Pattern matching

The concept of pattern matching will seem like switch/case on steroids. We’ve already seen how the when expression can be used, which we explored in Chapter 1, Getting Started with Kotlin, so let’s briefly discuss why this concept is important for the functional paradigm. You may know that in Java, switch accepts only some primitive types, strings, or enums. Consider the following code, which is usually used to demonstrate how polymorphism is implemented in the language:

class Cat : Animal {
    fun purr(): String {
        return "Purr-purr";
    }
}

class Dog : Animal {
    fun bark(): String {
        return "Bark-bark";
    }
}

interface Animal


If we were to decide which of the functions to call, we would need to write code akin to the following:

fun getSound(animal: Animal): String {
    var sound: String? = null;
    if (animal is Cat) {
        sound = animal.purr();
    } else if (animal is Dog) {
        sound = animal.bark();
    }
    if (sound == null) {
        throw RuntimeException();
    }
    return sound;
}

This code checks at runtime which concrete type the animal is, so it knows which method to call. The method could be shortened by introducing multiple returns, but in real projects, multiple returns are usually considered bad practice. Since we don’t have a switch statement for classes, we need to use an if statement instead. Now, let’s compare the preceding code with the following Kotlin code:

fun getSound(animal: Animal): String = when (animal) {
    is Cat -> animal.purr()
    is Dog -> animal.bark()
    else -> throw RuntimeException("Unknown animal")
}

Since when is an expression, we avoided declaring the intermediate variable we previously had altogether. In addition, the code that uses pattern matching doesn’t need any type checks or casts. Now that we’ve learned how to replace imperative if statements with much more functional when expressions, let’s see how we can replace imperative loops in our code by using recursion.
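As an aside on the when expression: if Animal were declared as a sealed interface (a refinement not used in the example above), the compiler would know every possible subtype, making the when exhaustive, so the else branch and its runtime exception could be dropped entirely. A sketch:

```kotlin
sealed interface Animal

class Cat : Animal {
    fun purr() = "Purr-purr"
}

class Dog : Animal {
    fun bark() = "Bark-bark"
}

// No else branch needed: the compiler verifies the when is exhaustive
fun getSound(animal: Animal): String = when (animal) {
    is Cat -> animal.purr()
    is Dog -> animal.bark()
}

fun main() {
    println(getSound(Cat())) // Purr-purr
    println(getSound(Dog())) // Bark-bark
}
```

This moves the "unknown animal" failure from runtime to compile time: adding a new Animal subtype makes the when expression fail to compile until it is handled.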


Recursion

Recursion is a function invoking itself with new arguments. Many well-known algorithms, such as depth-first search, rely on recursion. Here is an example of a very inefficient function that uses recursion to calculate the sum of all the numbers in a given list:

fun sumRec(i: Int, sum: Long, numbers: List<Int>): Long {
    return if (i == numbers.size) {
        sum
    } else {
        sumRec(i + 1, numbers[i] + sum, numbers)
    }
}

We often try to avoid recursion due to the stack overflow errors that we may receive if our call stack is too deep. You can call this function with a list that contains a million numbers to demonstrate this:

val numbers = List(1_000_000) { it }
println(sumRec(0, 0, numbers)) // Crashes with StackOverflowError after around 7,000 calls

However, Kotlin supports an optimization called tail recursion. One of the great benefits of tail recursion is that it avoids the dreaded stack overflow exception. If there is only a single recursive call in our function, we can use that optimization. When the Kotlin compiler sees a tailrec function, it transforms the recursive calls into a loop during compilation. This means that instead of each call consuming stack space, the state is updated in each iteration of the loop, preserving stack space. Let’s rewrite our recursive function using a new keyword, tailrec, to avoid this problem:

tailrec fun sumRec(i: Int, sum: Long, numbers: List<Int>): Long {
    return if (i == numbers.size) {
        sum
    } else {
        sumRec(i + 1, numbers[i] + sum, numbers)
    }
}
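To see what the transformation buys us, here is a hand-written loop that is roughly what the compiler generates for the tailrec version; this is a conceptual sketch, not the actual bytecode:

```kotlin
// Roughly what the compiler generates for the tailrec version:
// the parameters become mutable locals, and each recursive call
// becomes one more pass through the loop.
fun sumLoop(numbers: List<Int>): Long {
    var i = 0
    var sum = 0L
    while (i != numbers.size) {
        sum += numbers[i]
        i++
    }
    return sum
}

fun main() {
    val numbers = List(1_000_000) { it }
    println(sumLoop(numbers)) // 499999500000, no stack overflow
}
```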


Now, the compiler will optimize our call and avoid the exception completely. However, this optimization doesn’t work if you have multiple recursive calls, such as in the merge sort algorithm. Let’s examine the following function, which is the sort part of the merge sort algorithm:

tailrec fun mergeSort(numbers: List<Int>): List<Int> {
    return when {
        numbers.size <= 1 -> numbers
        numbers.size == 2 -> {
            if (numbers[0] < numbers[1]) {
                numbers
            } else {
                listOf(numbers[1], numbers[0])
            }
        }
        else -> {
            val left = mergeSort(numbers.slice(0..numbers.size / 2))
            val right = mergeSort(
                numbers.slice((numbers.size / 2 + 1)..<numbers.size))
            merge(left, right)
        }
    }
}

Notice that there are two recursive calls instead of one. The Kotlin compiler will then issue the following warning: > "A function is marked as tail-recursive but no tail calls are found"

The Kotlin compiler is state-of-the-art, and you should rely on its suggestions.

Summary

By now, you should have a more comprehensive grasp of functional programming, its advantages, and how Kotlin tackles this paradigm. We’ve explored the ideas of immutability and pure functions, and how their integration leads to code that’s both easier to test and maintain. Of course, nothing comes without some trade-offs. One significant drawback is that it can lead to performance issues in some scenarios due to the creation of numerous intermediate objects and the potential for increased memory usage.


Additionally, the paradigm shift from imperative programming requires a learning curve and can be challenging to integrate with imperative codebases.

We covered how Kotlin supports closures, allowing a function to access variables from its surrounding function and thus preserve state between multiple runs. This facilitates techniques like currying and memoization, which let us fix some of a function’s arguments in advance and cache previously computed function values to avoid redundant calculations. We touched on Kotlin’s use of the tailrec keyword, enabling compiler optimizations for tail-recursive functions.

We discussed topics such as higher-order functions, the distinction between expressions and statements, and pattern matching. These features collectively contribute to creating code that’s not only easier to test but also less prone to concurrency-related issues. In the upcoming chapter, we’ll apply these concepts in a practical setting, exploring how reactive programming leverages the foundations of functional programming to develop scalable and robust systems.

Questions

1. What are higher-order functions?
2. What is the tailrec keyword in Kotlin?
3. What are pure functions?

Learn more on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://discord.com/invite/xQ7vVN4XSc

6

Threads and Coroutines

This chapter promises to be an exciting one as we take a deeper dive into the realm of concurrency in Kotlin. You may recall that in the previous chapter, we touched upon how our application could efficiently handle thousands of requests per second. To illustrate the importance of immutability, we introduced you to the concept of a race condition using two threads. In this chapter, we’ll extend that understanding and explore the following:

• Looking deeper into threads: How do threads work in Kotlin and what are the advantages and disadvantages of using them?
• Introducing coroutines: What are coroutines and how do suspend functions facilitate them?
• Starting coroutines: How do you launch a new coroutine and what are the different ways to do it?
• Jobs: Understand what jobs are in the context of coroutines and how they help manage concurrent operations.
• Coroutines under the hood: How does the Kotlin compiler handle coroutines and what happens at the bytecode level?
• Dispatchers: Learn about the role of dispatchers in deciding what thread a coroutine runs on.
• Structured concurrency: What is structured concurrency and how does it help in preventing resource leaks?

By the end of this chapter, you’ll have a solid grasp of Kotlin’s concurrency primitives and how to utilize them effectively in your applications.


Technical requirements

There are no additional requirements compared to the previous chapter. You can find the source code for this chapter here: https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter06.

Looking deeper into threads

Before delving into the technical details, let’s first understand what problems threads are designed to solve.

Modern computers and smartphones are commonly equipped with multi-core CPUs. This architecture enables the computer to perform multiple tasks in parallel. This is a dramatic improvement compared to 15 years ago, when single-core CPUs were the norm and dual-core CPUs were a luxury for tech enthusiasts. However, even with older, single-core CPUs, you weren’t limited to performing just one task at a time. You could listen to music while browsing the web, for example. How is that possible? The CPU employs a task-switching strategy, much like your brain does when multitasking. When you’re reading a book and listening to someone talk at the same time, your attention is divided between the two activities, switching back and forth.

Although modern CPUs can handle multiple requests simultaneously, consider a scenario where there’s a surge of 10,000 requests per second. Given that you don’t have 10,000 CPU cores, it’s not feasible to process all these requests in parallel. Instead, they can be processed concurrently.

A process is a self-contained execution unit with its own memory space, representing an executing computer program. Processes are independent, requiring specific techniques for inter-process communication, and can house multiple threads. A thread, in contrast, is the smallest execution unit within a process capable of running independently. It executes a sequence of instructions as part of the larger process. Threads within the same process share resources and memory, facilitating easier communication and data sharing. However, this shared environment necessitates careful management to prevent data corruption.

In the Java virtual machine (JVM), threads serve as the primary units of concurrency, allowing code to execute concurrently to maximize CPU core utilization. Lighter than processes, threads can be generated in large numbers by a single process. While thread-based data sharing is simpler compared to processes, it also introduces unique challenges that will be examined later.
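As a quick sanity check of how much true parallelism your machine offers, you can ask the JVM for its core count; the output varies per machine:

```kotlin
fun main() {
    // Number of CPU cores the JVM can schedule truly parallel work on;
    // anything beyond this count is concurrency, not parallelism
    println(Runtime.getRuntime().availableProcessors())
}
```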


Now that we’ve established the basics, let’s learn how to create threads in Java. We’ll start with a simple example where two threads each print a series of numbers:

for (int t = 0; t < 2; t++) {
    int finalT = t;
    new Thread(() -> {
        for (int i = 0; i < 100; i++) {
            System.out.println("T" + finalT + ": " + i);
        }
    }).start();
}

The output will look something like this:

> ...
> T0: 12
> T0: 13
> T1: 60
> T0: 14
> T1: 61
> T0: 15
> T1: 62
> ...

Note that the output will vary between executions and that at no point is it guaranteed to be interleaved. The same code in Kotlin would look as follows:

repeat(2) { t ->
    thread {
        for (i in 1..100) {
            println("T$t: $i")
        }
    }
}

In Kotlin, there’s less boilerplate because there’s a function that helps us create a new thread. Notice that, unlike Java, we don’t need to call start() to launch the thread. It starts by default.


If we would like to postpone it for later, we can set the start parameter to false:

val t = thread(start = false) {
    ...
}
// Later
t.start()

Daemon threads, a feature derived from Java, are specialized for running non-essential background tasks. Distinct from regular threads, daemon threads don’t hinder the JVM from shutting down. The JVM terminates once all non-daemon threads finish their tasks, consequently ending any ongoing daemon threads. In Java, the process of creating and starting a daemon thread involves a few steps and can be somewhat verbose. Typically, you create a thread, designate it as a daemon by calling setDaemon(true) on it, and then start it. Kotlin streamlines this process, offering a more straightforward approach to creating and starting daemon threads. This makes it easier to implement non-critical background tasks without affecting the application’s overall life cycle:

fun main() {
    thread(isDaemon = true) {
        for (i in 1..1_000_000) {
            println("daemon thread says: $i")
        }
    }
    Thread.sleep(10)
}

Observe that despite being instructed to print numbers up to one million, this thread only manages to print a few hundred. This behavior is due to its nature as a daemon thread. As soon as the parent thread terminates, all daemon threads are also stopped.

Thread safety

Thread safety is a vast topic with a myriad of nuances; it’s an area so complex that numerous books have been dedicated to it. One of the challenges of dealing with concurrency is that bugs resulting from a lack of thread safety can be incredibly elusive. They may manifest only under specific conditions, such as when multiple threads are competing for the same resource, making them hard to reproduce and diagnose.


Since this book focuses on Kotlin rather than the broad topic of thread safety, we’ll merely touch upon the basics. However, if you’re keen on diving deep into thread safety in the JVM environment, I highly recommend the book Java Concurrency in Practice by Brian Goetz as an excellent resource. To illustrate some of the challenges around thread safety, consider a straightforward example where 100 threads each attempt to increment a shared counter 1,000 times, for 100,000 increments in total. To ensure that all the threads have completed their tasks before we examine the final value of the counter, we’ll use a concurrency utility known as CountDownLatch.

CountDownLatch is a thread synchronization mechanism that lets one or more threads wait until a series of operations in other threads are completed. It is initialized with a specific count and utilizes two primary methods: countDown() to decrement the count each time an operation is completed and await() for threads to wait until the count reaches zero. When the count hits zero, indicating the completion of all required operations, the waiting threads are released to proceed. This tool is effective in ensuring tasks don’t start until their necessary prerequisites are fulfilled:

fun main() {
    var counter = 0
    val latch = CountDownLatch(100_000)
    repeat(100) {
        thread {
            repeat(1000) {
                counter++
                latch.countDown()
            }
        }
    }
    latch.await()
    println("Counter $counter")
}

The reason this code doesn’t print the correct number is that we introduced a data race: the ++ operation is not atomic. The more threads that try to increment our counter, the greater the chance of a data race.


IMPORTANT NOTE: An atomic operation completes in a single, indivisible step, making it appear instantaneous to other threads. These threads can only see either the operation’s complete outcome or the original state, with no intermediate stages. In contrast, when non-atomic operations are performed on a shared variable by multiple threads, simultaneous modifications by these threads can result in an outcome dependent on thread scheduling. This unpredictability in the final state is termed a race condition, a concept briefly touched upon in the previous chapter.
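To make the non-atomicity concrete, here is counter++ spelled out as the three separate steps it compiles down to; a second thread can interleave between any two of them and lose an update (a conceptual sketch):

```kotlin
var counter = 0

// counter++ is really three separate steps; another thread can
// run between any two of them, overwriting an increment
fun incrementNonAtomically() {
    val current = counter  // 1. read the shared value
    val next = current + 1 // 2. compute the new value
    counter = next         // 3. write it back
}

fun main() {
    incrementNonAtomically()
    println(counter) // 1, correct only because this run is single-threaded
}
```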

Unlike Java, there’s no synchronized keyword in Kotlin. The reason for this is that Kotlin designers believe that a language shouldn’t be tailored to a particular concurrency model. Instead, there’s a synchronized function we can use:

fun main() {
    var counter = 0
    val latch = CountDownLatch(100_000)
    repeat(100) {
        thread {
            repeat(1000) {
                synchronized(latch) {
                    counter++
                    latch.countDown()
                }
            }
        }
    }
    latch.await()
    println("Counter $counter")
}

Now, our code prints 100,000, as expected.
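Locking is not the only fix. The JDK also ships atomic primitives in java.util.concurrent.atomic, so the same example can drop the synchronized block entirely by making the increment itself atomic:

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.atomic.AtomicInteger
import kotlin.concurrent.thread

fun main() {
    val counter = AtomicInteger(0)
    val latch = CountDownLatch(100_000)
    repeat(100) {
        thread {
            repeat(1000) {
                counter.incrementAndGet() // atomic read-modify-write, no lock
                latch.countDown()
            }
        }
    }
    latch.await()
    println("Counter ${counter.get()}") // Counter 100000
}
```

For a simple counter, the atomic version is typically both simpler and faster than locking, since it relies on hardware compare-and-swap instead of mutual exclusion.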


IMPORTANT NOTE: To summarize, thread safety is the characteristic of code or data structures that ensures correct and predictable operations in a multi-threaded environment, where multiple threads operate concurrently. It ensures that shared data remains consistent and uncorrupted despite simultaneous access by different threads. This is achieved by either employing synchronization mechanisms (such as locks, mutexes, and semaphores) to control access to critical code sections or by designing code to be stateless or using immutable objects. Thread safety is vital for ensuring reliable and predictable behavior in programs with concurrent thread execution.

Thread synchronization mechanisms in Kotlin

In Java, the synchronized keyword is used to create blocks or methods that only one thread can access at a time, guarding against concurrency issues like data corruption and race conditions in multithreaded programs. Synchronized methods lock on the object they belong to, with behavior varying based on whether the object is static or an instance of a class. While crucial for maintaining data consistency, excessive use of synchronized can reduce performance and risk deadlocks. Consequently, modern Java practices often favor more adaptable concurrency tools like those in the java.util.concurrent package and the Lock interface.

The volatile keyword signals that a variable is subject to concurrent access and modification by multiple threads. When a variable is marked as volatile, it ensures that every read and write operation on that variable happens directly in the main memory. This direct interaction with the main memory guarantees immediate visibility of the variable’s changes to all threads, thereby enhancing thread safety by ensuring that threads always access the most current state of the variable.

If you miss the synchronized methods from Java, there’s the @Synchronized annotation in Kotlin. Java’s volatile keyword is likewise replaced by the @Volatile annotation. The annotations are then compiled to the corresponding Java keyword.
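A minimal sketch of the @Synchronized annotation in use; the Counter class here is our own illustration, not from the original text:

```kotlin
import kotlin.concurrent.thread

class Counter {
    var value = 0
        private set

    // Compiles to a synchronized method on the JVM:
    // only one thread can execute it at a time
    @Synchronized
    fun increment() {
        value++
    }
}

fun main() {
    val counter = Counter()
    val threads = List(100) {
        thread {
            repeat(1000) { counter.increment() }
        }
    }
    threads.forEach { it.join() }
    println(counter.value) // 100000
}
```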


The following table shows us an example of this comparison:

Java:   synchronized void oneThreadAtATime()
Kotlin: @Synchronized fun oneThreadAtATime()

Java:   volatile int sharedByMultipleThreads = 0;
Kotlin: @Volatile var sharedByMultipleThreads: Int = 0

Table 6.1: Comparison between Java and Kotlin (synchronized and volatile methods)

The reason @Synchronized and @Volatile are annotations and not keywords in Kotlin is that Kotlin is designed to be a multi-platform language. While concepts like synchronized methods and volatile variables are essential in the context of the JVM, they might not have the same significance or behavior in other Kotlin target platforms, such as JavaScript or native code. By using annotations, Kotlin allows developers to specify platform-specific behavior when needed, making it a versatile language that can be used across different platforms without sacrificing compatibility or functionality. To understand the benefits of using coroutines, we first need to understand why threads are expensive. Let’s explore that in the following section.

Why are threads expensive?

Creating a new thread comes at a cost, as each thread requires its own memory stack. What if we simulate some work within each thread by putting it to sleep? In the following code snippet, we’ll try to create 10,000 threads, each sleeping for a relatively short duration:

val counter = AtomicInteger()
val threads: List<Thread> = try {
    List(10_000) {
        thread {
            Thread.sleep(1000)
            counter.incrementAndGet()
        }
    }


} catch (oome: OutOfMemoryError) {
    println("Spawned ${counter.get()} threads before crashing")
    exitProcess(-42)
}

Each thread needs to have some memory allocated for its stack. Creating such a large number of threads can lead to extensive communication with your operating system (OS) and substantial memory usage. We aim to detect whether we’ve exhausted the available memory by catching the relevant exception. Depending on your OS, this could result in either an OutOfMemoryError or a severe slowdown of the entire system. Let’s wait for all the threads to complete and print how much memory they consumed if they didn’t crash:

threads.forEach { it.join() }
println(
    "Finished without running Out of Memory consuming ${
        (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory()) / 1024 / 1024
    }Mb"
)

Certainly, there are ways to control the number of concurrently executing threads using the Executors API. This API has been available since Java 5, so you may already be familiar with it. With this API, you can create a thread pool of a specified size. Experiment with setting the pool size to values like 1, the number of CPU cores on your machine, 100, or 2000, and observe the outcomes. The following code shows this being set to 100: val pool = Executors.newFixedThreadPool(100)

Now, we would like to submit a new task. We can do this by calling pool.submit():

val counter = AtomicInteger(0)
val start = System.currentTimeMillis()
for (i in 1..10_000) {
    pool.submit {
        // Do something
        counter.incrementAndGet()
        // Simulate wait on IO
        Thread.sleep(100)
        // Do something again
        counter.incrementAndGet()
    }
}

By incrementing the counter variable once before sleep and once after, we are simulating some business logic. For instance, this could represent preparing some JSON data and then parsing the response, while the sleep operation simulates a network request, often involving a wait for a server response. To ensure that the thread pool terminates, and to allow it 20 seconds to do so, we can include the following lines of code:

pool.awaitTermination(20, TimeUnit.SECONDS)
pool.shutdown()
println("Took me ${System.currentTimeMillis() - start} millis to complete ${counter.get() / 2} tasks")

It took us 20 seconds to complete this task because a new task cannot begin until the previous tasks wake up and finish their jobs. This situation illustrates what happens in a multithreaded system that lacks sufficient concurrency. In the next section, we’ll explore how coroutines aim to address this problem.

Introducing coroutines

Kotlin introduces coroutines alongside Java’s threading model, offering a lightweight alternative. These coroutines come with various advantages over traditional threads. They are more efficient, reducing resource consumption by enabling efficient multiplexing. Multiplexing, in the context of coroutines, refers to the ability to handle multiple tasks concurrently within a single thread. Unlike traditional threading, where each task might require a separate thread, multiplexing allows these tasks to share threads more efficiently. This is achieved through the suspension and resumption of coroutine executions.

Coroutines follow structured concurrency, simplifying task management and preventing resource leaks. They support suspending and resuming execution, ideal for non-blocking IO operations, and enhance code responsiveness.


Additionally, they offer built-in cancelation and streamlined error handling, making code more predictable. Coroutines make asynchronous code resemble sequential code, improving readability and integration with Java libraries.

It’s important to note that coroutines are not a native part of the language; they are an external library provided by JetBrains. Therefore, to use them, we must specify their inclusion in our Gradle configuration file, typically named build.gradle.kts:

dependencies {
    ...
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.7.3")
}

IMPORTANT NOTE: By the time you read this book, the version of the coroutines library may be greater than 1.7. Please make sure you’re using the most up-to-date version.

First, we will compare starting a new thread and starting a new coroutine.

Starting coroutines

We learned how to start a new thread in Kotlin in the Looking deeper into threads section. Now, let’s initiate a new coroutine instead. We will replicate a similar example to what we did with threads. Each coroutine will increment a counter, simulate some IO by sleeping for a while, and then increment the counter again:

val latch = CountDownLatch(10_000)
val c = AtomicInteger()
val start = System.currentTimeMillis()
for (i in 1..10_000) {
    with(GlobalScope) {
        launch {
            c.incrementAndGet()
            delay(100)
            c.incrementAndGet()
            latch.countDown()
        }
    }
}
latch.await(10, TimeUnit.SECONDS)
println("Executed ${c.get() / 2} coroutines in ${System.currentTimeMillis() - start}ms")

To initiate a coroutine, we require a scope. You can think of a scope as a thread pool, but for coroutines. The most basic scope available to you is called GlobalScope. Once we’ve obtained the scope, we can start a new coroutine by using the launch() function provided by that scope. It’s important to note that this function is simply another function and not a language construct. A key aspect to note is the usage of the delay() function, simulating IO-bound work like database queries or network data fetching. While delay() resembles the Thread.sleep() method in putting the current coroutine on hold, it differs significantly in its impact. sleep() pauses the entire thread, but delay(), marked with the suspend keyword (to be discussed in the Jobs section), only suspends the specific coroutine. This allows other coroutines on the same thread to continue operating, unlike Thread.sleep(), which halts all activities on the thread. Thus, while Thread.sleep() affects the entire thread, delay() targets only the individual coroutine, maintaining the efficiency. Upon running this code, you’ll observe that using coroutines completes the task in approximately 200 ms, while using threads takes around 20 seconds or results in a memory overload. Remarkably, this improvement doesn’t require substantial code alterations. The efficiency gain stems from the high concurrency of coroutines, which can be suspended without blocking their host thread. This advantage is crucial as it enables more work to be done with fewer threads, which are resource intensive, demonstrating the effectiveness of coroutines in resource utilization and task execution. If you run this code in your IntelliJ IDEA, you may see that GlobalScope is marked as a delicate API. This means that GlobalScope shouldn’t be used in real-world projects unless the developer thoroughly understands how it works under the hood. Otherwise, it may lead to unintended resource leaks. 
We’ll explore better ways of launching coroutines later in this chapter. While we’ve seen that coroutines offer significantly better concurrency compared to threads, they are not magic. Now, let’s explore another way of starting a coroutine and discuss some of the issues coroutines may still face. The launch() function starts a coroutine that doesn’t produce a return value, akin to a void function. It yields a Job, essentially a wrapper for Unit, indicating no result.


On the other hand, when a function needs to return a value, the async() function is used. It launches a coroutine too, but rather than yielding a Job, it returns a Deferred<T>, with T representing the type of the expected result. Deferred<T> is a subtype of Job, serving as a placeholder for the eventual outcome. This distinction introduces a discussion about the purpose and utility of these wrapper types in Kotlin’s coroutine framework.

Jobs

The result of running an asynchronous task is referred to as a job. Just as the Thread object represents an actual OS thread, the Job object represents an actual coroutine. For instance, consider the following function that initiates a coroutine to generate a universally unique identifier (UUID) asynchronously and returns it:

fun fastUuidAsync() = GlobalScope.async {
    UUID.randomUUID()
}

However, if we execute this code from our main method, it won’t print the expected UUID value. Instead, it will produce a result similar to the following: > DeferredCoroutine{Active}

The object returned from a coroutine is known as a job. Now, let’s explore what a job is and how to use it correctly. To illustrate this concept, consider the following code snippet:

fun main() {
    runBlocking {
        val job: Deferred<UUID> = fastUuidAsync()
        println(job.await())
    }
}

A job has a simple life cycle and can be in one of the following states:

• New: Created but not started yet.
• Active: Just created by the launch() function, for example. This is the default state.
• Completed: Everything went well.
• Cancelled: Something went wrong.

Two more states are relevant to jobs that have child jobs:

• Completing: Waiting to finish running child jobs before completing.
• Canceling: Waiting to finish running child jobs before canceling.

The job that we confused with its value is in the active state, meaning that it hasn’t finished computing our UUID yet. A job that has a value is known as being Deferred:

val job: Deferred<UUID> = fastUuidAsync()

We’ll discuss the Deferred value in more detail in Chapter 8, Designing for Concurrency. To wait for a job to complete and retrieve the actual value, we can use the await() function:

val job: Deferred<UUID> = fastUuidAsync()
println(job.await())

However, this code won’t compile as it requires a coroutine context. To fix this, wrap the code in a runBlocking block:

fun main() {
    runBlocking {
        val job: Deferred<UUID> = fastUuidAsync()
        println(job.await())
    }
}

The runBlocking function blocks the main thread until all coroutines within its scope have completed. It serves as a bridge between standard code and coroutine-based code. Alternatively, you could modify the main function to be a suspending function, which is another viable approach:

suspend fun main() {
    val job: Deferred<UUID> = fastUuidAsync()
    println(job.await())
}

IMPORTANT NOTE: For the sake of conciseness, we may omit runBlocking in some examples in this chapter. Full working examples can always be found in this book’s GitHub repository.

Chapter 6

203

The Job object also offers other useful methods, which we’ll discuss in the following sections.

Coroutines under the hood

We've highlighted some key facts about coroutines several times:

- Coroutines are akin to lightweight threads. They consume fewer resources compared to regular threads, enabling the creation of more concurrent tasks.
- Unlike traditional threads, which block the entire thread while waiting for an operation to complete, coroutines suspend themselves, allowing the underlying thread to execute other tasks in the meantime.

But how exactly do coroutines work? Let's explore the mechanics using an example:

```kotlin
class Blocking {
    companion object {
        fun profile(id: String): Profile {
            val bio = fetchBioOverHttp(id)       // takes 1s
            val picture = fetchPictureFromDB(id) // takes 100ms
            val friends = fetchFriendsFromDB(id) // takes 500ms
            return Profile(bio, picture, friends)
        }

        private fun fetchFriendsFromDB(id: String): List<String> {
            Thread.sleep(500)
            return emptyList()
        }

        private fun fetchPictureFromDB(id: String): ByteArray? {
            Thread.sleep(100)
            return null
        }

        private fun fetchBioOverHttp(id: String): String {
            Thread.sleep(1000)
            return "Alexey Soshin, Software Architect"
        }
    }
}
```


In the given code, the Blocking.profile() function takes approximately 1.6 seconds to complete. Its execution is entirely sequential, and during this time, the executing thread remains blocked. However, we can improve the design of this function to leverage coroutines for concurrency, as demonstrated in the following:

```kotlin
class Async {
    private val scope = CoroutineScope(Dispatchers.Default)

    suspend fun profile(id: String): Profile {
        val bio = fetchBioOverHttpAsync(id)       // takes 1s
        val picture = fetchPictureFromDBAsync(id) // takes 100ms
        val friends = fetchFriendsFromDBAsync(id) // takes 500ms
        return Profile(bio.await(), picture.await(), friends.await())
    }

    private fun fetchFriendsFromDBAsync(id: String) = scope.async {
        delay(500)
        emptyList<String>()
    }

    private fun fetchPictureFromDBAsync(id: String) = scope.async {
        delay(100)
        null
    }

    private fun fetchBioOverHttpAsync(id: String) = scope.async {
        delay(1000)
        "Alexey Soshin, Software Architect"
    }
}
```

Without the suspend keyword, asynchronous code in Kotlin won’t compile. We’ll discuss the importance of the suspend keyword later in this section. We declare something called a scope in our class, and that scope provides us with the async operation. Now, let’s compare the performance of the two functions: one written in a blocking manner, and another that utilizes coroutines.


We can declare the main function as suspended, as we saw earlier, and measure the time it takes for them to complete using measureTimeMillis:

```kotlin
suspend fun main() {
    val t1 = measureTimeMillis {
        Blocking.profile("123")
    }

    val t2 = measureTimeMillis {
        Async().profile("123")
    }

    println("Blocking code: $t1")
    println("Async: $t2")
}
```

The output will be something like this:

```
> Blocking code: 1631
> Async: 1051
```

With concurrent coroutines, the total execution time is roughly that of the longest coroutine, plus minimal overhead, while with sequential code it's the sum of all the functions. Now that we've covered the first two examples, let's explore another way to write the same code. We'll mark each of the functions with the suspend keyword:

```kotlin
class Suspend {
    suspend fun profile(id: String): Profile {
        val bio = fetchBioOverHttp(id)       // takes 1s
        val picture = fetchPictureFromDB(id) // takes 100ms
        val friends = fetchFriendsFromDB(id) // takes 500ms
        return Profile(bio, picture, friends)
    }

    private suspend fun fetchFriendsFromDB(id: String): List<String> {
        delay(500)
        return emptyList()
    }

    private suspend fun fetchPictureFromDB(id: String): ByteArray? {
        delay(100)
        return null
    }

    private suspend fun fetchBioOverHttp(id: String): String {
        delay(1000)
        return "Alexey Soshin, Software Architect"
    }
}
```

If you run this example, the performance will be the same as for the blocking code:

```kotlin
suspend fun main() {
    ...
    val t3 = measureTimeMillis {
        Suspend().profile("123")
    }

    println("Blocking code: $t1")
    println("Async: $t2")
    println("Suspend: $t3")
}
```

The output is as follows:

```
> Blocking code: 1631
> Async: 1051
> Suspend: 1631
```

So, why would we want to use suspendable functions? Suspendable functions in Kotlin are non-blocking, which is not the same as running tasks in parallel. Because Kotlin manages these functions efficiently, the same number of threads can serve a much larger number of users.


When the Kotlin compiler sees the suspend keyword, it knows it can split and rewrite the function, like this:

```kotlin
fun profile(state: Int, id: String, context: ArrayList<Any>): Profile {
    when (state) {
        0 -> {
            context += fetchBioOverHttp(id)
            profile(1, id, context)
        }
        1 -> {
            context += fetchPictureFromDB(id)
            profile(2, id, context)
        }
        2 -> {
            context += fetchFriendsFromDB(id)
            profile(3, id, context)
        }
        3 -> {
            val (bio, picture, friends) = context
            return Profile(bio, picture, friends)
        }
    }
}
```

This rewritten code uses the State design pattern from Chapter 4, Getting Familiar with Behavioral Patterns, to split the execution of the function into many steps. By doing so, we can release the thread that executes coroutines at every stage of the state machine.

IMPORTANT NOTE: This is not a perfect depiction of the generated code. The goal is to demonstrate the idea behind what the Kotlin compiler does, but some subtle implementation details are omitted for brevity.

Note that, unlike the asynchronous code we produced earlier, the state machine itself is sequential and takes the same amount of time as the blocking code to execute all its steps.


Importantly, though, none of these steps blocks any threads, which is the key point of this example.

Dispatchers

In the section discussing the high cost of threads, we touched on the concept of executors in Java. Previously, we utilized a coroutine scope for writing asynchronous code. Now, we will examine the rationale behind the use of coroutine dispatchers in Kotlin.

When we ran our coroutines using the runBlocking function, their code was executed on the main thread. You can check this by running the following code:

```kotlin
fun main() {
    runBlocking {
        launch {
            println(Thread.currentThread().name)
        }
    }
}
```

This prints the following output:

```
> main
```

In contrast, when we run a coroutine using GlobalScope, it runs on something called DefaultDispatcher:

```kotlin
fun main() {
    runBlocking {
        GlobalScope.launch {
            println("GlobalScope.launch: ${Thread.currentThread().name}")
        }
    }
}
```

This prints the following output:

```
> DefaultDispatcher-worker-1
```

DefaultDispatcher is a thread pool that is used for short-lived coroutines.


Coroutine generators, such as launch() and async(), rely on default arguments, one of which is the dispatcher they will be launched on. To specify an alternative dispatcher, you can provide it as an argument to the coroutine builder:

```kotlin
fun main() {
    runBlocking {
        launch(Dispatchers.Default) {
            println(Thread.currentThread().name)
        }
    }
}
```

The preceding code prints DefaultDispatcher-worker-1 instead of the previous main. In addition to the Main and Default dispatchers, which we've already discussed, there is also an IO dispatcher, which is used for long-running tasks. You can use it in the same way as the other dispatchers by providing it to the coroutine builder, like so:

```kotlin
async(Dispatchers.IO) {
    // Some long-running task here
}
```

Dispatchers in Kotlin provide a powerful mechanism for managing the execution context of coroutines. Here are some benefits and details of using different dispatchers.

Under the hood, these dispatchers manage thread pools. This means that coroutines launched on the same dispatcher share threads from the associated pool. This efficient thread reuse minimizes the overhead of creating and destroying threads for each coroutine.

By choosing the appropriate dispatcher, you can keep your application responsive. For example, by offloading network requests to Dispatchers.IO, you ensure that your UI remains smooth and responsive, even during data retrieval. This is especially relevant for Android or desktop applications.

Dispatchers enable you to control the level of concurrency in your application. You can limit the number of concurrently executing coroutines on a specific dispatcher, preventing resource exhaustion and contention.
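One way to cap concurrency directly is limitedParallelism(), which creates a view of a dispatcher that executes at most the given number of coroutines in parallel (this API was experimental in older kotlinx.coroutines releases, where an @OptIn annotation may be required). A minimal sketch; the limit of 2 and the blocking task bodies are illustrative choices:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    // A view over Dispatchers.IO that executes at most 2 coroutines in parallel
    val limited = Dispatchers.IO.limitedParallelism(2)
    val jobs = List(4) { id ->
        launch(limited) {
            println("Task $id on ${Thread.currentThread().name}")
            // Thread.sleep blocks the worker thread, so only two
            // tasks can make progress at any moment
            Thread.sleep(100)
        }
    }
    jobs.joinAll()
}
```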


Additionally, dispatchers allow you to scale your application efficiently. By using different dispatchers for various tasks, you can allocate resources according to the workload. For example, you can dedicate more threads to IO-bound tasks while reserving fewer threads for CPU-bound operations.

The coroutines library provides you with four dispatchers:

| Dispatcher | Description |
| --- | --- |
| Default | Optimized for CPU-bound work |
| IO | Optimized for IO-bound work |
| Main | For the UI thread in Android, JavaFX, or Swing EDT |
| Unconfined | No specific thread constraints |

Table 6.2: Comparison between different dispatchers

In addition, you can also define your own dispatchers, for example, by utilizing thread pools:

```kotlin
async(Executors.newFixedThreadPool(4).asCoroutineDispatcher()) {
    // Some long-running task here
}
```

This code creates a fixed-size thread pool with four threads and adapts it into a coroutine dispatcher using asCoroutineDispatcher(). Yes, that’s the Adapter pattern in action! This approach effectively manages and parallelizes long-running operations using a dedicated thread pool while maintaining the benefits of Kotlin’s coroutine-based asynchronous programming model. In summary, Kotlin’s dispatchers offer a flexible and efficient way to manage concurrency and parallelism in your applications. By choosing the right dispatcher for each task, you can optimize resource utilization, improve responsiveness, and create well-structured concurrent code.
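One caveat worth noting with dispatchers created this way: they own their threads, so the dispatcher should be closed when it is no longer needed, or the underlying executor keeps its threads alive. A hedged sketch of that lifecycle:

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

fun main() = runBlocking {
    val dispatcher = Executors.newFixedThreadPool(4).asCoroutineDispatcher()
    try {
        val name = async(dispatcher) {
            // Runs on one of the four pool threads
            Thread.currentThread().name
        }.await()
        println("Ran on: $name")
    } finally {
        dispatcher.close() // shuts down the underlying executor
    }
}
```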

Switching dispatchers

Sometimes, you may need to switch the coroutine's context or dispatcher for a specific block of code and then return to the original context afterward. For that, Kotlin provides you with the withContext() function.

When you call withContext(), you provide a new coroutine context or dispatcher as an argument. This can be a different thread pool (dispatcher) or a custom coroutine context with specific settings. The code inside the withContext block will run in this new context.


This can be useful for performing IO operations on a specific dispatcher or running code in a background thread. After the code within the withContext block has executed, the coroutine reverts to its initial context seamlessly, as though no context switch had taken place. This guarantees that any code that follows will operate in the original context, maintaining thread affinity and any other behaviors specific to that context. For example, we could change the Async.profile() function we discussed earlier in the following manner:

```kotlin
suspend fun profile(id: String): Profile {
    ...
    val picture = withContext(Dispatchers.IO) {
        fetchPictureFromDBAsync(id)
    }
    ...
}
```

In this case, any code before or after the withContext block will run on the Default dispatcher. But the fetchPictureFromDBAsync() code will run on the IO dispatcher.
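The switch can be observed by printing the current thread at each step. A minimal sketch; note that after the block, execution resumes on the original thread:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    println("Before: ${Thread.currentThread().name}") // main
    withContext(Dispatchers.IO) {
        // Runs on a worker thread of the IO dispatcher
        println("Inside: ${Thread.currentThread().name}")
    }
    println("After: ${Thread.currentThread().name}") // back on main
}
```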

Structured concurrency

Structured concurrency is a concept in Kotlin that ensures the orderly execution and completion of coroutines. It ties the life cycle of coroutines to the scope they are launched in, making it easier to manage and control them. Under structured concurrency, when a coroutine scope is canceled or completes its execution, all coroutines launched within that scope are also canceled or completed. This approach simplifies the handling of concurrent operations, preventing resource leaks and ensuring that coroutines don't run longer than necessary.

Let's look at an example. It is a very common practice to spawn coroutines from inside another coroutine. The first rule of structured concurrency is that the parent coroutine should always wait for all its children to complete. This prevents resource leaks, which are very common in languages that don't have the structured concurrency concept.


This means that if we look at the following code, which starts 10 child coroutines, the parent coroutine doesn't need to wait explicitly for all of them to complete:

```kotlin
val parent = launch(Dispatchers.Default) {
    val children = List(10) { childId ->
        launch {
            for (i in 1..1_000_000) {
                UUID.randomUUID()
                if (i % 100_000 == 0) {
                    println("$childId - $i")
                    yield()
                }
            }
        }
    }
}
```

Now, let's say that one of the coroutines throws an exception after some time:

```kotlin
...
if (i % 100_000 == 0) {
    println("$childId - $i")
    yield()
}
if (childId == 8 && i == 300_000) {
    throw RuntimeException("Something bad happened")
}
...
```

When you execute this code, something interesting happens. The coroutine terminates, and simultaneously, all of its sibling coroutines that haven’t finished yet are also terminated. This behavior is due to an uncaught exception that reaches the parent coroutine, leading to its cancelation. Consequently, the parent coroutine proceeds to terminate all its other child coroutines to prevent any resource leaks. If the parent coroutine is not explicitly handling exceptions from its children, it will still cancel all its child coroutines in a structured manner upon encountering an exception in any of them. This is a fundamental aspect of structured concurrency in Kotlin: when one coroutine in a scope fails, the entire scope is canceled.


This cancelation process is orderly and structured, ensuring that all resources are properly released and that the coroutines do not just abruptly stop, but rather complete their cancelation process in a controlled manner. This behavior helps maintain robustness and prevent resource leaks in concurrent applications. Usually, this is the desired behavior. If we'd like to prevent child exceptions from stopping the parent as well, we can use supervisorScope:

```kotlin
val parent = launch(Dispatchers.Default) {
    supervisorScope {
        val children = List(10) { childId ->
            ...
        }
    }
}
```

By using supervisorScope, even if one of the coroutines fails, the parent job won't be affected. This means you don't have to handle the exception thrown by the child coroutines.

In the context of supervisorScope, exceptions thrown by child coroutines are not propagated to the parent coroutine. This means that, without additional handling, these exceptions can go unnoticed. To effectively capture and handle exceptions in child coroutines within a supervisorScope, you need to attach a CoroutineExceptionHandler. This handler is necessary because, unlike a regular coroutine scope, supervisorScope does not automatically cancel the parent coroutine or other children upon a failure in one of the children, and as a result, it does not propagate exceptions up the hierarchy in the same way. Using a CoroutineExceptionHandler allows you to define custom behavior for dealing with these exceptions:

```kotlin
fun withSupervisorScopeAndExceptionHandler() = runBlocking {
    println("Running with Supervisor Scope and Exception Handler")
    val exceptionHandler = CoroutineExceptionHandler { _, e ->
        println("Exception: $e")
    }
    val parent = supervisorScope {
        val children = List(10) { childId ->
            launch(exceptionHandler) {
                for (i in 1..1_000_000) {
                    UUID.randomUUID()
                    if (i % 100_000 == 0) {
                        println("$childId - $i")
                        yield()
                    }
                    if (childId == 8 && i == 300_000) {
                        throw RuntimeException("Something bad happened")
                    }
                }
            }
        }
    }
}
```

The parent coroutine can still terminate all its child coroutines by using the cancel() function. Once we invoke cancel() on the parent job, all of its children are canceled too.

Now that we've discussed the benefits of structured concurrency, let's reiterate one point from the start of this chapter: using GlobalScope and the fact that it's marked as a delicate API. Although GlobalScope exposes functions such as launch() and async(), it doesn't benefit from structured concurrency principles and is prone to resource leaks when used incorrectly.

Since GlobalScope is not tied to any specific coroutine scope, coroutines launched within it live for the entire lifetime of the application, or until they complete. This means they don't have a well-defined life cycle and can continue running even when no longer needed, potentially causing memory leaks and consuming system resources unnecessarily. Additionally, error handling and cancelation become more complex and less predictable with GlobalScope, as these coroutines are not automatically canceled when their launching environment or activity is destroyed. Therefore, while GlobalScope is useful in specific scenarios, its misuse in situations where structured concurrency would be more appropriate poses significant risks to application stability and efficiency.

The coroutineScope builder

In Kotlin, the coroutineScope function is a coroutine builder that provides structured concurrency within a suspending function. It creates a new coroutine scope, which means that any coroutines launched within this scope are bound by the life cycle of the enclosing scope. When the enclosing suspending function completes, all coroutines launched within coroutineScope must also complete.


This function enforces structured concurrency by ensuring that all child coroutines launched within the scope are awaited before the parent coroutine can proceed. This prevents resource leaks and unhandled exceptions from escaping the scope. Using the coroutineScope builder doesn't specify which dispatcher to use. This is different from the initial examples, where we explicitly used GlobalScope.launch, for example.

If a child coroutine within the coroutineScope is canceled explicitly using cancel(), it will not affect the other child coroutines or the parent coroutine: only that coroutine is canceled, and the others continue execution. We'll discuss this topic in the Canceling a coroutine part of this chapter. If a child coroutine fails with an exception, on the other hand, the exception is propagated to the parent coroutine. You can use regular try-catch blocks to handle these exceptions within coroutineScope.

Let's see how a function from earlier could be rewritten using the coroutineScope builder:

```kotlin
suspend fun fetchFriendsFromDBAsync(id: String) = coroutineScope {
    async {
        delay(500)
        emptyList<String>()
    }
}
```

Remember that coroutineScope is a suspending function, so it can only be called from within another suspending function or coroutine. It is a powerful tool for managing concurrent tasks within a structured and safe environment.
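For example, exception propagation through coroutineScope can be handled with an ordinary try-catch. This sketch uses a made-up riskyWork() function purely for illustration:

```kotlin
import kotlinx.coroutines.*

// A hypothetical suspending function whose child coroutine fails
suspend fun riskyWork(): String = coroutineScope {
    val result = async { throw IllegalStateException("boom") }
    result.await() // rethrows the child's exception out of the scope
}

fun main() = runBlocking {
    try {
        riskyWork()
    } catch (e: IllegalStateException) {
        println("Caught: ${e.message}") // prints "Caught: boom"
    }
}
```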

Canceling a coroutine

If you are a Java developer, you may know that stopping a thread is quite complicated. For instance, the Thread.stop() method is deprecated in Java due to its unsafe and unpredictable nature, as it can leave shared data in an inconsistent state. Alternatively, there's Thread.interrupt(), which is a safer option for requesting a thread to stop. However, the effectiveness of Thread.interrupt() relies on the target thread's cooperation, as it merely sets an interrupted status (or flag) on the thread. The thread must regularly check this interrupted status – usually by checking Thread.currentThread().isInterrupted() or by handling InterruptedException – to respond appropriately, typically by terminating its operation.


The challenge arises because not all threads actively check for this interrupted status, especially if they are not designed with interruption in mind. This can happen in cases where the thread's code doesn't include explicit interruption checks, or it doesn't involve operations that throw InterruptedException (like blocking IO or Thread.sleep()). In such scenarios, calling Thread.interrupt() might not have the desired effect of stopping the thread, since the thread may continue running as if uninterrupted.

In Java, when utilizing a thread pool, you receive a Future object, which provides the cancel(boolean mayInterruptIfRunning) method for controlling thread execution. Similarly, in Kotlin, the launch() function yields a Job, serving an analogous purpose for managing coroutine execution. This job can be canceled. The same rules from the previous example apply, though. If your coroutine never calls another suspend function or the yield function, it will disregard cancel(). To demonstrate that, we'll create one coroutine that yields once in a while:

```kotlin
val cancellable = launch {
    try {
        for (i in 1..10_000) {
            println("Cancellable: $i")
            yield()
        }
    } catch (e: CancellationException) {
        e.printStackTrace()
    }
}
```

As you can see, after each print statement, the coroutine calls the yield function. If it is canceled, it will print the stack trace. We'll also create another coroutine that doesn't yield:

```kotlin
val notCancellable = launch {
    for (i in 1..10_000) {
        if (i % 100 == 0) {
            println("Not cancellable $i")
        }
    }
}
```


This coroutine never yields and prints its results every 100 iterations to avoid spamming the console. Now, let's try canceling both coroutines:

```kotlin
println("Canceling cancellable")
cancellable.cancel()
println("Canceling not cancellable")
notCancellable.cancel()
```

Then, we'll wait for the results:

```kotlin
runBlocking {
    cancellable.join()
    notCancellable.join()
}
```

By invoking join(), we can wait for the execution of the coroutine to complete. Let's look at the output of our code:

```
> Canceling cancellable
> Cancellable: 1
> Not cancellable 100
> ...
> Not cancellable 10000
> Canceling not cancellable
```

A few interesting points we can learn from this experiment regarding the behavior of coroutines are as follows:

- Canceling the cancellable coroutine doesn't happen immediately. It may still print a line or two before being canceled.
- We can catch CancellationException, but our coroutine will be marked as canceled anyway. Catching that exception doesn't automatically allow us to continue.

Now, let’s understand what happened. The coroutine checks whether it was canceled, but only when it is switching between states. Since the non-cancelable coroutine didn’t have any suspending functions, it never checked if it was asked to stop.
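The second point can be sketched as follows: even if we swallow the CancellationException, the job is already marked as canceled, so the very next suspension point throws again (the timings here are illustrative):

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch {
        try {
            delay(1000)
        } catch (e: CancellationException) {
            println("Caught, trying to continue...")
        }
        // The job is already canceled, so this suspension point throws
        // CancellationException again, and the line below never runs
        delay(1000)
        println("Never printed")
    }
    delay(100)
    job.cancelAndJoin()
    println("Cancelled: ${job.isCancelled}") // prints "Cancelled: true"
}
```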


In the cancellable coroutine, we used a new function: yield(). This function checks whether there is anybody else that wants to do some work. If there’s nobody else, the execution of the current coroutine will resume. Otherwise, another coroutine will start or resume from the point where it stopped earlier. Note that without the suspend keyword on our function or a coroutine generator, such as launch(), we can’t call yield(). This is true for any function marked with suspend: it should either be called from another suspend function or from a coroutine.
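Besides yield(), a CPU-bound loop can cooperate with cancelation by checking the isActive property of its scope (or by calling ensureActive()). A minimal sketch; the busy loop stands in for real CPU-bound work:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val job = launch(Dispatchers.Default) {
        var i = 0L
        // isActive becomes false once the job is canceled,
        // so the loop exits cooperatively without suspending
        while (isActive) {
            i++
        }
        println("Stopped after $i iterations")
    }
    delay(100)
    job.cancelAndJoin()
}
```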

Setting timeouts

Imagine a scenario where fetching a user's profile is unexpectedly slow. Suppose we decide that if the profile doesn't return within 0.5 seconds, we will opt to show no profile. This behavior can easily be implemented using the withTimeout() function in Kotlin:

```kotlin
val coroutine = async {
    withTimeout(500) {
        try {
            val time = Random.nextLong(1000)
            println("It will take me $time to do")
            delay(time)
            println("Returning profile")
            "Profile"
        } catch (e: TimeoutCancellationException) {
            e.printStackTrace()
        }
    }
}
```

We set the timeout to 500 milliseconds, while our coroutine will delay for between 0 and 1,000 milliseconds, giving it roughly a 50 percent chance of failing. We'll await the results from the coroutine and see what happens:

```kotlin
val result = try {
    coroutine.await()
} catch (e: TimeoutCancellationException) {
    "No Profile"
}

println(result)
```

Because try is an expression in Kotlin, the result can be assigned directly: if the coroutine completes before the timeout, result is set to Profile; otherwise, a TimeoutCancellationException leads to No Profile.

Utilizing timeouts combined with try-catch expressions offers a robust way to manage long-running tasks. This technique is vital in scenarios like data fetching in UIs or microservices communication. For instance, when fetching server data such as user profiles or images, network latency can cause delays. Here, implementing a timeout prevents the UI from being indefinitely stuck in a loading state. On encountering a prolonged request, the application can smoothly switch to a fallback, like showing a default message, thus preserving the user experience. Similarly, in microservices, timeouts prevent a service from waiting endlessly for a response from another service, enhancing system reliability by allowing alternative actions or cached responses.

In summary, withTimeout() is a key tool in maintaining application responsiveness and system reliability, adeptly handling potential delays or failures to safeguard user experience and system stability.
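A related builder worth knowing is withTimeoutOrNull(), which returns null on timeout instead of throwing, often removing the need for a try-catch. A minimal sketch; the delay of 700 milliseconds is an illustrative stand-in for a slow fetch:

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val profile = withTimeoutOrNull(500) {
        delay(700) // simulated slow fetch; exceeds the 500 ms timeout
        "Profile"
    } ?: "No Profile" // null on timeout, so the elvis operator supplies a fallback
    println(profile) // prints "No Profile"
}
```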

Summary

In this chapter, we've explored the creation of threads and coroutines in Kotlin, highlighting the advantages of coroutines over traditional threads. While Kotlin offers a simpler syntax for thread creation compared to Java, threads still come with memory and performance overheads. Coroutines offer an efficient alternative for concurrent code execution in Kotlin.

By this point, you should be well versed in initiating and awaiting the completion of coroutines, as well as in retrieving their results. Additionally, we have covered the structured nature of coroutines and their interaction with dispatchers. Furthermore, we've introduced the concept of structured concurrency, a contemporary approach that simplifies the prevention of resource leaks in concurrent code.

In the next chapter, we'll explore how to leverage these concurrency mechanisms to design scalable and robust systems tailored to our requirements.


Questions

1. What are the different ways to start a coroutine in Kotlin?
2. With structured concurrency, if one of the coroutines fails, all the siblings will be canceled as well. How can we prevent that behavior?
3. What is the purpose of the yield() function?

Learn more on Discord

Join our community's Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

7

Controlling the Data Flow

Having discussed coroutines in the previous chapter, we will now broaden our exploration to include features that enhance communication and data handling across different coroutines. This chapter focuses on further important aspects of Kotlin's concurrency toolkit, namely channels and flows. We will also look into higher-order functions for collections, noting their APIs' similarity to those of channels and flows.

Building on the foundation laid by our discussion of functional programming, this chapter will emphasize the use of small, reusable, and composable functions. These functions enable us to write expressive code that clearly outlines what we want to achieve, rather than focusing on the steps needed to accomplish it.

In this chapter, the topics we'll cover are:

- Reactive principles
- Higher-order functions for collections
- Exploring concurrent data structures
- Sequences
- Channels
- Flows

By the end of this chapter, you’ll have the knowledge and tools to communicate more effectively between different coroutines and to process data more efficiently.

Technical requirements

There are no additional requirements compared to the previous chapter.

Controlling the Data Flow

222

You can find the source code used in this chapter on GitHub at the following location: https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter07.

Reactive principles

As we begin this chapter, we shift our focus to reactive programming. This concept is fundamental to data streaming and forms the foundation for the work we will cover in this chapter. Rooted in functional programming, reactive programming allows us to shape our logic as a series of operations on a stream of data. The core tenets of this programming approach are encapsulated in the Reactive Manifesto, which you can read on its website (https://www.reactivemanifesto.org).

According to the Reactive Manifesto, a truly reactive program should possess four key qualities:

- Responsive
- Resilient
- Elastic
- Message-driven

To illustrate these principles, consider the following scenario: You’re experiencing slow internet speeds and decide to call your internet service provider to address the issue. Keep this situation in mind as we proceed to explore how each of the four principles would manifest in a reactive program designed to handle such customer service interactions.

The responsive principle

The time you're willing to spend waiting on the line varies based on your circumstances, such as how urgent your issue is and how much time you can afford to lose. If you're pressed for time, you're more likely to hang up early, especially when you're uncertain about how much longer you'll have to wait while listening to unappealing hold music.

This situation exemplifies a system being unresponsive. Similar scenarios play out in the digital realm; for instance, a web request may languish in a queue while the server processes other pending requests, leaving you in the dark about when your request will finally be attended to. Conversely, a responsive system could offer periodic updates during your wait, informing you of your position in the queue or even estimating how much longer you'll need to hold. In both the real-world and digital examples, you've spent time waiting, but the second system at least offers you the courtesy of information.

Chapter 7

223

Armed with this information, you can make an informed decision about whether to continue waiting or take a different action. This is what it means for a system to be responsive, one of the key tenets outlined in The Reactive Manifesto.

The resilient principle

The principle of resiliency is about how well a system can handle failure, recover from it, and continue to function. A dropped call after a long wait is a perfect example of a system lacking in resilience. The Reactive Manifesto outlines various strategies for building a resilient system:

- Delegation: If your initial customer service representative can't solve your issue, the system reroutes you to someone who can. This act of delegation ensures that if one part of the system fails to meet the need, another part can step in.
- Replication: The call center system can also be designed to scale its workforce based on demand. If there are many callers in the queue, additional representatives can be added to manage the load. This ties into the concept of "elasticity," which we'll look at shortly.
- Containment and isolation: The automated voice providing an option to leave your phone number for a callback serves two purposes:
  - Containment: By opting for a callback, you're no longer tied to the system's current limitations, such as a shortage of available representatives.
  - Isolation: If the system has issues like unreliable phone lines, your experience is still protected; you're isolated from these systemic problems because the company can contact you later.

In essence, a resilient system is designed to cope with challenges and recover effectively without significantly disrupting the user’s experience.

The elastic principle

Elasticity refers to the system's capacity to dynamically adjust its resources in response to fluctuations in demand or workload. This feature ensures that the system scales its resources up or down as required, thereby maintaining optimal performance and user experience, while also minimizing costs. Essentially, it allows for paying only for the resources needed at any given moment, rather than constantly paying for peak-time resources.


In this scenario, a sudden surge of calls occurs when a mole chews through an internet cable, causing widespread service interruptions. An elastic system has the capacity to scale up, possibly by bringing in additional customer service representatives, to handle the increased volume of calls. This ensures that customers' concerns are addressed in a timely manner. Then, once the issue is resolved and call volumes return to normal levels, the system can scale back down, allowing the extra representatives to return to their other tasks.

Elasticity is closely related to scalability. In a scalable system, the call center should have the infrastructure to handle an increased number of representatives, such as additional phones and workstations. If there aren't enough phones to accommodate the representatives, that becomes a bottleneck, limiting the system's ability to effectively manage higher call volumes.

In summary, an elastic system is not just about having extra resources, but also about being able to allocate and deallocate those resources dynamically, thus ensuring efficient utilization while maintaining a high level of service.

The message-driven principle
The concept of being message-driven is fundamental to creating reactive systems. Message-driven architecture promotes asynchronous communication and thereby enables many of the other reactive principles, like resiliency and elasticity. When customers only leave messages, it decouples the time of the request from the time of processing. This allows for better system resource management. Representatives can batch messages or prioritize them based on criteria like urgency or type of issue, improving overall efficiency.

Backpressure is another significant aspect of message-driven systems. When the system detects that it’s being overwhelmed with messages, it has mechanisms to slow down the acceptance of new messages or to offload tasks, thereby maintaining system integrity.

Moreover, the message-driven model allows for non-blocking interactions. Once a message is sent, both the sender and the receiver are free to engage in other activities. This not only enhances system responsiveness but also allows for greater concurrency, as multiple tasks can be handled simultaneously without waiting for other tasks to complete. Importantly, a message-driven architecture enables delegation: when one part of the system is overwhelmed, tasks can be delegated to other parts that have available resources, making the system more resilient and elastic.

Chapter 7


In summary, the message-driven principle aligns well with the other principles of reactive systems, enabling them to be more responsive, resilient, and elastic. As we move forward, we will explore how Kotlin provides tools and libraries that allow us to build systems that adhere to these reactive principles, starting with how collections can be viewed as static data streams in the realm of reactive programming.

Higher-order functions on collections
We briefly touched on this topic in Chapter 1, Getting Started with Kotlin, but before we can discuss streams, let’s make sure that those of us who come from languages that don’t have higher-order functions on collections know what they are, what they do, and what the benefits of using them are. Higher-order functions on collections are a powerful feature that Kotlin (among other modern programming languages) provides. They simplify the code, make it more readable, and often lead to fewer errors. Let’s discuss some key functions. We won’t be able to cover all of the functions available on collections, but we’ll cover the most widely used ones.

Mapping elements
The map() function transforms each element in a collection, potentially into a new type of element. To illustrate this, let’s assume we have a list of letters, and we want to convert them into their corresponding ASCII values. First, let’s tackle this problem using imperative programming:

    val letters = 'a'..'z'
    val ascii = mutableListOf<Int>()
    for (l in letters) {
        ascii.add(l.code)
    }

Even for such a simple task, this approach requires a fair amount of code. We’re also forced to define our list as mutable. Now, let’s achieve the same result using the map() function:

    val result: List<Int> = ('a'..'z').map { it.code }


Notice the brevity of this version. We eliminate the need for a mutable list and a manual for-each loop.

Filtering elements
Filtering a collection is another common task where you iterate through the collection and add values to a new collection based on certain conditions. For instance, if you have a range of numbers from 1 to 100, you might want to include only those numbers that are either divisible by 3 or by 5. Using the imperative approach, the code could look like this:

    val numbers = 1..100
    val notFizzbuzz = mutableListOf<Int>()
    for (n in numbers) {
        if (n % 3 == 0 || n % 5 == 0) {
            notFizzbuzz.add(n)
        }
    }

In a functional programming style, you could achieve the same thing using the filter() function:

    val filtered: List<Int> = (1..100).filter { it % 3 == 0 || it % 5 == 0 }

Observe how much more streamlined the code is. Instead of dictating “how” to perform the operation (like using an if statement), you simply describe “what” you want to do—filter elements based on specific conditions.

Finding elements
Locating the first element in a collection that meets specific criteria is yet another routine task. For instance, if you want to find the first number divisible by both 3 and 5 within a list of numbers, you could implement it as follows using an imperative style:

    fun findFizzbuzz(numbers: List<Int>): Int? {
        for (n in numbers) {
            if (n % 3 == 0 && n % 5 == 0) {
                return n
            }
        }
        return null
    }


Alternatively, you can achieve the same outcome using the find() function: val found: Int? = (1..100).find { it % 3 == 0 && it % 5 == 0 }

Much like its imperative counterpart, the find function returns null if no element satisfies the given criteria. Additionally, there’s a findLast() method that serves a similar purpose but starts its search from the last element of the collection.
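To make the difference concrete, here is a small self-contained sketch contrasting find() and findLast() on the same predicate:

```kotlin
fun main() {
    val numbers = (1..100).toList()
    // find() scans from the first element...
    val first = numbers.find { it % 3 == 0 && it % 5 == 0 }
    // ...while findLast() scans from the last one.
    val last = numbers.findLast { it % 3 == 0 && it % 5 == 0 }
    println("$first $last") // 15 90
}
```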

Executing code for each element
All the higher-order functions we’ve discussed so far have a common trait: they return a stream (or a new collection). However, not all higher-order functions operate this way. Some return a single value like a Unit or a number. These are known as terminator functions. Let’s focus on our first example of a terminator function, forEach(), which returns a Unit type. In Java parlance, the Unit type is similar to void, indicating that the function doesn’t return anything of value. Essentially, forEach() serves the same purpose as a traditional for loop:

    val numbers = (0..5)
    numbers.map { it * it }      // Can continue
        .filter { it < 20 }      // Can continue
        .forEach { println(it) } // Cannot continue

Additionally, Kotlin provides a forEachIndexed() function that supplies both the index and the actual value during iteration:

    numbers.map { it * it }
        .forEachIndexed { index, value -> print("$index:$value, ") }

The output for the code above would be: > 0:0, 1:1, 2:4, 3:9, 4:16, 5:25,

Since Kotlin 1.1, you can also use the onEach() function, which, unlike forEach(), returns the collection, allowing you to continue chaining:

    numbers.map { it * it }
        .filter { it < 20 }
        .sortedDescending()
        .onEach { println(it) } // Can continue now
        .filter { it > 5 }

As evident, the onEach() function doesn’t terminate the chain.

Summing up elements
Similar to forEach(), the reduce() function is also a terminator function. However, unlike forEach(), which results in a Unit, reduce() terminates by returning a single value of the same type as the collection it’s operating on. If you have a collection of integers, reduce() will return an Int. Let’s look at an example where we sum all the numbers between 1 and 100. Using an imperative approach, the code might look like this:

    val numbers = 1..100
    var sum = 0
    for (n in numbers) {
        sum += n
    }

Now, using the reduce() function, we can achieve the same result as follows: val reduced: Int = (1..100).reduce { sum, n -> sum + n }

In this version, you’ll notice we’ve eliminated the need for a mutable variable to hold the sum. Unlike other higher-order functions we’ve discussed, reduce() takes two arguments: the first one is the accumulator (akin to the sum variable in the imperative example), and the second one is the next element in the collection. We’ve used the same argument names in both examples for easier comparison. This example was a bit trivial and could be replaced with the built-in sum() function:

    val summed: Int = (1..100).sum()

In fact, any custom aggregation where each element’s processing depends on the result of the previous one can be implemented using reduce(). For example, string concatenation:

    val concatenated = listOf("Hello", "Kotlin", "!").reduce { agg, s -> "$agg $s" }
    println(concatenated) // "Hello Kotlin !"


Or factorial:

    val factorial = (1..5).reduce { product, n -> product * n }
    println(factorial) // 120
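One caveat worth noting (not shown in the examples above): reduce() has no value to start from on an empty collection, so it throws an UnsupportedOperationException there. The reduceOrNull() variant degrades gracefully instead:

```kotlin
fun main() {
    val empty = emptyList<Int>()
    // reduce() would throw UnsupportedOperationException on an empty list;
    // reduceOrNull() returns null instead.
    println(empty.reduceOrNull { acc, n -> acc + n })           // null
    println(listOf(1, 2, 3).reduceOrNull { acc, n -> acc + n }) // 6
}
```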

The fold() function is similar to reduce(), except it also takes the initial value as input: (1..100).fold(10) { sum, n -> sum + n }

The ability to provide the initial value is also useful when the result type should be different from the collection type. Consider a case where we want to multiply all numbers between 1 and 15. The numbers are integers, but the product overflows an Int, so the result should be a Long. The type of the initial value determines the type of the accumulator, which is why we pass 1L here:

    val foldedLong: Long = (1..15).fold(1L) { acc, n -> acc * n }

The scan() function works like fold(), but it emits each intermediate result and therefore has a return type of List<Int>. This can be useful when you want to have both the accumulated and current value at each step. Consider this code:

    val scanned: List<Int> = (1..100).scan(0) { sum, n -> sum + n }

The output is a list of the accumulated values, ending with the final sum: > [0, 1, 3, 6, 10, 15, 21, 28, 36, 45, 55..., 5050]

This fold() invocation: val folded: Int = (1..100).fold(0) { sum, n -> sum + n }

Would output just this final sum: > 5050

Getting rid of nesting
When dealing with collections, you may sometimes encounter a collection of collections. For instance, consider this code:

    val listOfLists: List<List<Int>> = listOf(listOf(1, 2), listOf(3, 4, 5), listOf(6, 7, 8))

What if you want to flatten this into a single list containing all the nested elements, resulting in something like [1, 2, 3, 4, 5, 6, 7, 8]?


One way to do it is to loop through each nested list and add its elements to a mutable list using the addAll method:

    val flattened = mutableListOf<Int>()
    for (list in listOfLists) {
        flattened.addAll(list)
    }

However, a more idiomatic way to accomplish this in Kotlin is by using the flatMap() function:

    val flattened: List<Int> = listOfLists.flatMap { it }

For this specific case, you could even simplify it further by using the flatten() function:

    val flattened: List<Int> = listOfLists.flatten()
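Unlike flatten(), flatMap() can also apply a transformation to the nested elements while flattening them; a quick sketch:

```kotlin
fun main() {
    val listOfLists = listOf(listOf(1, 2), listOf(3, 4, 5), listOf(6, 7, 8))
    // Double every element while flattening the nested lists in one pass.
    val doubled = listOfLists.flatMap { inner -> inner.map { it * 2 } }
    println(doubled) // [2, 4, 6, 8, 10, 12, 14, 16]
}
```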

While flatten() is straightforward, flatMap() is often more versatile, as it allows you to apply additional transformations to each nested collection, akin to an Adapter pattern. Although we’ve only scratched the surface of higher-order functions available on collections, the ones we’ve discussed should give you a good foundation for further exploration. With a solid understanding of how to manipulate and iterate over static data streams, let’s turn our attention to applying these same techniques to dynamic data streams.

Exploring concurrent data structures
Now that we have a good grasp of common higher-order functions for collections, let’s integrate this understanding with our previous discussion on Kotlin’s concurrency primitives. Our focus here will be on Kotlin’s key concurrent data structures: channels and flows. Before diving into these concurrent structures, however, we need to understand another data structure known as sequences. Although sequences are not concurrent by nature, they serve as an essential stepping stone into the realm of concurrency.

Sequences
Functional programming languages have long featured higher-order functions for collections. However, for Java developers, this concept became significant with the introduction of the Stream API in Java 8.


The Stream API offers useful functions like map(), filter(), and others, but requires converting your collection into a stream to use these operations. To obtain a collection again, you have to collect the stream, and collecting an infinite stream will eventually cause an OutOfMemoryError. In Kotlin, sequences serve a similar purpose to Java streams. Kotlin’s alternative is called a sequence simply to avoid naming conflicts with Java streams in projects that mix Java and Kotlin code or depend on Java libraries. Unlike Java streams, which are specific to the JVM ecosystem, Kotlin sequences are not restricted to the JVM. Sequences offer a blocking API to a potentially infinite stream of data. Creating a new sequence in Kotlin is simple. You can use the generateSequence() function as follows:

    val seq: Sequence<Long> = generateSequence(1L) { it + 1 }

The first argument specifies the initial value, while the second is a lambda function that generates the next value based on the current one. This particular sequence will generate all the long numbers (infinitely). You can also convert existing collections or ranges into sequences using the asSequence() function: (1..100).asSequence()

For more advanced use cases, Kotlin offers a sequence builder. Let’s demonstrate this using a Fibonacci sequence:

    val fibSeq = sequence {
        var a = 0
        var b = 1
        yield(a)
        yield(b)
        while (true) {
            yield(a + b)
            val t = a
            a = b
            b += t
        }
    }
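Because the builder describes an infinite sequence, it is only safe to consume it lazily, for example with take(); a short usage sketch:

```kotlin
val fibSeq = sequence {
    var a = 0
    var b = 1
    yield(a)
    yield(b)
    while (true) {
        yield(a + b)
        val t = a
        a = b
        b += t
    }
}

fun main() {
    // take() limits how much of the infinite sequence is ever evaluated.
    println(fibSeq.take(8).toList()) // [0, 1, 1, 2, 3, 5, 8, 13]
}
```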


In this example, we’ve constructed a sequence of Fibonacci numbers using the yield() function to emit each subsequent value. One crucial distinction between sequences and collections is their evaluation strategy: sequences are lazy, while collections are eager. This laziness offers performance advantages when working with large collections. For example:

    val numbers = (1..1_000_000).toList()
    println(measureTimeMillis {
        numbers.map { it * it }.take(1).forEach { it }
    }) // ~50ms

Compare this with a sequence of the same size:

    println(measureTimeMillis {
        numbers.asSequence().map { it * it }.take(1).forEach { it }
    }) // ~5ms

In this example, the sequence-based code runs significantly faster because, due to its lazy evaluation, it only squares a single number. The operations on a sequence are only executed when a terminal operation is invoked, which means the map function runs only for the single element consumed by the take function. This contrasts with the list approach, where the map function is executed on all elements and then the first one is taken. Sequences, channels, and flows in Kotlin follow the principles of reactive programming, which, although not limited to functional programming, are easier to grasp once you understand the basics of functional programming.

Channels
In the last chapter, we explored how to create and manage coroutines. Now, what if you need these coroutines to talk to each other? Java threads typically communicate using the wait()/notify()/notifyAll() pattern or through specialized classes like BlockingQueue from the java.util.concurrent package. Kotlin takes a different approach: it doesn’t have wait() or notify() methods at all. Instead, it uses a feature called channels for communication between coroutines. Channels in Kotlin are quite similar to Java’s BlockingQueue, but with a key difference: channels suspend a coroutine rather than blocking a thread, making them a more efficient alternative.


First, let’s see how we can create a new channel:

    runBlocking {
        val chan = Channel<Int>()
        ...
    }

Channels are type-specific. For example, this channel can only hold integers. Next, let’s spawn a coroutine to read from the channel:

    runBlocking {
        ...
        launch {
            for (c in chan) {
                println(c)
            }
        }
        ...
    }

You can read from a channel simply by iterating through it using a for-each loop. Then, let’s send some values to the channel:

    runBlocking {
        ...
        (1..10).forEach {
            chan.send(it)
        }
        ...
    }

And finally, we can close the channel once it’s not needed anymore:

    runBlocking {
        ...
        chan.close()
    }

Closing the channel will also break the listening coroutine out of its for-each loop, leading it to terminate if it has no other tasks.
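Assembled into one runnable program (assuming the kotlinx-coroutines-core dependency is on the classpath), the fragments above might look like this:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    val chan = Channel<Int>()
    // Reader coroutine: iterates until the channel is closed.
    launch {
        for (c in chan) {
            println(c)
        }
    }
    // Writer: each send() suspends until the reader is ready.
    (1..10).forEach { chan.send(it) }
    chan.close() // breaks the reader out of its for-each loop
}
```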


This communication model is known as Communicating Sequential Processes (CSP). As evident, channels provide a type-safe and straightforward method for communication between coroutines. Although we manually defined the channels here, we’ll explore ways to further simplify this in the upcoming sections.

Producers
If you need a coroutine to emit a continuous stream of values, Kotlin provides the produce() function. This function creates a coroutine associated with a ReceiveChannel<T>, where T is the type of values the coroutine will produce. Here’s how we can reimagine our previous example using the produce() function:

    val chan = produce {
        (1..10).forEach {
            send(it)
        }
    }
    launch {
        for (c in chan) {
            println(c)
        }
    }

Within the produce() block, the send() function is automatically available, making it easy to push values to the channel. You also have the option to use the consumeEach() function in place of the for-each loop to consume the values:

    launch {
        chan.consumeEach {
            println(it)
        }
    }

With this setup, we’ve seen how to associate a coroutine with a channel for effective communication. Let’s now move on to another example demonstrating this concept.


Actors
Just like produce(), the actor() function in Kotlin is another way to create a coroutine that’s tied to a channel. The key difference is that, with actor(), the channel goes into the coroutine, not out of it. Here’s a quick example to illustrate:

    val actor = actor<Int> {
        channel.consumeEach {
            println(it)
        }
    }
    (1..10).forEach {
        actor.send(it)
    }

In this setup, our main function generates values, while the actor coroutine consumes them via the channel. This is conceptually similar to our initial example, but here, the channel and the coroutine are encapsulated within a single entity, simplifying the architecture. If you’ve had experience with languages like Scala that also use the actor model, you might notice some differences. For instance, in some other implementations, actors may have both incoming and outgoing channels, commonly referred to as mailboxes. However, in Kotlin’s actor model, there’s only an inbound channel, effectively serving as the actor’s mailbox. You may notice that actor() is marked with @ObsoleteCoroutinesApi. This is due to a long-running discussion regarding the lack of robustness of this solution. If you’re curious, you can read more about the reasoning here: https://github.com/Kotlin/kotlinx.coroutines/issues/87.

Buffered channels
In all our prior examples, we’ve actually been using the unbuffered variant of channels, whether we created those channels explicitly or implicitly. An unbuffered (or rendezvous) channel has a capacity of zero: every send() suspends until a receiver is ready to take the value. To clarify what this means, consider a slightly modified version of an earlier example:

    val actor = actor<Long> {
        var prev = 0L
        channel.consumeEach {
            println(it - prev)
            prev = it
            delay(100)
        }
    }

In this snippet, we’ve got an actor object that takes in timestamps and outputs the time difference between consecutive timestamps. We’ve also added a short delay before processing the next value. Instead of sending a series of numbers, we now send the current timestamp to this actor:

    repeat(10) {
        actor.send(System.currentTimeMillis())
    }
    actor.close().also {
        println("Done sending")
    }

Examining the code’s output, you’ll see something like:

    > ...
    > 101
    > 103
    > 101
    > Done sending

Here, our producer is paused until the channel is ready to receive another value, allowing the actor to apply backpressure on the producer. Now, let’s tweak how we initialize our actor:

    val actor = actor<Long>(capacity = 10) {
        ...
    }

By default, every channel has a capacity of zero, meaning it can’t hold additional values until the existing one is consumed. Run the modified code, and you’ll notice a different output:

    > Done sending
    > ...
    > 0
    > 0


With buffering enabled, the producer no longer waits for the consumer; it sends messages as quickly as it can, while the actor continues to process them at its own speed. Similarly, you could set a capacity on the producer channel:

    val chan = produce(capacity = 10) {
        (1..10).forEach {
            send(it)
        }
    }

And on the raw channel as well:

    val chan = Channel<Int>(10)
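Besides an explicit numeric capacity, the kotlinx.coroutines library also defines a few symbolic capacities; a quick reference sketch (see the Channel documentation for the full set):

```kotlin
import kotlinx.coroutines.channels.Channel

fun main() {
    val rendezvous = Channel<Int>(Channel.RENDEZVOUS) // capacity 0, the default
    val buffered = Channel<Int>(Channel.BUFFERED)     // a default-sized buffer (64 unless overridden)
    val conflated = Channel<Int>(Channel.CONFLATED)   // keeps only the most recent value
    val unlimited = Channel<Int>(Channel.UNLIMITED)   // sender never suspends (watch memory!)
    listOf(rendezvous, buffered, conflated, unlimited).forEach { it.close() }
}
```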

Buffered channels offer a powerful way to decouple producers from consumers. However, use them judiciously as increasing the channel’s capacity also increases its memory footprint. Finally, let’s note that channels are a relatively low-level tool for managing concurrency. We’ll next explore another type of stream that offers a higher level of abstraction.

Flows
A Flow in Kotlin is a cold, asynchronous stream that implements the Observable design pattern, which we explored in Chapter 4, Getting Familiar with Behavioral Patterns. To refresh your memory, the Observable design pattern typically provides two methods: subscribe() for consumers to subscribe to messages, and publish() to broadcast a new message to all subscribers. In the case of a Kotlin Flow, these methods are named collect() and emit() respectively. You can instantiate a new flow using the flow() builder function:

    val numbersFlow: Flow<Int> = flow {
        ...
    }

Within the flow constructor, you can utilize emit() to send new values to all subscribers. For instance, let’s create a flow that emits the numbers from 0 to 10:

    flow {
        (0..10).forEach {
            println("Sending $it")
            emit(it)
        }
    }

To subscribe to a flow, you use the collect() method on the flow object:

    runBlocking {
        numbersFlow.collect { number ->
            println("Listener received $number")
        }
    }

IMPORTANT NOTE: It is necessary to mention that collect() requires a suspend context: either a suspend function or a runBlocking block. Those will be omitted in some examples, but you can always refer to the full examples available on GitHub.

Executing this code shows that the listener prints each received number. Unlike some reactive frameworks, you don’t need special syntax to signal an exception to the listener. You can just use the regular throw expression:

    val numbersFlow: Flow<Int> = flow {
        println("New subscriber!")
        (1..10).forEach {
            println("Sending $it")
            emit(it)
            if (it == 9) {
                throw RuntimeException()
            }
        }
    }

From the subscriber’s perspective, handling exceptions involves enclosing the collect() method in a try/catch block:

    try {
        numbersFlow.collect { number ->
            println("Listener received $number")
        }
    } catch (e: Exception) {
        println("Got an error")
    }

In Kotlin, flows are designed to be suspendable, meaning they can pause their execution and release the thread for other tasks, thus avoiding blocking. This makes them a perfect fit for asynchronous operations like network requests, file IO, or computation-heavy tasks. However, it’s important to note that flows are not inherently concurrent. While they can be suspended to avoid blocking, they don’t run on multiple threads by default. For concurrency, you would typically use constructs like Kotlin’s coroutines in conjunction with flows. The flow itself will only run its upstream operations on another dispatcher if explicitly instructed to do so, using operators like flowOn(). Let’s see how to square ten numbers with the mapping running on a background dispatcher:

    runBlocking {
        val moreNumbersFlow = (1..10).asFlow()
        moreNumbersFlow.map {
            println("Mapping on ${Thread.currentThread().name}")
            it * it
        }.flowOn(Dispatchers.Default).collect {
            println("Got $it on ${Thread.currentThread().name}")
        }
    }

This will print:

    > Mapping on DefaultDispatcher-worker-1
    > ...
    > Mapping on DefaultDispatcher-worker-1
    > Got 1 on main
    > Got 4 on main
    > Got 9 on main
    > Got 16 on main
    > ...

Using flowOn, we can, for example, make some of the operations concurrent, while the collection itself happens on a single thread.


They also offer backpressure support, which is seamlessly managed behind the scenes. To see this in action, let’s spawn multiple subscribers:

    (1..4).forEach { coroutineId ->
        delay(5000)
        launch(Dispatchers.Default) {
            numbersFlow.collect { number ->
                delay(1000)
                println("Coroutine $coroutineId received $number")
            }
        }
    }

Each subscriber operates within its own coroutine, with a five-second gap between subsequent subscriptions, allowing for concurrent execution. Now, let’s take a look at a part of the output:

    > ...
    > Sending 1
    > Coroutine 1 received 5
    > Sending 6
    > Coroutine 2 received 1
    > Sending 2
    > Coroutine 1 received 6
    > ...

Analyzing the output reveals two key insights:
1. Flows are cold streams: For every new subscriber, the flow starts from the beginning. In our scenario, each subscriber gets all numbers, starting from 1.
2. Flows handle backpressure: The subsequent value isn’t emitted until the prior one has been received by each subscriber. This is analogous to the behavior of unbuffered channels, contrasting with buffered channels where the producer can outpace the consumer.
Next, we’ll explore how to modify these two inherent properties of flows, if needed.


Buffering flows
In situations where you have ample memory and don’t want to apply immediate backpressure on the producer, you can instruct the consumer to buffer the flow using the buffer() function:

    numbersFlow.buffer().collect { number ->
        delay(1000)
        println("Coroutine $coroutineId received $number")
    }

Examining the output after adding buffering reveals a significant change:

    > ...
    > Sending 8
    > Sending 9
    > Sending 10
    > Coroutine 1 received 1
    > Coroutine 1 received 2
    > ...

With buffering enabled, the flow emits values without waiting for the consumer to catch up, at least until the buffer is full. Once the buffer fills up, the consumer still has the flexibility to collect the values at its own speed. This behavior is akin to that of buffered channels, and indeed, the underlying implementation utilizes a channel. Buffering is particularly useful when each message takes a substantial amount of time to process. Consider the scenario of uploading images from a mobile device. The upload time will vary depending on the image size. Freezing the user interface until each image is uploaded not only results in a poor user experience but also contradicts reactive design principles. Instead, you can allocate a buffer that fits into the available memory and proceed with the image uploads at a pace that doesn’t block the user interface. The UI would only freeze when the buffer is filled to capacity with pending tasks.

Flow exceptions and error handling
Exception handling is a critical part of working with Kotlin Flows, especially because errors can propagate through the reactive chain and disrupt the entire stream of data. Fortunately, Kotlin Flows provide a set of functions to handle exceptions gracefully and even recover from them.


Catching exceptions
The catch() function can be used to catch exceptions and take appropriate action. It allows you to log the exception or emit fallback values:

    flow {
        var i = 3
        repeat(5) {
            emit(10 / i--)
        }
    }
        .catch { e -> println("Caught exception: $e") }
        .collect { value -> println(value) }

Here’s the output:

    > 3
    > 5
    > 10
    > Caught exception: java.lang.ArithmeticException: / by zero

Here, the flow stops once the division by zero happens, but the program doesn’t crash.
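Since the catch() block runs in a FlowCollector context, it can do more than log: it can also emit a fallback value in place of the failed emission. A minimal sketch:

```kotlin
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

fun main() = runBlocking {
    flow {
        emit(1)
        throw RuntimeException("boom")
    }.catch {
        emit(-1) // substitute a sentinel value for the failed emission
    }.collect { println(it) } // prints 1, then -1
}
```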

Handling completion
The onCompletion() function is called when the flow collection completes, either successfully or due to an exception. You can use this to perform some cleanup or logging:

    runBlocking {
        flow {
            emit(1)
            emit(2)
        }
            .onCompletion { cause ->
                if (cause != null) {
                    println("Flow completed with exception: $cause")
                } else {
                    println("Flow completed successfully")
                }
            }
            .collect {
                println(it)
            }
    }


This outputs:

    > 1
    > 2
    > Flow completed successfully

Retrying
To retry the collection of a flow in the event of an error, you can utilize the retry() function. First, let’s define a function that has a 50% chance of throwing an exception each time it is called:

    fun doSomethingRisky(): Int {
        val randomNumber = Random.nextInt(10)
        if (randomNumber > 4) {
            throw RuntimeException(randomNumber.toString())
        }
        return randomNumber
    }

This flow will attempt collection again whenever an exception is thrown:

    flow {
        repeat(3) {
            emit(doSomethingRisky()) // This might throw an exception
        }
    }
        // Number of retry attempts
        .retry(10) { e: Throwable ->
            // Here you can check what type of exception you received
            println("Got $e, retrying")
            true
        }
        .collect {
            println(it)
        }

This code will terminate either after three consecutive successful emissions or, once the 10 retry attempts are exhausted, by rethrowing the last exception.


Optional retrying
If you need different numbers of retries for various types of exceptions, you can use the retryWhen() function. This function provides more nuanced control, allowing you to decide whether to retry based on the type of thrown exception and the number of attempts already made. First, let’s define two classes representing the 4XX and 5XX types of errors that might be returned by a remote service, along with a function that randomly throws these errors:

    class Http5XX(message: String) : Throwable(message)
    class Http4XX(message: String) : Throwable(message)

    fun doSomethingHttpRisky(): Int {
        val randomNumber = Random.nextInt(10)
        if (randomNumber < 2) {
            throw Http5XX(randomNumber.toString())
        } else if (randomNumber < 5) {
            throw Http4XX(randomNumber.toString())
        }
        return randomNumber
    }

Now, let’s implement custom logic for determining the number of retries for each error type:

    flow {
        repeat(3) {
            emit(doSomethingHttpRisky()) // This might throw an exception
        }
    }.retryWhen { e: Throwable, attempts: Long ->
        println("Got $e, retrying")
        when {
            (e is Http5XX) && attempts > 10 -> false
            (e is Http4XX) && attempts > 3 -> false
            else -> true
        }
    }.collect {
        println(it)
    }

You can combine multiple error-handling functions to build robust error-handling mechanisms. Understanding how to handle exceptions properly in Kotlin Flows is crucial for building reactive systems that are resilient to failure. With the right functions, you can recover gracefully from errors and continue processing.
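As a sketch of how these functions compose (doSomethingRisky() stands in for the flaky function defined earlier), retries can be layered with a final catch() and an onCompletion() for cleanup:

```kotlin
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.flow.onCompletion
import kotlinx.coroutines.flow.retry
import kotlinx.coroutines.runBlocking
import kotlin.random.Random

fun doSomethingRisky(): Int {
    val randomNumber = Random.nextInt(10)
    if (randomNumber > 4) throw RuntimeException(randomNumber.toString())
    return randomNumber
}

fun main() = runBlocking {
    flow { repeat(3) { emit(doSomethingRisky()) } }
        .retry(10)                                              // first line of defense
        .catch { e -> println("Giving up after retries: $e") }  // last resort
        .onCompletion { println("Done, successfully or not") }  // always runs
        .collect { println(it) }
}
```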


Flow sharing
In Kotlin, flow sharing is an important concept for optimizing resource usage and coordinating multiple consumers that want to read from the same flow of data. By default, flows in Kotlin are “cold,” meaning each subscriber initiates its own separate collection of the flow, causing the flow to be collected multiple times from the beginning. However, there are cases where you might want multiple collectors to share a single subscription to a flow. This is called “hot” sharing and can be achieved in several ways.

shareIn
The shareIn() function is the recommended way to share flows in Kotlin as of version 1.6. This function converts a cold flow into a hot flow, allowing multiple subscribers to share emitted values. It runs the upstream flow in a separate coroutine and rebroadcasts its emissions to downstream subscribers. Let’s consider an example. We have a flow of numbers that logs the number it’s about to emit and the thread on which the emission is occurring:

    val originalFlow = flowOf(1, 2, 3, 4, 5)
        .onEach {
            println("Sending $it from ${Thread.currentThread().name}")
        }

Next, we create a shared flow that runs on a custom dispatcher. This flow starts emitting when there’s at least one subscriber and preserves the two latest messages:

    val sharedFlow = originalFlow
        .shareIn(
            scope = CoroutineScope(newFixedThreadPoolContext(4, "my dispatcher")),
            started = SharingStarted.Lazily,
            replay = 2
        )

The scope is the coroutine scope where the sharing coroutine will be launched. The SharingStarted strategy controls when sharing should start. For example, SharingStarted.Lazily starts the flow when the first subscriber appears, illustrating the Strategy pattern in real-life applications. The replay parameter specifies the number of the latest values to replay to new subscribers. Now, let’s launch 5 coroutines that subscribe to this flow:

    repeat(5) { id ->
        launch(Dispatchers.Default) {
            sharedFlow.map { value ->
                println("Coroutine $id got $value on ${Thread.currentThread().name}")
            }.collect()
        }
        delay(100L)
    }

We introduce a slight delay between coroutines. Let’s now examine the output:

    > Sending 1 from my dispatcher-1
    > Sending 2 from my dispatcher-1
    > Sending 3 from my dispatcher-1
    > Sending 4 from my dispatcher-1
    > Sending 5 from my dispatcher-1
    > Coroutine 0 got 1 on DefaultDispatcher-worker-1
    > Coroutine 0 got 2 on DefaultDispatcher-worker-1
    > Coroutine 0 got 3 on DefaultDispatcher-worker-1
    > Coroutine 0 got 4 on DefaultDispatcher-worker-1
    > Coroutine 0 got 5 on DefaultDispatcher-worker-1
    > Coroutine 1 got 4 on DefaultDispatcher-worker-1
    > Coroutine 1 got 5 on DefaultDispatcher-worker-1
    > Coroutine 2 got 4 on DefaultDispatcher-worker-1
    > Coroutine 2 got 5 on DefaultDispatcher-worker-1
    > ...

Notice that the first coroutine receives all the messages, while subsequent coroutines only receive the last two. This is because our flow starts lazily, with the emission beginning at the first subscription. The delay causes subsequent coroutines to miss the first few messages. Also, note that while we emit from a custom dispatcher, all messages are received on the dispatcher where the listeners are running. In a real-world scenario, you might emit from the IO dispatcher when fetching data from a remote service, while the consumers are running on the Default dispatcher.

stateIn

The shareIn() function is an excellent example of the Observer pattern, allowing multiple subscribers to listen to a stream of events. However, for scenarios like UI changes, you might only be interested in the latest event. If there have been no events yet, you may need to display something like a "Loading" message. This is where stateIn becomes useful. Let's start by creating a cold flow and then convert it into a hot flow using the stateIn operator:

val originalFlow = flowOf(1, 2, 3, 4, 5)
    .onEach { println("Sending $it from ${Thread.currentThread().name}") }

val stateFlow = originalFlow.stateIn(
    scope = CoroutineScope(Dispatchers.Default),
    started = SharingStarted.WhileSubscribed(),
    initialValue = 0
)

Next, we'll launch several coroutines and subscribe to this flow:

repeat(5) { id ->
    launch(Dispatchers.Default) {
        stateFlow.map { value ->
            println("Coroutine $id got $value on ${Thread.currentThread().name}")
        }.collect()
    }
    delay(100L)
}

Now, let's look at the output:

> Coroutine 0 got 0 on DefaultDispatcher-worker-2
> Sending 1 from DefaultDispatcher-worker-1
> Sending 2 from DefaultDispatcher-worker-1
> Sending 3 from DefaultDispatcher-worker-1
> Sending 4 from DefaultDispatcher-worker-1
> Sending 5 from DefaultDispatcher-worker-1
> Coroutine 0 got 5 on DefaultDispatcher-worker-5
> Coroutine 1 got 5 on DefaultDispatcher-worker-4
> Coroutine 2 got 5 on DefaultDispatcher-worker-3
> Coroutine 3 got 5 on DefaultDispatcher-worker-2
> Coroutine 4 got 5 on DefaultDispatcher-worker-3


Notice how one coroutine received a result even before the flow started emitting messages. This is the initialValue we provided. Due to the slight delay between launching other coroutines, all of them only receive the latest state in that flow. In summary, shareIn is ideal for broadcasting events to multiple subscribers as they occur, whereas stateIn is used for maintaining and observing the latest state. The choice between the two depends on whether you need to observe every emitted value or just the most recent one.

Cancellation

Flow cancellation is an important aspect of resource management and control flow in Kotlin. Since flows are tightly integrated with Kotlin coroutines, they inherit the cooperative nature of coroutines, allowing you to cancel a flow when you no longer need to collect from it. Flows are generally collected within a coroutine scope. When that coroutine scope is canceled, the flow collection will also be terminated:

val scope = CoroutineScope(Dispatchers.Default)

val flow = flow {
    for (i in 1..10) {
        emit(i)
        delay(100)
    }
}

scope.launch {
    flow.collect { value ->
        println(value)
    }
}

// Cancel the scope, which will also cancel the flow collection
scope.cancel()

In more complex flows, you can check for cancellation by invoking yield() or suspending methods at different stages of your flow. This will throw a CancellationException if the flow is canceled.
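As a minimal sketch of this idea (the function names are ours, not from the book), a CPU-bound flow can call ensureActive() on its context — in addition to the check that emit() itself performs — so that cancelling the collecting coroutine stops the loop promptly:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// An infinite flow; the explicit ensureActive() call keeps the busy loop cooperative.
fun busyFlow(): Flow<Int> = flow {
    var i = 0
    while (true) {
        currentCoroutineContext().ensureActive() // throws CancellationException once cancelled
        emit(i++)
    }
}

// Collect until we have three values, then cancel the collecting coroutine.
fun collectThree(): List<Int> = runBlocking {
    val seen = mutableListOf<Int>()
    val job = launch {
        busyFlow().collect { value ->
            seen += value
            if (seen.size == 3) cancel() // cancels this coroutine; the flow unwinds
        }
    }
    job.join()
    seen
}

fun main() {
    println(collectThree()) // [0, 1, 2]
}
```

Without a suspension point or an explicit check inside the loop, the cancellation would only surface at the next emit() call.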


Some functions that are available on flows, like take(), first(), and single(), inherently cancel the flow once they are satisfied. For instance, take(3) will cancel the flow collection after collecting 3 elements:

val flow = flowOf(1, 2, 3, 4, 5).take(3)
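To make this concrete, here is a small sketch (names ours) showing that take() safely terminates even an infinite flow, because it cancels the upstream once it has enough elements:

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// An infinite flow of natural numbers.
fun naturals(): Flow<Int> = flow {
    var i = 1
    while (true) emit(i++)
}

// take(3) cancels the upstream flow after three elements;
// first() would likewise cancel it after a single one.
fun firstThree(): List<Int> = runBlocking { naturals().take(3).toList() }

fun main() {
    println(firstThree()) // [1, 2, 3]
}
```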

Cancellation is propagated upstream. When a downstream collector cancels the flow, the upstream flow also gets canceled, thus freeing up resources and preventing any leaks:

val flow = flow {
    emit(doSomeTask()) // This will be cancelled if downstream is cancelled
}.map {
    transform(it) // This will also be cancelled
}

Understanding flow cancellation is vital for writing robust and resource-efficient code, especially in long-lived or resource-intensive flows. Since flow cancellation is deeply integrated with Kotlin’s coroutine system, it is both flexible and powerful.

Flow builders

Flow builders are factory methods that allow you to create flows in Kotlin. They define how data is emitted into the flow and can be customized for various data sources or specific use cases. We already discussed the most commonly used flow builder: flow()

val myFlow: Flow<Int> = flow {
    for (i in 1..3) {
        emit(i)
    }
}

This is the most versatile builder, where you can emit multiple values, apply backpressure, and incorporate complex logic. For flows that emit a fixed set of values, you can use flowOf(). It's similar to listOf() for creating lists:

val simpleFlow: Flow<Int> = flowOf(1, 2, 3)


If you already have a collection or a sequence and want to convert it into a flow, there's the asFlow() adapter:

val array = arrayOf(1, 2, 3)
val flowFromArray = array.asFlow()

val list = listOf(4, 5, 6)
val flowFromList = list.asFlow()

channelFlow() allows you to create a flow from a channel. You can send values into the channel in different ways, possibly from different coroutines. It gives you more control over the flow emission:

val channelBasedFlow = channelFlow {
    send(1)
    send(2)
    send(3)
}

callbackFlow() is similar to channelFlow(), but designed for adapting callback-based APIs into flows. Inside this builder, you can register callbacks which then emit values into the flow:

val callbackFlowExample = callbackFlow {
    val callback = object : MyCallback {
        override fun onDataReceived(data: Int) {
            trySend(data)
        }
    }
    registerCallback(callback)
    awaitClose { unregisterCallback(callback) }
}

This may be useful if you're adapting the flow logic to a legacy Android application that uses callbacks. Finally, emptyFlow() creates an empty flow that emits no values:

val empty: Flow<Int> = emptyFlow()

This is most useful when you want to return early from your function, or while writing tests.
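As a sketch of the early-return use case (the search function and its result strings are hypothetical):

```kotlin
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Hypothetical lookup that bails out early with an empty flow for a blank query.
fun search(query: String): Flow<String> {
    if (query.isBlank()) return emptyFlow()
    return flowOf("$query-result-1", "$query-result-2")
}

fun main() = runBlocking {
    println(search("").toList())       // []
    println(search("kotlin").toList())
}
```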


These flow builders offer you the flexibility to create flows according to the needs of your application, whether you’re emitting from collections, adapting from callbacks, or dealing with channels.

Conflating flows

Suppose we have a Kotlin Flow that simulates changes in stock prices, updating ten times a second. The flow simply increments a number by 1 at each tick as a mock representation of stock prices:

val stock: Flow<Int> = flow {
    var i = 0
    while (true) {
        emit(++i)
        delay(100)
    }
}

However, our UI doesn't need to update as frequently; updating once a second is sufficient. Using collect() with a delay of 1000 milliseconds, like in previous examples, puts us behind the rate at which the flow is emitting:

var seconds = 0
stock.collect { number ->
    delay(1000)
    seconds++
    println("$seconds seconds -> received $number")
}

This approach would yield output like this:

> 1 seconds -> received 1
> 2 seconds -> received 2
> 3 seconds -> received 3
> ...

This output shows that we're applying backpressure to the flow, slowing down its emission rate. Buffering could be an alternative, but only for as long as the buffer isn't full; once it overflows, we'd end up indiscriminately discarding most of the emitted values to catch up with the UI refresh rate.


A more efficient solution is to use conflate(). This ensures that the flow retains only the most recent value, discarding older ones:

stock.conflate().collect { number ->
    delay(1000)
    seconds++
    println("$seconds seconds -> received $number")
}

The expected output would now look like this:

> 4 seconds -> received 30
> 5 seconds -> received 40
> 6 seconds -> received 49
> ...

This output indicates that the UI updates with the most recent value, and on average, our counter reflects ten updates per second. Thus, the flow remains unsuspended, delivering only the latest emitted value to the subscriber.

Rate-limiting

Rate-limiting in Kotlin Flows is a mechanism to control the frequency of emitted items from a flow. It is especially useful in scenarios where you have to control the rate of events being processed to avoid overwhelming the downstream components, such as UI updates or API calls.

The debounce() function waits for a specified amount of time after the last emission before letting an item through. If another item arrives within the debounce time, the previous item is discarded:

val debouncedFlow = stock.debounce(300L) // 300 milliseconds

This is useful, for example, when you're handling user input and you want to perform some operation only after the user has stopped typing for a given time.

The sample() function emits the most recent item emitted by the original flow within a periodic time interval:

val sampledFlow = stock.sample(1000L) // 1 second

It is useful when you’re not interested in all the emitted items but want to sample the flow at a regular interval to get a snapshot of its latest state.
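Here is a sketch of the typing use case for debounce() (the keystroke flow and its timings are ours; note that debounce() is still marked @FlowPreview in kotlinx.coroutines):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.*
import kotlinx.coroutines.runBlocking

// Simulated keystrokes: three quick edits, a pause, then one final edit.
fun keystrokes(): Flow<String> = flow {
    emit("K")
    delay(50)
    emit("Ko")
    delay(50)
    emit("Kot")
    delay(1500) // the user stops typing
    emit("Kotlin") // the last value always passes once the flow completes
}

// Only values followed by at least 500 ms of silence get through.
fun debounced(): List<String> = runBlocking {
    keystrokes().debounce(500).toList()
}

fun main() {
    println(debounced()) // [Kot, Kotlin]
}
```

"K" and "Ko" are discarded because the next keystroke arrives within the debounce window; "Kot" survives the pause, and "Kotlin" is emitted because the flow completes right after it.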


distinctUntilChanged() suppresses duplicate consecutive items emitted by the flow. For example, this code:

runBlocking {
    flowOf(1, 1, 2, 2, 2, 3, 4, 4, 3, 5)
        .distinctUntilChanged()
        .onEach { println(it) }.collect()
}

would output only the following values:

> 1
> 2
> 3
> 4
> 3
> 5

This is useful if you need to deduplicate events that may arrive multiple times.

Combining flows

The Kotlin Flow API comes with several advanced functions that allow you to manipulate and transform your reactive streams in more complex ways. Below are some of the advanced Flow functions that you might find useful.

flatMapConcat() processes each inner Flow one at a time, in order, waiting for each to complete before moving on to the next:

import kotlinx.coroutines.flow.*
...

runBlocking(Dispatchers.Default) {
    println("flatMapConcat")
    val flowA = ('A'..'B').map { it.toString() }.asFlow()
    val flowB = ('a'..'b').map { it.toString() }.asFlow()
    flowA.flatMapConcat { i ->
        flowB.map { j -> i + j }
    }.onEach { println(it) }.collect()
}


The above results in:

> Aa
> Ab
> Ba
> Bb

This is similar to a nested loop, just written in a reactive manner.

flatMapMerge() processes all inner Flows concurrently, merging their values into a single Flow. On flows with a small number of elements, like in the previous example, or when using a single-threaded dispatcher, it may look like this function works exactly the same as the previous one. So, let's instead take a flow of all uppercase letters and a flow of all lowercase letters, and make sure that we run them with a multi-threaded dispatcher:

runBlocking(Dispatchers.Default) {
    val flowA = ('A'..'Z').map { it.toString() }.asFlow()
    val flowB = ('a'..'z').map { it.toString() }.asFlow()
    flowA.flatMapMerge { i ->
        flowB.map { j -> i + j }
    }.onEach { println(it) }.collect()
}

If you run this example multiple times, you'll notice that the output is different every time, for example:

> Mv
> Mw
> Mx
> My
> Mz

This is useful when you need to process large amounts of data and you don't care about the order in which that data is processed.

Finally, flatMapLatest() cancels the previous inner Flow and starts collecting the new one whenever a new inner Flow is emitted. Demonstrating this with an example is a bit more involved, but the idea behind this operator is nevertheless important to understand.


Let's imagine we have a search API function. For simplicity, our database will be represented by a simple list and will contain only a few values:

suspend fun searchApi(searchTerm: String): Flow<String> {
    val db = listOf(
        "en.wikipedia.org/wiki/K",
        "www.merriam-webster.com/dictionary/KO",
        "dictionary.cambridge.org/dictionary/english/ko",
        "kotlinlang.org"
    )
    delay(500)
    return db.filter { searchTerm.lowercase() in it }.asFlow()
}

We simulate the slowness of the actual API call by using the delay() function. Now, let's assume that we also have some UI that lets our user type in search criteria. Every time the value is changed, we emit a new value:

val userSearchInputFlow = flow {
    delay(100)
    emit("K")
    delay(50)
    emit("Ko")
    delay(150)
    emit("Kot")
    ...
}

Does it still make sense to search for "Ko" once the user has changed the input to "Kot"? Most definitely not. You'd like to discard the search for "Ko" and start searching for results for "Kot" instead. And that's what the following code does:

userSearchInputFlow.flatMapLatest { searchTerm ->
    searchApi(searchTerm)
}.onEach { println(it) }.collect()

This code will output a single result that matches the final criteria: > kotlinlang.org


Now try replacing flatMapLatest() with the flatMapConcat() function we discussed before. The output you get will contain values that are no longer relevant:

> en.wikipedia.org/wiki/K
> www.merriam-webster.com/dictionary/KO
> dictionary.cambridge.org/dictionary/english/ko
> kotlinlang.org
> ...

And that's exactly what flatMapLatest() prevents.

Finally, combine(), as the name suggests, combines the latest values of two flows using a given transform function. If you want to combine more than two flows, you can apply combine() multiple times:

runBlocking(Dispatchers.Default) {
    val flowA = ('A'..'Z').map { it.toString() }.asFlow()
    val flowB = ('a'..'z').map { it.toString() }.asFlow()
    val flowC = (1..100).map { it.toString() }.asFlow()
    val flowAB = combine(flowA, flowB) { a, b -> a + b }
    combine(flowAB, flowC) { a, b ->
        println(a + b)
    }.collect()
}

The output is not shown here, because it will be slightly different every time, depending on how the flows interleave.

This function is most useful when you have multiple sources emitting events. An example is a dashboard that tracks multiple metrics. Each metric is a single flow, and every time any of them changes, you want to refresh the UI to show the latest values for all the metrics.

These advanced functions offer robust capabilities for complex data transformations, error handling, and more. By mastering them, you can write more effective Kotlin code for your reactive systems.

Summary This chapter was dedicated to practicing functional programming with reactive principles and learning the building blocks of functional programming in Kotlin. We also learned about the main benefits of reactive systems. For example, according to the Reactive Manifesto, such systems should be responsive, resilient, elastic, and driven by messaging. You should now have a solid understanding of how to manipulate data, filter collections, and locate elements within collections that fulfil specific conditions.


We also examined the key distinctions between cold and hot streams. In a cold stream like a Kotlin Flow, data is emitted only when a subscription is active, and typically every new subscriber receives all the events from the beginning. In contrast, a hot stream like a channel is in continuous operation, emitting data regardless of whether there are active subscribers or not. A new subscriber to a hot stream only receives events emitted after their subscription becomes active. A crucial concept we covered is backpressure—how a consumer that can’t keep up with the data rate can influence the producer. Various strategies, like suspending the producer, buffering events, or conflating the stream, can be employed to handle such scenarios. In the upcoming chapter, we’ll explore the topic of concurrent design patterns, paving the way for creating scalable, maintainable, and extensible systems. We’ll employ coroutines and reactive streams as foundational components to build such systems.

Questions

1. What is the difference between higher-order functions on collections and on concurrent data structures?
2. What is the difference between cold and hot streams of data?
3. When should a conflated channel or flow be used?

Learn more on Discord Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

8

Designing for Concurrency

Concurrent design patterns enable us to handle multiple tasks simultaneously while effectively organizing their life cycles. By leveraging these patterns, you can sidestep issues like resource leaks and deadlocks.

In this chapter, we're going to explore concurrent design patterns in Kotlin. We won't implement all of them, as some are quite complex and involve numerous edge cases that are beyond the scope of this book. Instead, we'll discuss some common constructs that you'll encounter frequently while writing concurrent code in Kotlin and examine the design patterns they represent. Throughout, we'll leverage essential components we've already covered, such as coroutines, channels, and functional programming concepts.

The topics we'll cover in this chapter include:

• Deferred Value
• Barrier
• Scheduler
• Pipeline
• Fan-Out
• Fan-In
• Racing
• Mutex
• Sidekick


By the end of this chapter, you’ll possess the know-how to efficiently manage asynchronous values, orchestrate the activities of various coroutines, and both distribute and consolidate tasks. Additionally, you’ll have the toolkit to troubleshoot any concurrency-related issues you may encounter.

Technical requirements

There are no additional requirements compared to the previous chapter. You can find the source code used in this chapter on GitHub at the following location: https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter08

Deferred Value

The Deferred Value design pattern aims to provide a reference to the outcome of an asynchronous operation. You'll find similar implementations in Java and Scala through Futures, and in JavaScript through Promises. We touched upon deferred values earlier in Chapter 6, Threads and Coroutines, where we noted that Kotlin's async() function returns a Deferred type, which is Kotlin's take on this pattern.

Interestingly, this Deferred type is not just an example of the Deferred Value design pattern; it's also an embodiment of the Proxy design pattern discussed in Chapter 3, Understanding Structural Patterns, as well as the State design pattern featured in Chapter 4, Getting Familiar with Behavioral Patterns. To instantiate a new container for an asynchronous outcome, you can use Kotlin's CompletableDeferred constructor like so:

val deferred = CompletableDeferred<String>()

To populate the Deferred value with a result, we use the complete() function, and if an error occurs in the process, we can use the completeExceptionally() function to pass the exception to the caller. To understand it better, let's write a function that returns an asynchronous result. Half of the time, the result will contain OK, and the other half of the time, it will contain an exception:

suspend fun valueAsync(): Deferred<String> = coroutineScope {
    val deferred = CompletableDeferred<String>()
    launch {
        delay(100)
        if (Random.nextBoolean()) {
            deferred.complete("OK")
        } else {
            deferred.completeExceptionally(
                RuntimeException()
            )
        }
    }
    deferred
}

Notice that the Deferred object is returned almost instantaneously, after which the asynchronous computation begins via launch and some latency is simulated using delay(). To retrieve the asynchronous result, you can use the await() method, as previously discussed in Chapter 6, Threads and Coroutines:

runBlocking {
    val value = valueAsync()
    println(value.await())
}

The output of that code would be either of the following:

> OK
> Exception in thread "main" java.lang.RuntimeException

It's crucial to complete your Deferred objects, using either complete() or completeExceptionally(), to avoid indefinite waiting. If you no longer need the result, you can also cancel it with cancel():

deferred.cancel()

However, there’s a caveat. Cancellation is only verified with suspending functions. This implies that if your code does not contain any suspending functions, the deferred value won’t be discarded until the code execution completes. You’ll rarely need to create your own deferred value. Usually, you would work with the one returned from the async() function. When talking about the Deferred Value design pattern, it’s beneficial to discuss how it stacks up against other mechanisms for handling asynchronous operations.
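The complete()/completeExceptionally() pair shown above is also exactly what you need to bridge a callback-based API into the coroutine world. As a sketch (the fetchUser API and its names are hypothetical, not from the book):

```kotlin
import kotlinx.coroutines.*

// A hypothetical callback-based API, as often found in legacy code.
fun fetchUser(id: Int, onResult: (String) -> Unit, onError: (Throwable) -> Unit) {
    if (id > 0) onResult("user-$id") else onError(IllegalArgumentException("bad id"))
}

// Adapting it with CompletableDeferred: complete() on success,
// completeExceptionally() on failure.
fun fetchUserAsync(id: Int): Deferred<String> {
    val deferred = CompletableDeferred<String>()
    fetchUser(id,
        onResult = { deferred.complete(it) },
        onError = { deferred.completeExceptionally(it) })
    return deferred
}

fun main() = runBlocking {
    println(fetchUserAsync(7).await()) // user-7
}
```

Callers can now await() the result instead of nesting callbacks, and a failure surfaces as an exception at the await() call site.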


Callbacks are easy to grasp for most developers. They are supported in many languages, making them a universally understood concept. But nesting multiple callbacks can lead to unreadable code, commonly known as "callback hell," and dealing with errors in callbacks is often cumbersome. A deferred value is generally more readable and maintainable than callbacks. While a callback executes a function once the asynchronous operation is done, Deferred allows you to treat asynchronous operations almost like regular synchronous code, thus making error handling and result propagation more straightforward. Next, we'll explore how to wait for multiple asynchronous outcomes simultaneously.

Barrier

The Barrier design pattern enables us to pause and wait for multiple concurrent tasks to finish before moving on. This is particularly useful when assembling objects from diverse data sources. Consider this data class:

data class FavoriteCharacter(
    val name: String,
    val catchphrase: String,
    val picture: ByteArray = Random.nextBytes(42)
)

Imagine the catchphrase comes from one service, and the picture comes from another. You'd like to fetch both data points concurrently:

fun CoroutineScope.getCatchphraseAsync(characterName: String) = async { … }

fun CoroutineScope.getPictureAsync(characterName: String) = async { … }

The most straightforward way to fetch this data concurrently would look like this:

suspend fun fetchFavoriteCharacter(name: String) = coroutineScope {
    val catchphrase = getCatchphraseAsync(name).await()
    val picture = getPictureAsync(name).await()
    FavoriteCharacter(name, catchphrase, picture)
}

However, this code fetches the picture only after the catchphrase is done, making it unnecessarily sequential. Let's fix that:

suspend fun fetchFavoriteCharacter(name: String) = coroutineScope {
    val catchphrase = getCatchphraseAsync(name)
    val picture = getPictureAsync(name)
    FavoriteCharacter(name, catchphrase.await(), picture.await())
}

By moving the await() calls into the constructor call, we start both coroutines immediately and only wait for their completion when both results are needed. Data classes also offer the advantage of easy destructuring:

val (name, catchphrase, _) = fetchFavoriteCharacter("Inigo Montoya")
println("$name says: $catchphrase")

When the data types from different async tasks are the same, you can use a list to gather the results:

val characters: List<Deferred<FavoriteCharacter>> = listOf(
    Me.getFavoriteCharacter(),
    Taylor.getFavoriteCharacter(),
    Michael.getFavoriteCharacter()
)

For collections of Deferred elements like this, you can use awaitAll() to act as a barrier:

println(characters.awaitAll())

Exception handling is an important aspect to consider when using the Barrier design pattern, especially in concurrent environments. If any of the concurrent tasks throws an exception, you'll often want to handle it in a way that ensures all tasks can either proceed safely or are properly terminated. If any of the tasks fail, awaitAll() will throw an exception, which can be caught in a catch block, where you decide what action should be taken. See the following example:

object GrumpyCat {
    fun getFavoriteCharacter(): Deferred<FavoriteCharacter> =
        CompletableDeferred<FavoriteCharacter>().apply {
            completeExceptionally(RuntimeException("Grumpy cat likes no one"))
        }
}


If at least one Deferred in the awaitAll() call is completed with an exception, that exception is raised:

runBlocking {
    val characters: List<Deferred<FavoriteCharacter>> = listOf(
        Me.getFavoriteCharacter(),
        GrumpyCat.getFavoriteCharacter()
    )
    try {
        println(characters.awaitAll())
    } catch (e: RuntimeException) {
        println("Caught exception: ${e.message}")
    }
}

This prints:

> Caught exception: Grumpy cat likes no one

To sum up, the Barrier design pattern serves as a meeting point for several asynchronous tasks. In the next section, we’ll explore how to abstract the execution of these tasks.

Scheduler

The goal of the Scheduler design pattern is to decouple what is being run from how it's being run and to optimize the use of resources when doing so. In Kotlin, Dispatchers are an implementation of the Scheduler design pattern that decouples the coroutine (that is, the what) from the underlying thread pools (that is, the how).

We've already seen dispatchers briefly in Chapter 6, Threads and Coroutines. To remind you, coroutine builders such as launch() and async() can specify which dispatcher to use. Here's an example of how to specify it explicitly:

runBlocking {
    // This will use the Dispatcher from the parent coroutine
    launch {
        // Prints: main
        println(Thread.currentThread().name)
    }
    launch(Dispatchers.Default) {
        // Prints: DefaultDispatcher-worker-1
        println(Thread.currentThread().name)
    }
}

The default dispatcher creates as many threads in the underlying thread pool as you have CPU cores. Another dispatcher that is available to you is the IO Dispatcher:

async(Dispatchers.IO) {
    for (i in 1..1000) {
        println(Thread.currentThread().name)
        yield()
    }
}

This will output the following:

> ...
> DefaultDispatcher-worker-2
> DefaultDispatcher-worker-1
> DefaultDispatcher-worker-1
> DefaultDispatcher-worker-1
> DefaultDispatcher-worker-3
> DefaultDispatcher-worker-3
> ...

The IO Dispatcher is used for potentially long-running or blocking operations and will create up to 64 threads for that purpose. Since our example code doesn't do much, the IO Dispatcher doesn't need to create many threads, which is why you'll see only a small number of workers used in this example.

We are not limited to the dispatchers Kotlin provides out of the box. We can also define dispatchers of our own. Here is an example of creating a dispatcher that uses a dedicated thread pool of four threads based on ForkJoinPool, which is efficient for divide-and-conquer tasks:

val forkJoinPool = ForkJoinPool(4).asCoroutineDispatcher()

repeat(1000) {
    launch(forkJoinPool) {
        println(Thread.currentThread().name)
    }
}

If you create your own dispatcher, make sure that you either release it with close() or reuse it, as creating a new dispatcher and holding onto it is expensive in terms of resources.
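One way to guarantee the release is Kotlin's use {} block, since an executor-backed dispatcher is Closeable. A minimal sketch (the function name and workload are ours):

```kotlin
import kotlinx.coroutines.*
import java.util.concurrent.Executors

// A dispatcher backed by a dedicated four-thread pool;
// use {} guarantees the underlying executor is closed afterwards.
fun sumOfSquares(): Int =
    Executors.newFixedThreadPool(4).asCoroutineDispatcher().use { dispatcher ->
        runBlocking {
            (1..4).map { n -> async(dispatcher) { n * n } }.awaitAll().sum()
        }
    }

fun main() {
    println(sumOfSquares()) // 30
}
```

Forgetting to close such a dispatcher leaks its threads, which is why reusing a single long-lived instance is usually preferable in application code.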

Pipeline

The Pipeline design pattern is like having a team of experts working together to handle complex tasks. Each expert specializes in one part of the job, and they work simultaneously to get things done faster. Let's explore this idea with an example.

Remember back in Chapter 4, Getting Familiar with Behavioral Patterns, when we talked about creating an HTML page parser? Back then, we assumed we already had the HTML pages to work with. Now, let's design a process to create a never-ending stream of pages.

First, we need someone to fetch news pages from the internet every now and then. Think of this as our producer. In code, it looks like this:

fun CoroutineScope.producePages() = produce {
    fun getPages(): List<String> {
        // In reality, this would fetch pages from the web
        return listOf(
            "Cool stuff",
            "Even more stuff"
        )
    }
    val pages = getPages()
    while (this.isActive) {
        for (p in pages) {
            send(p)
        }
    }
}

We use the isActive flag to check if our coroutine is still running. This is a good practice, especially in loops that might run for a long time, as it allows us to stop them if needed.

Chapter 8

267

Since tech news doesn't change every second, we can check for updates only once in a while. In real code, this delay could be minutes or even hours.

Now, we want to turn those raw HTML strings into a structured Document Object Model (DOM). For this, we have another producer, which receives pages from the first one:

fun CoroutineScope.produceDom(pages: ReceiveChannel<String>) = produce {
    fun parseDom(page: String): Document {
        // In reality, a DOM library would parse the string into a DOM
        return Document(page)
    }
    for (p in pages) {
        send(parseDom(p))
    }
}

We use a for loop to iterate over the channel as long as it's open. This is a clean way to consume data from an asynchronous source without having to deal with complex callbacks. Our third function receives the parsed documents and extracts titles from each one:

fun CoroutineScope.produceTitles(parsedPages: ReceiveChannel<Document>) = produce {
    fun getTitles(dom: Document): List<String> {
        return dom.getElementsByTagName("h1").map {
            it.toString()
        }
    }
    for (page in parsedPages) {
        for (t in getTitles(page)) {
            send(t)
        }
    }
}

We look for first-level headers using getElementsByTagName("h1") and convert each header found into its string representation.


Now, let's put all these pieces together into a pipeline:

runBlocking {
    val pagesProducer = producePages()
    val domProducer = produceDom(pagesProducer)
    val titleProducer = produceTitles(domProducer)
    titleProducer.consumeEach {
        println(it)
    }
}

Think of this like an assembly line, where each component does its part, and the output flows seamlessly to the next step:

+--------------+    +--------------+    +--------------+
|   Producer   | => |    Parser    | => |  Extractor   |
|  (fetching)  |    |  (parsing)   |    | (extracting) |
+--------------+    +--------------+    +--------------+

If you need to stop the entire pipeline, you can simply call cancel() on the first coroutine in line. The cancellation will propagate to all the other coroutines in the pipeline. In summary, the Pipeline design pattern allows us to divide complex tasks into manageable steps, just like a team of specialists working together. Each step is a separate coroutine, making it easy to understand and test.

Fan-Out The purpose of the Fan-Out design pattern is to divide the workload among multiple concurrent processors, or workers, efficiently. To grasp this concept better, let’s revisit the previous section but consider a specific problem: what if there’s a significant disparity in the amount of work at different stages in our pipeline? For instance, fetching HTML content might take much longer than parsing it. In such cases, it makes sense to distribute the heavy lifting across multiple coroutines. In the previous example, each channel had only one coroutine reading from it. However, it’s possible for multiple coroutines to consume from a single channel, effectively sharing the workload. To simplify the problem we’re about to discuss, let’s assume we have only one coroutine producing some results:


fun CoroutineScope.generateWork() = produce {
    for (i in 1..10_000) {
        send("page$i")
    }
    close()
}

And we'll create a function that generates a new coroutine responsible for reading those results:

fun CoroutineScope.doWork(
    id: Int,
    channel: ReceiveChannel<String>
) = launch(Dispatchers.Default) {
    for (p in channel) {
        println("Worker $id processed $p")
    }
}

This function generates a coroutine that runs on the default dispatcher. Each coroutine listens to a channel and prints every message it receives to the console. Now, let's kick off our producer. Keep in mind that all the following code pieces should be wrapped in runBlocking or a suspend main function, but for simplicity, we've omitted that part:

val workChannel = generateWork()

Next, we can create multiple workers that collaborate to distribute the work among themselves by reading from the same channel:

val workers = List(10) { id ->
    doWork(id, workChannel)
}

Now, let's examine a portion of the program's output:

> ...
> Worker 4 processed page9994
> Worker 8 processed page9993
> Worker 3 processed page9992
> Worker 6 processed page9987


Note that no two workers receive the same message, and the messages are not printed in the order they were sent.

Load balancing is a critical aspect of the Fan-Out design pattern. It ensures that the workload is evenly and efficiently distributed across the available resources, preventing situations where some workers are overloaded while others remain underutilized. Kotlin channels inherently provide a level of fairness in load balancing. When multiple consumer coroutines are waiting to receive data from a channel, the channel distributes data fairly among them: each consumer gets an opportunity to receive data in a round-robin fashion. This ensures that no single consumer is starved while others receive all the data.

Channels also offer mechanisms for backpressure handling. When data is produced faster than it can be consumed, channels can suspend producers until consumers are ready, which helps prevent overloading the system.

The Fan-Out design pattern enables efficient distribution of work across a number of coroutines, threads, and CPUs. Next, we'll take a look at an associated design pattern that often complements Fan-Out.
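The backpressure behavior described above can be sketched with a rendezvous channel (function names and the trivial workload are ours, not from the book):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

// A rendezvous channel (capacity 0): send() suspends until one of the
// workers is ready to receive, so the producer can never outrun them.
fun fanOut(): List<Int> = runBlocking {
    val work = Channel<Int>()
    launch {
        for (i in 1..20) work.send(i) // suspends whenever no worker is ready
        work.close()
    }
    val results = Channel<Int>(Channel.UNLIMITED)
    List(4) {
        launch(Dispatchers.Default) {
            for (n in work) results.send(n * 10) // workers share one channel
        }
    }.joinAll()
    results.close()
    val collected = mutableListOf<Int>()
    for (r in results) collected += r
    collected.sorted()
}

fun main() {
    println(fanOut())
}
```

Every item is processed exactly once across the four workers; sorting at the end only compensates for the nondeterministic completion order.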

Fan-In

The objective of the Fan-In design pattern is to consolidate results generated by multiple workers. This pattern becomes invaluable when workers produce results that need to be gathered and managed. Unlike the Fan-Out design pattern we discussed earlier, which involves multiple coroutines reading from the same channel, Fan-In reverses the roles: multiple coroutines contribute their results by writing them to the same shared channel. Combining the Fan-Out and Fan-In design patterns lays a solid foundation for building MapReduce algorithms.

To illustrate this concept, we’ll make a slight modification to the workers used in the previous example:

```kotlin
private fun CoroutineScope.doWorkAsync(
    channel: ReceiveChannel<String>,
    resultChannel: Channel<String>
) = async(Dispatchers.Default) {
    for (p in channel) {
        resultChannel.send(p.repeat(2))
    }
}
```

Now, each worker sends the results of its computations to a common resultChannel. It’s worth noting that this pattern differs from the actor and producer builders we explored earlier. In this case, resultChannel is shared among all the workers. To collect the results from these workers, we employ the following code:

```kotlin
runBlocking {
    val workChannel = generateWork()
    val resultChannel = Channel<String>()
    val workers = List(10) {
        doWorkAsync(workChannel, resultChannel)
    }
    resultChannel.consumeEach {
        println(it)
    }
}
```

Let’s walk through what this code accomplishes. First, we create resultChannel, a channel that all our workers will utilize to share their results. Next, we supply resultChannel to each worker; in this example, we have ten workers in total. Each worker takes the message it received, repeats it twice, and then sends it on to resultChannel. Finally, in our main coroutine, we consume the results from the channel. This approach allows us to accumulate results generated by multiple concurrent workers in a centralized location. This code sample generates output similar to the following:

```
...
> page9995page9995
> page9996page9996
> page9997page9997
> page9999page9999
> page9998page9998
> page10000page10000
```
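The full Fan-Out/Fan-In round trip can be condensed into one runnable sketch. This is our own minimal version (smaller counts, illustrative names), assuming only kotlinx.coroutines; the key detail is closing the shared result channel once every worker has finished, so the consumer knows when to stop.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

fun main() = runBlocking {
    val work = produce { repeat(6) { send("page$it") } }   // Fan-Out source
    val results = Channel<String>()                        // shared Fan-In channel
    val workers = List(3) {
        launch(Dispatchers.Default) {
            for (p in work) results.send(p.repeat(2))
        }
    }
    // Close the shared result channel once every worker is done
    launch { workers.joinAll(); results.close() }
    val collected = results.toList()
    println(collected.size)
    println(collected.sorted().first())
}
```

All six doubled messages arrive on the single results channel, regardless of which worker produced them.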


To draw another analogy, the combination of the Fan-In and Fan-Out design patterns is quite similar to the MapReduce programming model, which is widely used for processing and generating large datasets distributed across clusters of computers. It’s often used in big data processing frameworks like Hadoop.

In MapReduce, the “Mapping” phase involves applying a function (the Map function) to each input data item and emitting a set of key-value pairs. In Fan-Out, this phase corresponds to the distribution of work or data among multiple workers (coroutines). Each worker can be seen as performing a “mapping” task on its portion of the data, possibly transforming or processing it.

After the “Mapping” phase in MapReduce, there’s a “Shuffling and Sorting” step where intermediate key-value pairs are grouped by key and sorted. In Fan-Out, there might not be an explicit shuffling and sorting step, but the results from the workers are collected into a common channel (the resultChannel in our example). This channel serves as a central repository for intermediate results.

Finally, in MapReduce, the “Reducing” phase involves applying a Reduce function to each group of intermediate key-value pairs to produce the final output. In Fan-In, the consolidation of results from multiple workers (coroutines) in the shared channel corresponds to the “Reducing” phase. Each worker’s contribution is combined or aggregated with the others to produce the final result.

Both MapReduce and the Fan-In/Fan-Out combination leverage parallelism for improved performance: multiple workers or tasks can run concurrently, processing different portions of the data. Scalability is a fundamental aspect of both approaches, as they can be scaled horizontally by adding more workers or nodes to handle larger datasets or workloads. MapReduce frameworks like Hadoop are also known for their built-in fault tolerance mechanisms: if a worker or node fails, the system can recover and reroute tasks to healthy nodes.
In a distributed system using the Fan-In/Fan-Out patterns, similar fault tolerance mechanisms can be implemented, ensuring that the failure of one worker doesn’t jeopardize the entire process.

Next, let’s explore another design pattern that can enhance the responsiveness of our code in specific scenarios.

Racing

The Racing design pattern is a concurrency pattern that involves running multiple tasks that produce the same type of data concurrently and selecting the result from the task that completes first, discarding the results from the other tasks.


This pattern is useful in scenarios where you want to maximize responsiveness by accepting the result from the fastest task, even if multiple tasks are competing. In Kotlin, you can implement the Racing pattern using the select function on channels. Here’s an example using two weather sources, preciseWeather and weatherToday, where we fetch weather information from both sources and accept the result from whichever responds first:

```kotlin
runBlocking {
    val winner = select {
        preciseWeather().onReceive { preciseWeatherResult ->
            preciseWeatherResult
        }
        weatherToday().onReceive { weatherTodayResult ->
            weatherTodayResult
        }
    }
    println(winner)
}
```

In this code, you have two weather producers, and you use select to wait for the first result to arrive, regardless of which source it comes from. This pattern maximizes responsiveness in applications where you can tolerate fetching redundant data from multiple sources and discarding some of it.
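The racing idea can be sketched as a self-contained program. This is our own illustration (the source names and latencies are invented), assuming kotlinx.coroutines: two producers race, select picks whichever delivers first, and the losing producer is cancelled afterward.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.produce
import kotlinx.coroutines.selects.select

fun main() = runBlocking {
    // Two hypothetical sources with very different latencies
    val fastSource = produce { delay(10); send("fast answer") }
    val slowSource = produce { delay(500); send("slow answer") }
    val winner = select<String> {
        fastSource.onReceive { it }
        slowSource.onReceive { it }
    }
    println(winner)
    coroutineContext.cancelChildren() // discard the losing task
}
```

Cancelling the children after the race is the part that is easy to forget: without it, the slow producer keeps running (and holding resources) even though its result will never be used.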

Unbiased Select

By default, when you use the select clause, it selects the first channel that becomes available. This means it can be inherently biased toward the order in which channels are declared within the select clause. If two events happen simultaneously, it will select the first one. To address this bias and make the selection unbiased, you can use the selectUnbiased function. This function randomly chooses one of the available channels if more than one channel is ready at the same time, removing the dependency on the order of channel declaration. Here’s an example using selectUnbiased to select between two movies:

```kotlin
runBlocking {
    val firstOption = fastProducer("Quick&Angry 7")
    val secondOption = fastProducer("Revengers: Penultimatum")
    delay(10)
    val movieToWatch = selectUnbiased {
        firstOption.onReceive { it }
        secondOption.onReceive { it }
    }
    println(movieToWatch)
}
```

In this code, selectUnbiased is used to choose between two movie options. Even if both movies are ready at the same time, selectUnbiased will randomly pick one, ensuring a fairer and unbiased selection when dealing with simultaneous events. The Racing design pattern and the concept of Unbiased Select are valuable tools for handling concurrency in scenarios where you need to maximize responsiveness and deal with simultaneous events or tasks. Use selectUnbiased when you want to indicate that the order of the blocks doesn’t matter.

For example, in distributed systems or microservices architectures, load balancing is crucial. The Racing pattern can be employed to distribute incoming requests or tasks to multiple instances or services; the first instance to complete the task is selected, ensuring efficient resource utilization. CDNs often employ the Racing pattern to distribute content from multiple edge servers: the nearest server that responds first to a user’s request is selected for content delivery, reducing latency. When dealing with cached data or prefetching data for an improved user experience, you can use the Racing pattern to fetch data from both the cache and the remote source simultaneously, selecting the fastest response and reducing perceived latency.

In each of these use cases, the Racing design pattern enhances responsiveness, reduces latency, and ensures that applications make the most efficient use of resources. However, it’s essential to carefully consider error handling, timeouts, and resource management to implement the pattern effectively in your specific context.
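A runnable version of the unbiased selection can be sketched as follows. This mirrors the book’s example, but since fastProducer is not shown in this excerpt, we stand it in with a plain produce builder (an assumption of ours); the whole program relies on kotlinx.coroutines.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.produce
import kotlinx.coroutines.selects.selectUnbiased

fun main() = runBlocking {
    // produce stands in for the book's fastProducer helper
    val firstOption = produce { send("Quick&Angry 7") }
    val secondOption = produce { send("Revengers: Penultimatum") }
    delay(10) // give both producers time to become ready
    val movieToWatch = selectUnbiased<String> {
        firstOption.onReceive { it }
        secondOption.onReceive { it }
    }
    println(movieToWatch)
    coroutineContext.cancelChildren() // cancel the producer that lost
}
```

Because both channels are ready by the time the selection runs, either movie can be printed; over many runs the choice is split roughly evenly, which is exactly the behavior a plain select would not give you.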

Mutex

Mutex, also known as mutual exclusion, serves as a way to safeguard a shared state that might be accessed by multiple coroutines simultaneously. Let’s kick off with the familiar scenario we all dread: the counter example. Imagine multiple concurrent tasks attempting to update the same counter:

```kotlin
var counter = 0
val jobs = List(10) {
    async(Dispatchers.Default) {
        repeat(1000) {
            counter++
        }
    }
}
jobs.awaitAll()
println(counter)
```

As you might have guessed, the result displayed is less than 10,000, which is quite embarrassing! To address this issue, we can introduce a locking mechanism that ensures only one coroutine interacts with the variable at any given time, making the operation atomic. Each coroutine tries to obtain ownership of the counter. If another coroutine is already updating it, our coroutine waits patiently and then attempts to acquire the lock again. After updating, it must release the lock to allow other coroutines to proceed:

```kotlin
runBlocking {
    var counter = 0
    val mutex = Mutex()
    val jobs = List(10) {
        async(Dispatchers.Default) {
            repeat(1000) {
                mutex.lock()
                counter++
                mutex.unlock()
            }
        }
    }
    jobs.awaitAll()
    println(counter)
}
```

Now, our example consistently prints the correct number: 10,000.


It’s important to note that Kotlin’s Mutex differs from the locks you may know from Java. In Java, calling lock() blocks the thread until the lock is acquired. In Kotlin, however, a Mutex suspends the coroutine instead, providing enhanced concurrency. It’s worth mentioning, though, that since lock() is a suspending function, it is only available within coroutine contexts. While this works well for straightforward cases, what if the code within the critical section, between lock() and unlock(), throws an exception? We would need to wrap our code in a try/finally block, which isn’t very convenient:

```kotlin
mutex.lock()
try {
    counter++
} finally {
    mutex.unlock()
}
```

However, omitting the finally block would lead to the lock never being released, potentially causing a deadlock. To address this, Kotlin introduces withLock():

```kotlin
mutex.withLock {
    counter++
}
```

Notice how much more concise and readable this syntax is compared to the previous example.
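Putting it together, the broken counter from the start of this section becomes correct once every increment goes through withLock. This is a self-contained sketch of that fix (our own arrangement of the pieces shown above), assuming kotlinx.coroutines:

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

fun main() = runBlocking {
    var counter = 0
    val mutex = Mutex()
    val jobs = List(10) {
        launch(Dispatchers.Default) {
            repeat(1000) {
                mutex.withLock { counter++ } // lock is released even if the block throws
            }
        }
    }
    jobs.joinAll()
    println(counter)
}
```

Unlike the unguarded version, this prints 10000 on every run, because only one coroutine at a time executes the increment.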

Deadlocks

While discussing the Mutex design pattern, it is also very important to understand the concept of a deadlock. A deadlock is a condition in a concurrent system where two or more processes (or coroutines) are unable to proceed because each is waiting for the other(s) to release a resource or a lock. In essence, it’s a state of perpetual waiting, where none of the involved processes can make progress.

Deadlocks occur when processes compete for exclusive access to a resource. This resource can be a Mutex (lock), a file, a database record, or any other shared entity. Processes in a deadlock scenario hold resources they’ve already acquired while waiting for additional resources to become available. Resources cannot be forcibly taken away from a process; only the process holding a resource can voluntarily release it. The result is a circular chain of dependencies, where Process A is waiting for a resource held by Process B, Process B is waiting for a resource held by Process C, and so on, eventually leading back to Process A.


Let’s consider the following example:

```kotlin
fun main() = runBlocking {
    val mutexA = Mutex()
    val mutexB = Mutex()
    val job1 = launch {
        mutexA.lock()
        delay(1000) // Simulate some work
        println("Coroutine 1: Acquired Mutex A, now attempting to acquire Mutex B")
        mutexB.lock()
        println("Coroutine 1: Acquired Mutex B")
        mutexB.unlock()
        mutexA.unlock()
    }
    val job2 = launch {
        mutexB.lock()
        delay(1000) // Simulate some work
        println("Coroutine 2: Acquired Mutex B, now attempting to acquire Mutex A")
        mutexA.lock()
        println("Coroutine 2: Acquired Mutex A")
        mutexA.unlock()
        mutexB.unlock()
    }
    job1.join()
    job2.join()
}
```

In this code, we have two coroutines, job1 and job2, each attempting to acquire the Mutexes mutexA and mutexB. However, they do so in a way that creates a circular dependency:

• job1 tries to acquire mutexA and then mutexB.
• job2 tries to acquire mutexB and then mutexA.


When you run this code, it will result in a deadlock. Both coroutines will print their attempts to acquire the Mutexes but will never proceed past the point where they’re waiting for the other coroutine to release the Mutex they need. Deadlocks are very hard to detect, so in real-world scenarios, you want to avoid such situations by carefully designing your concurrency logic to prevent circular dependencies.

One way to prevent deadlocks is to ensure a consistent order in which resources are acquired, a technique associated with resource allocation graph analysis. To resolve the deadlock introduced in the previous code, we need to make the coroutines acquire the Mutexes in a consistent order, which prevents the circular dependency. Here’s the corrected code:

```kotlin
fun main() = runBlocking {
    val mutexA = Mutex()
    val mutexB = Mutex()
    val job1 = launch {
        println("Coroutine 1: Attempting to acquire Mutex A")
        mutexA.lock()
        delay(1000) // Simulate some work
        println("Coroutine 1: Acquired Mutex A, now attempting to acquire Mutex B")
        mutexB.lock()
        println("Coroutine 1: Acquired Mutex B")
        mutexB.unlock()
        mutexA.unlock()
    }
    val job2 = launch {
        // Corrected order: acquire mutexA first, just like job1
        println("Coroutine 2: Attempting to acquire Mutex A")
        mutexA.lock()
        delay(1000) // Simulate some work
        println("Coroutine 2: Acquired Mutex A, now attempting to acquire Mutex B")
        mutexB.lock()
        println("Coroutine 2: Acquired Mutex B")
        mutexB.unlock()
        mutexA.unlock()
    }
    job1.join()
    job2.join()
}
```

In this corrected code, both job1 and job2 first attempt to acquire mutexA before trying to acquire mutexB. This ensures a consistent order of acquisition and prevents the circular dependency that led to the deadlock in the previous code. When you run this corrected code, it executes successfully without deadlocking, and both coroutines complete their tasks.

IMPORTANT NOTE: This only works this easily when both jobs use the same locks. If job2 needs mutexB and also a mutexC locked by a different job, enforcing a consistent order becomes more difficult.
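One way to generalize lock ordering is a small helper that always acquires two mutexes in a canonical order, no matter which order the caller passes them in. This is our own sketch: the helper name is invented, and using the identity hash code as the ordering key is an assumption (ties are theoretically possible), but it illustrates the idea.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Hypothetical helper: always acquire the two mutexes in the same global order,
// here approximated by identity hash code.
suspend fun <T> withBothLocks(a: Mutex, b: Mutex, block: suspend () -> T): T {
    val (first, second) =
        if (System.identityHashCode(a) <= System.identityHashCode(b)) a to b else b to a
    return first.withLock { second.withLock { block() } }
}

fun main() = runBlocking {
    val mutexA = Mutex()
    val mutexB = Mutex()
    var shared = 0
    val jobs = List(2) { id ->
        launch(Dispatchers.Default) {
            repeat(100) {
                // Callers pass the locks in opposite orders; the helper normalizes them
                if (id == 0) withBothLocks(mutexA, mutexB) { shared++ }
                else withBothLocks(mutexB, mutexA) { shared++ }
            }
        }
    }
    jobs.joinAll()
    println(shared)
}
```

Even though the two coroutines name the locks in opposite orders, the helper acquires them identically, so the program completes without deadlocking and the counter is correct.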

In conclusion, the Mutex design pattern is a crucial tool in concurrent programming, particularly in languages like Kotlin that embrace coroutines. It provides a mechanism for ensuring exclusive access to shared resources, preventing data corruption and race conditions. Mutexes help maintain the integrity of critical sections in concurrent code by allowing only one coroutine at a time to execute within a protected region. While effective in solving synchronization problems, it’s essential to use Mutexes judiciously to avoid deadlocks and ensure efficient concurrency. Additionally, Kotlin’s approach to Mutexes, which suspends coroutines instead of blocking threads, enhances the overall efficiency and safety of concurrent code. Understanding and appropriately applying Mutexes is vital for developing reliable and robust concurrent applications.
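Since deadlocks are hard to detect, one pragmatic way to surface them in tests is to wrap the joins in a timeout. This is a sketch of our own (the one-second timeout is arbitrary), assuming kotlinx.coroutines; it deliberately recreates the circular acquisition from the earlier example and then observes that the joins never complete.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex

fun main() = runBlocking {
    val mutexA = Mutex()
    val mutexB = Mutex()
    val job1 = launch { mutexA.lock(); delay(100); mutexB.lock() }
    val job2 = launch { mutexB.lock(); delay(100); mutexA.lock() }
    // If the joins don't complete within a second, we likely have a deadlock
    val completed = withTimeoutOrNull(1000) { job1.join(); job2.join() }
    println(completed == null)
    // Mutex.lock() is cancellable, so the stuck coroutines can still be cancelled
    job1.cancel()
    job2.cancel()
}
```

The timeout expiring is the signal: neither coroutine can make progress, so withTimeoutOrNull returns null. Because Mutex.lock() is a cancellable suspending function, the stuck coroutines can be cancelled cleanly, which a thread blocked on a JVM lock could not be.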


Sidekick

The Sidekick design pattern enables us to delegate some tasks from our primary worker to a secondary worker. So far, we’ve used select solely as a receiver. However, it’s also possible to use select to send items to another channel. To illustrate, let’s consider an example. First, we initialize batman as an actor coroutine that can process ten messages every second:

```kotlin
val batman = actor<String> {
    for (c in channel) {
        println("Batman is dealing with $c")
        delay(100)
    }
}
```

Next, we introduce robin, another actor coroutine, albeit a slower one, capable of processing just four messages per second:

```kotlin
val robin = actor<String> {
    for (c in channel) {
        println("Robin is dealing with $c")
        delay(250)
    }
}
```

Here, we have a superhero and his sidekick represented as two actor coroutines. The superhero, being more adept, usually takes less time to handle villains. Sometimes, however, the superhero might be overwhelmed, requiring the sidekick to pitch in. Let’s see how they both handle a list of five villains:

```kotlin
val epicFight = launch {
    for (villain in listOf("Joker", "Bane", "Penguin", "Riddler", "Killer Croc")) {
        val result = select<Pair<String, String>> {
            batman.onSend(villain) {
                "Batman" to villain
            }
            robin.onSend(villain) {
                "Robin" to villain
            }
        }
        delay(90)
        println(result)
    }
}
```

Note that the type parameter of select specifies what is returned from the block, not what is sent to the channels. Hence, we use Pair<String, String> here. Executing this code yields:

```
> Batman is dealing with Joker
> (Batman, Joker)
> Robin is dealing with Bane
> (Robin, Bane)
> Batman is dealing with Penguin
> (Batman, Penguin)
> Batman is dealing with Riddler
> (Batman, Riddler)
> Robin is dealing with Killer Croc
> (Robin, Killer Croc)
```

Leveraging a Sidekick channel serves as an effective strategy for providing fallback options. This can be particularly helpful when you need to maintain a consistent data stream but find it challenging to scale your consumer processes. The Sidekick design pattern is particularly beneficial in scenarios where tasks can be partitioned between a primary and a secondary worker, optimizing resource allocation and improving system performance.
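The fallback mechanics of onSend can be isolated in a small deterministic sketch. This is our own illustration (the channel names and capacities are invented), assuming kotlinx.coroutines: the primary channel’s buffer is already full, so select routes the new element to the fallback channel instead of suspending.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.selects.select

fun main() = runBlocking {
    val primary = Channel<Int>(capacity = 1)
    val fallback = Channel<Int>(capacity = 1)
    primary.send(0) // fill the primary channel's buffer so it can't accept more
    val handledBy = select<String> {
        primary.onSend(1) { "primary" }
        fallback.onSend(1) { "fallback" }
    }
    println(handledBy)
    primary.close()
    fallback.close()
}
```

Even though select is biased toward the primary clause, that clause cannot complete while the buffer is full, so the element goes to the sidekick; had the primary had room, it would have been chosen instead.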

Summary

In this chapter, we examined a variety of concurrency design patterns in Kotlin, with a focus on core components like coroutines, channels, and deferred values. Deferred values serve as placeholders for values that will be computed asynchronously. The Barrier design pattern synchronizes multiple asynchronous tasks, allowing them to move forward together. With the Scheduler pattern, we can separate the task logic from its runtime execution.


We also discussed the Pipeline, Fan-In, and Fan-Out patterns, which facilitate the distribution of tasks and the collection of results. The Mutex pattern is used to manage concurrent execution, ensuring tasks don’t conflict with one another. The Racing pattern is geared toward improving application responsiveness. Lastly, the Sidekick Channel pattern acts as a backup, taking on work when the primary task struggles to keep up. These patterns equip you with the tools to manage your application’s concurrency in an efficient and scalable way. In the next chapter, we’ll explore how all the functional programming concepts and most of the design patterns we’ve discussed in this chapter can be applied in practice using the Kotlin Arrow framework as an example.

Questions

1. What does it mean when we say that the select expression in Kotlin is biased?
2. When should you use a Mutex instead of a channel?
3. Which of the concurrent design patterns could help you implement a MapReduce or divide-and-conquer algorithm efficiently?

Learn more on Discord

Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

Section 3

Practical Application of Design Patterns

In this section, you’ll put your understanding of design patterns to practical use by building a real-world application, while learning essential best practices and identifying anti-patterns to avoid.

Initially, we’ll introduce a set of best practices and potential pitfalls to be mindful of when developing applications in Kotlin. Following that, we’ll integrate our knowledge of functional programming and coroutines with the Arrow framework, illustrating its contribution to Kotlin development through idiomatic functional programming. This includes showcasing how Arrow facilitates the creation of clear, expressive, and sustainable code, thereby maximizing the benefits of functional programming in Kotlin.

Finally, we’ll embark on constructing two microservices: the first utilizing the concurrent framework Ktor, and the second using the reactive framework Vert.x. Throughout this process, we’ll also take the opportunity to explore the application of previously discussed design patterns in real-world scenarios.

This section is divided into the following chapters:

• Chapter 9, Idioms and Anti-Patterns
• Chapter 10, Practical Functional Programming with Arrow
• Chapter 11, Concurrent Microservices with Ktor
• Chapter 12, Reactive Microservices with Vert.x

By the end of this section, you should be well versed in Kotlin’s best practices, proficient in using the Arrow framework, and capable of developing microservices using either Ktor or Vert.x.

9

Idioms and Anti-Patterns

In the preceding chapters, we explored various aspects of the Kotlin language, delving into the advantages of functional programming and examining concurrent design patterns. This chapter focuses on the dos and don’ts of Kotlin programming. It aims to help you recognize idiomatic Kotlin code, understand patterns that are best avoided, and serve as a compilation of best practices covering a range of topics discussed earlier.

You might consider some of the content in this chapter simpler compared to the discussions on concurrent data structures and design patterns from the previous two chapters. However, I believe it’s beneficial to address idioms as a unified topic, and to do so, we needed to complete the entire discussion on coroutines first. The topics covered in this chapter include:

• Scope functions (and how to utilize them effectively)
• Type checks and casts
• An alternative to the try-with-resources statement
• Inline functions (and how to leverage them)
• Algebraic data types
• Recursive functions
• Reified generics (understanding them and using them)
• Using constants efficiently
• Constructor overload
• Dealing with nulls
• Making asynchronicity explicit
• Validating input
• Sealed hierarchies vs enums
• Context receivers

By the end of this chapter, you’ll have the skills to craft more readable and maintainable Kotlin code and steer clear of common programming pitfalls.

Technical requirements

You can find the source code for this chapter here: https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter09.

Scope functions

Kotlin’s scoping functions, which are accessible on any object, offer a powerful tool to reduce repetitive code. These functions, functioning as higher-order functions, accept lambda expressions as arguments. In this section, we’ll explore essential scoping functions and demonstrate their application by executing code blocks within the context of specified objects. We’ll use the terms “scope” and “context object” interchangeably to refer to the objects these functions act upon.

let function

The let() function is useful for operating on nullable objects, executing code only if the object is non-null. Consider the map of quotes introduced in Chapter 1:

```kotlin
val clintEastwoodQuotes = mapOf(
    "The Good, The Bad, The Ugly" to "Every gun makes its own tune.",
    "A Fistful Of Dollars" to "My mistake: four coffins."
)
```

To safely fetch and print a quote that might not exist in the map, we can use let():

```kotlin
val quote = clintEastwoodQuotes["Unforgiven"]
quote?.let { println(it) }
```

Remember, omitting the safe call operator (?.) before let can lead to unintended behavior, as let() can also operate on null values, potentially printing null. No NullPointerException is thrown when let receives a null value. You can see the difference the safe call operator makes below:


Function                        Output
quote?.let { println(it) }      >
quote.let { println(it) }       > null

Table 9.1: Output comparison between a “let” function prefixed and not prefixed with a question mark.
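The two behaviors in the table can be checked with a short, self-contained program. This is our own sketch, reusing the quotes map from above; the key lookups are chosen so that one is present and one is missing.

```kotlin
fun main() {
    val clintEastwoodQuotes = mapOf(
        "The Good, The Bad, The Ugly" to "Every gun makes its own tune.",
        "A Fistful Of Dollars" to "My mistake: four coffins."
    )
    val present = clintEastwoodQuotes["A Fistful Of Dollars"]
    val missing = clintEastwoodQuotes["Unforgiven"]
    present?.let { println(it) } // prints the quote
    missing?.let { println(it) } // prints nothing: let is skipped for null
}
```

Only the existing quote is printed; the safe call short-circuits the second let entirely, so no null ever reaches the lambda.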

apply function

Previously mentioned in Chapter 2, Working with Creational Patterns, apply() sets the context inside the block to this and returns the context object. It’s particularly useful for overriding default properties of mutable objects. For instance, consider the following class with multiple default properties:

```kotlin
data class JamesBondMovie(
    var actorName: String = "Sean Connery",
    var movieName: String = "From Russia with Love"
)
```

Using apply(), we can override properties concisely:

```kotlin
val bestSeanConneryMovie = JamesBondMovie().apply {
    movieName = "From Ukraine with Love"
}
println("${bestSeanConneryMovie.movieName}: ${bestSeanConneryMovie.actorName}")
```

This prints:

```
> From Ukraine with Love: Sean Connery
```

This is often beneficial when working with Java classes that often have numerous setters and an empty default constructor, or when writing single-expression functions.

also function

For single-expression functions requiring additional side effects, like logging, the also() function is ideal. It allows for a concise expression while executing side effects:

```kotlin
fun multiply(a: Int, b: Int): Int = (a * b).also { println(it) }
```


This function assigns the result to it and returns the expression’s result. It’s useful for chaining calls with side effects:

```kotlin
val evenSquares = (1..100).toList()
    .filter { it % 2 == 0 }
    .also { println(it) }
    .map { it * it }
```

Here, we are printing an intermediate result of a chain of higher-order function calls on a collection, without altering the results.
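A runnable variant makes the pass-through behavior concrete. This is our own smaller version of the chain above (a range of 1..10 instead of 1..100, plus a label on the intermediate print).

```kotlin
fun main() {
    val evenSquares = (1..10).toList()
        .filter { it % 2 == 0 }
        .also { println("intermediate: $it") } // side effect; the value flows through unchanged
        .map { it * it }
    println(evenSquares)
}
```

The also block observes the filtered list without modifying it, so the final map still receives exactly the even numbers.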

run function

Similar to let(), run() sets the context to this. It is most suited for initializing objects or computing results. For instance:

```kotlin
val lowerCaseName = JamesBond().run {
    name = "ROGER MOORE"
    movie = "THE MAN WITH THE GOLDEN GUN"
    name.lowercase(Locale.getDefault())
}
```

fold() lets us handle both the failure and the success cases of a Raise computation:

```kotlin
fold(
    { removeDonut(box, "SRI LANKAN CINNAMON SUGAR") },
    { _: NoSuchDonut -> println("No donut for me") },
    { donut: Donut -> println("I've got ${donut.name}") }
)
```

The fold() function also incorporates a catch argument to manage exceptions. This demonstrates the functional programming capabilities of Kotlin as a multi-paradigm language. For Raise to operate effectively, encapsulation in DonutBox is removed, exposing a mutable collection. While this is for demonstration, in practical applications, Raise is more suitable for pure functions and immutable data structures.

Practical Functional Programming with Arrow


Furthermore, Arrow’s DSLs, like either, enable computations in a Raise context and return an Either:

```kotlin
fun addDonut(donut: Donut): Either<NoSpaceInBox, Unit> = either {
    ensure(donuts.size < capacity) { NoSpaceInBox }
    donuts.add(donut)
}
```

When all validations pass, the happy path simply reports success:

```kotlin
println("All donuts meet the calorie criteria!")
```

Given a threshold of 700 calories, donuts exceeding this limit will generate alerts:

```
> Calories 1000 exceed the maximum limit of 700
> Calories 800 exceed the maximum limit of 700
```

To expand this functionality, we might also want to check for allergens. We can introduce a new class, AllergensPresent, to represent this type of logical failure:

```kotlin
sealed interface DonutIssue

data class AllergensPresent(val allergensList: Set<String>) : DonutIssue {
    override fun toString() =
        "Donut contains allergens: ${allergensList.joinToString(", ")}"
}

data class TooManyCalories(val max: Int, val given: Int) : DonutIssue {
    override fun toString() =
        "Calories $given exceed the maximum limit of $max"
}
```


An allergen checker function can be implemented as follows:

```kotlin
fun Raise<DonutIssue>.allergensChecker(allergicTo: Set<String>, donut: Donut): Donut {
    val presentAllergens = donut.allergens.intersect(allergicTo)
    ensure(presentAllergens.isEmpty()) { AllergensPresent(presentAllergens) }
    return donut
}
```

For a comprehensive validation, we can use zipOrAccumulate() instead of mapOrAccumulate(). This function allows the combination of different validations, each potentially returning a different type of logical failure:

```kotlin
val res: List<Either<NonEmptyList<DonutIssue>, Donut>> = box.donuts.map {
    either {
        zipOrAccumulate(
            { caloriesChecker(700, it) },
            { allergensChecker(setOf("Milk"), it) }
        ) { _, _ -> it }
    }
}
```

The resulting output combines multiple validation outcomes:

```
> Calories 1000 exceed the maximum limit of 700
> Donut contains allergens: Milk
> Calories 800 exceed the maximum limit of 700
```

This can only work because each element in the resulting list is an Either, which holds either a list of identified issues for the specific donut or the donut itself if it successfully clears all checks.

Smart constructors

One interesting pattern that the Arrow library embraces is smart constructors, which enhance code resilience by preventing logical errors during object creation. To understand this, consider the DonutBoxEither class that uses the Either wrapper:

```kotlin
class DonutBoxEither(private val capacity: Int) {
    // ...
}
```


This class features a basic constructor that takes an integer for capacity; however, allowing values like 0 or negative numbers wouldn’t make sense. Constructors are limited to either returning an object or throwing an exception. Smart constructors effectively resolve this limitation. We start by making the constructor of DonutBoxEither private, similar to the Singleton pattern:

```kotlin
class DonutBoxEither private constructor(private val capacity: Int) {
    // ...
}
```

Next, we define a companion object with an invoke operator:

```kotlin
class DonutBoxEither private constructor(private val capacity: Int) {
    companion object {
        operator fun invoke(capacity: Int): Either<NonPositiveCapacity, DonutBoxEither> =
            either {
                ensure(capacity > 0) { NonPositiveCapacity(capacity) }
                DonutBoxEither(capacity)
            }
    }
}
```

This setup allows us to use the companion object’s invoke method as if it were the constructor. However, it can return a type different from DonutBoxEither; specifically, an Either type indicating potential construction failure. The NonPositiveCapacity error class is straightforward:

```kotlin
data class NonPositiveCapacity(val capacity: Int)
```

To use the smart constructor, we can employ a when block, as seen in other Either examples:

```kotlin
when (val box = DonutBoxEither(1)) {
    is Either.Left -> println("Couldn't construct a box due to invalid capacity")
    is Either.Right -> {
        val validBox = box.value
        // Further operations with validBox
    }
}
```


You may recognize that the Smart Constructor is very similar to the Static Factory Method design pattern we discussed in Chapter 2, Working with Creational Patterns. With this approach, we ensure that the object we work with is valid and that we have appropriately handled any logical failures related to its creation.
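The same idea can be sketched without Arrow, using a sealed result type from plain Kotlin. This is our own illustration; the names BoxResult and DonutBoxPlain are invented for the example and are not from the book.

```kotlin
// A sealed hierarchy standing in for Either's Left/Right
sealed interface BoxResult {
    data class Ok(val box: DonutBoxPlain) : BoxResult
    data class InvalidCapacity(val capacity: Int) : BoxResult
}

class DonutBoxPlain private constructor(val capacity: Int) {
    companion object {
        // The smart constructor: validation happens before the object exists
        operator fun invoke(capacity: Int): BoxResult =
            if (capacity > 0) BoxResult.Ok(DonutBoxPlain(capacity))
            else BoxResult.InvalidCapacity(capacity)
    }
}

fun main() {
    println(DonutBoxPlain(3) is BoxResult.Ok)
    println(DonutBoxPlain(0) is BoxResult.InvalidCapacity)
}
```

Because the real constructor is private, the only way to obtain a DonutBoxPlain is through the validating invoke, so an instance with a non-positive capacity can never exist.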

Alternatives to Either and Raise

In addition to Either and Raise, there are other wrapper types available within the Arrow ecosystem and the Kotlin standard library that are worth mentioning for error handling.

Result

Kotlin’s standard library offers a wrapper type named Result, similar to Arrow’s Either. It represents a computation that may either succeed or fail. The key difference between Result and Either lies in how they handle failures: Result always uses a Throwable exception for failures, whereas Either allows for any class to represent a logical failure. Here’s a comparison of Kotlin’s Result with Arrow’s Either:

Kotlin Result:

```kotlin
fun addDonut(donut: Donut): Result<DonutBox> =
    if (donuts.size < capacity) {
        donuts.add(donut)
        Result.success(this)
    } else {
        Result.failure(NoSpaceInBoxException())
    }
```

Arrow Either:

```kotlin
fun addDonut(donut: Donut): Either<NoSpaceInBox, DonutBox> =
    if (donuts.size < capacity) {
        donuts.add(donut)
        this.right()
    } else {
        NoSpaceInBox.left()
    }
```

Table 10.2: Contrasting Kotlin’s Result to Arrow’s Either

Looking at the comparison between the two, you can also note that what Result lacks is the explicitness about what type of exception the function returns.
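The standard-library Result type is most naturally created with runCatching, which captures any thrown exception as a failure. This is our own sketch; parsePositive is an illustrative function, not from the book.

```kotlin
// Wraps parsing and validation into a Result: any thrown exception becomes a failure
fun parsePositive(s: String): Result<Int> = runCatching {
    val n = s.toInt()
    require(n > 0) { "not positive: $n" }
    n
}

fun main() {
    println(parsePositive("42").getOrNull())                                  // success value
    println(parsePositive("-1").isFailure)                                    // IllegalArgumentException captured
    println(parsePositive("oops").exceptionOrNull() is NumberFormatException) // parse failure captured
}
```

Note that, as the comparison above suggests, the signature Result<Int> tells the caller nothing about which exceptions may appear inside the failure case; with Either, that information would be part of the type.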


Optional

The second wrapper we’d like to mention is the Optional wrapper from the Arrow library. This wrapper first appeared in the Scala programming language, where its goal was to help avoid the NullPointerExceptions prevalent in those days. Later, it was introduced in Java as well, to solve similar issues. In Kotlin, though, we have nullable types, which serve the same purpose, and arguably do it even better. For that reason, Optional may only be useful for easing the migration of Scala developers to Kotlin, or for solving rare edge cases like the nested nullability problem. Let’s compare the original code that uses nullable types, which we saw at the start of this chapter, with code that uses the Optional wrapper:

Kotlin nullable types:

```kotlin
fun removeDonut(name: String): Donut? {
    return donuts.find { it.name == name }?.let {
        donuts.remove(it)
        it
    }
}
```

Arrow Optional:

```kotlin
fun removeDonut(name: String): Optional<Donut> {
    return donuts.find { it.name == name }?.let {
        donuts.remove(it)
        Optional.of(it)
    } ?: Optional.empty()
}
```

Table 10.3: Comparing nullable types to Arrow’s Optional wrapper

As you can see, there’s no major benefit to the code that uses Optional—it is more verbose and less idiomatic than simply using Kotlin nullable types. If you’re still curious about Optional uses, you can read more about the nested nullability problem here: https://arrow-kt.io/learn/typed-errors/nullable-and-option/
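To illustrate the nested nullability problem mentioned above, here is a small sketch of ours (not from the book): with plain nullable types, a map whose values are themselves nullable cannot distinguish “key absent” from “key mapped to null”, which is exactly where a wrapper type helps:

```kotlin
val settings: Map<String, String?> = mapOf("theme" to null)
println(settings["theme"]) // null: the key exists, mapped to null
println(settings["font"])  // null: the key is absent; indistinguishable from above
```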

Ior

The final wrapper we should explore is the Ior wrapper. It shares similarities with Either, encompassing success and failure types. However, Ior introduces a distinctive third option, namely Both. By leveraging Both, you can represent states that are considered successful but may involve some potential errors during execution, akin to a warning. The workflow within an Either block is straightforward: we execute each step, and if at any point we bind() a Left or encounter a raise, we halt and return that value. If we reach the end without interruptions, we wrap the result in Right. In contrast, ior blocks are somewhat more intricate.

Practical Functional Programming with Arrow


We may encounter situations where there are errors to report, yet there is also a value with which to continue the execution. This raises a question: what should be done if several steps in the block result in Both? It’s up to you as a developer. The ior builder introduces an additional parameter that specifies how to combine two errors into a single one. Let’s look at the example briefly:

```kotlin
// Reconstructed example: the original body was garbled during extraction,
// so everything after ensure() is a best-effort restoration.
fun addDonut(donut: Donut): Ior<NoSpaceInBox, DonutBox> =
    ior(combineError = { _, both -> both }) {
        ensure(donuts.size < capacity) { NoSpaceInBox }
        donuts.add(donut)
        this@DonutBox
    }
```

The calling code then handles all three cases:

```kotlin
when (val addDonutResult = box.addDonut(donut)) {
    is Ior.Both -> {
        println("Added a donut successfully, but got warning: ${addDonutResult.leftValue}")
        when (val result = box.removeDonut("SRI LANKAN CINNAMON SUGAR")) {
            is Ior.Both -> TODO()
            is Ior.Left -> println("No donut for me")
            is Ior.Right -> println("I've got ${result.value.name}")
        }
    }
    is Ior.Left -> println("No space in box")
    is Ior.Right -> TODO()
}
```

The code is similar to handling Either, but now we have an additional case: Both. In this case, we can access the warning using leftValue.

Advantages of typed errors

Summarizing the benefits of using typed errors over exceptions reveals several key advantages. Typed errors enhance type safety, allowing the compiler to detect type mismatches early. This proactive approach aids in identifying bugs before deployment. In contrast, exceptions obscure type information, reducing the compiler’s ability to catch errors during compile time.

The explicit listing of potential error conditions in a function’s type signature, inherent to typed errors, clarifies the range of possible errors. This explicitness aids in understanding the error landscape and facilitates comprehensive testing for all error scenarios.
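As a quick illustration of this explicitness (the function and the InvalidAge error type here are hypothetical, not from the book), compare two signatures for the same operation:

```kotlin
// The typed-error signature names its failure mode in the type:
fun parseAge(raw: String): Either<InvalidAge, Int> = TODO()

// The exception-based signature reveals nothing about how it can fail:
fun parseAgeUnsafe(raw: String): Int = TODO()
```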


Typed errors lend themselves to seamless integration and propagation across multiple function calls, simplifying the creation of modular and composable code. Conversely, ensuring accurate error propagation with exceptions in complex codebases can be challenging. Accumulation patterns, straightforward with typed errors, become complex with exceptions.

In addition to that, there are some performance benefits from using typed errors as well. One of the key performance drawbacks of using exceptions, especially in many programming languages, is related to the construction of stack traces. When an exception is thrown, the runtime system typically captures a stack trace detailing the call stack leading up to the point of the exception. This process involves several operations. Capturing the current call stack requires the runtime to inspect and record the current state of the call stack, which can be computationally expensive, especially if the stack is deep. Stack trace information also often requires allocating memory to store details such as method names, line numbers, and other context. This allocation adds to the overhead.

In contrast, typed errors, like those used in functional programming patterns, typically don’t involve capturing a stack trace. These errors are treated as regular values that the program logic can handle. This approach means that the runtime overhead is lower, since handling typed errors generally involves fewer operations and no additional memory allocation for stack traces.

Typed errors also provide better predictability. Since typed errors are just values, their handling is more predictable and doesn’t disrupt the normal flow of execution. Overall, typed errors offer a structured, predictable, and efficient approach to error handling, contributing to the development of high-quality, maintainable code.
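The stack-trace cost mentioned above can even be demonstrated on the JVM itself: Throwable has a protected constructor that lets a subclass opt out of capturing the trace, a common trick in performance-sensitive code (a sketch of ours, not from the book):

```kotlin
// Passing writableStackTrace = false skips the expensive fillInStackTrace() call,
// so throwing this exception costs little more than allocating an object.
class FastSignal : Exception("signal", null, false, false)
```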
The Either type, which represents values as either successes or failures, and the Raise DSL, used for raising typed errors without wrappers, exemplify this approach. Built atop Raise, these functions and builders operate cohesively, allowing for flexible combinations and usage to suit various coding requirements.

High-level concurrency

Coroutines are a standout feature in Kotlin, offering advanced capabilities for managing asynchronous computations. While the standard library provides essential coroutine support, it sometimes falls short in more complex scenarios. The Arrow library fills this gap, offering additional functions and primitives that have proven useful in other programming languages.


To utilize these features, include the following dependency in your project:

```kotlin
implementation("io.arrow-kt:arrow-fx-coroutines:1.2.1")
```

Parallel operations

In Chapter 8, Designing for Concurrency, while discussing the Barrier design pattern, we saw the following implementation:

```kotlin
// The element type annotation was lost in extraction and is assumed here.
val characters: List<Deferred<String>> = listOf(
    Me.getFavoriteCharacter(),
    Nana.getFavoriteCharacter(),
    Sebastian.getFavoriteCharacter()
)
println(characters.awaitAll())
```

Arrow’s parZip function is a prime example of enhancing concurrency. It provides a more intuitive and robust way to handle concurrent operations compared to the approach seen in Chapter 8, Designing for Concurrency, with the Barrier design pattern. For example, fetching favorite characters concurrently can be implemented as follows:

```kotlin
parZip(
    { Me.getFavoriteCharacter().await() },
    { Nana.getFavoriteCharacter().await() },
    { Sebastian.getFavoriteCharacter().await() }
) { me, taylor, michael ->
    println("Favorite characters are: $me, $taylor, $michael")
}
```

This function performs concurrent computations and then combines their results in a final block. parZip also handles exception propagation and cancels running computations if any task fails.

In Chapter 7, Controlling the Data Flow, we discussed higher-order functions on collections, such as map(). Arrow enhances that set of functions with parMap, which, as the name suggests, parallelizes the execution of map functions. This won’t speed up CPU-bound tasks, such as local calculations, but is very beneficial for IO work, such as performing a set of HTTP calls:

```kotlin
val tasks = 'a'..'z'
val wikiArticles = tasks.parMap {
    fetchAsync("https://en.wikipedia.org/wiki/$it")
}
```

For example, here we fetch the Wiki article about every letter of the English alphabet, and we do it concurrently. Similar to the map() function, the results of parMap will be strictly ordered. That means that if we were to print the second value in the list, for example, it will always correspond to the letter “B”:

```kotlin
println(wikiArticles[1])
```

This prints:

> ...B, or b, is the second letter...

A version of parMap is also available for Kotlin Flows, which support some additional operations as well. For example, if you’re looking for additional performance gains, you can use the parMapUnordered function instead, which doesn’t guarantee the ordering of the results. This requires using a Flow, though:

```kotlin
val wikiArticlesUnordered = tasks.asFlow().parMapUnordered {
    fetchAsync("https://en.wikipedia.org/wiki/$it")
}.toList()
println(wikiArticlesUnordered[1]) // Probably not B
```

The parMap and parMapUnordered functions are based on the coroutines Dispatcher, which we discussed in Chapter 7, Controlling the Data Flow, meaning that the concurrency is limited by the Dispatcher. If you want to limit the concurrency further, for example, in order not to overwhelm a downstream service, you may do so:

```kotlin
val wikiArticlesUnordered = tasks.asFlow().parMapUnordered(concurrency = 2) {
    fetchAsync("https://en.wikipedia.org/wiki/$it")
}.toList()
```

In the previous section, we discussed the mapOrAccumulate function, which allows us to accumulate validation errors. There is also a parallel version of this function called parMapOrAccumulate.


For example, we could set a timeout for fetching the articles, and then accumulate all the successful articles and all the failures:

```kotlin
// The type parameters were lost in extraction; the error and result types are assumed here.
val wikiArticleOrTimeout: Either<NonEmptyList<Throwable>, List<String>> =
    tasks.parMapOrAccumulate { letter ->
        withTimeout(10L) {
            fetchAsync("https://en.wikipedia.org/wiki/$letter")
        }
    }
```

In summary, Arrow enhances Kotlin’s concurrency capabilities by providing advanced, intuitive constructs that simplify handling complex asynchronous operations, making the code more efficient, maintainable, and robust.

CyclicBarrier

Arrow, known for enhancing Kotlin’s concurrency features, offers various concurrent data structures. One that is particularly relevant for limiting concurrency is the CyclicBarrier. In the context of the Barrier design pattern discussed earlier, a CyclicBarrier can be employed to synchronize coroutine operations. Each coroutine must invoke the await method on the barrier object, causing it to suspend until a predefined number of coroutines reach the barrier. When all coroutines have arrived at the barrier, they resume execution.

Here’s an adapted example where we fetch Wikipedia articles in batches of three using CyclicBarrier:

```kotlin
val barrier = CyclicBarrier(3)
runBlocking(Dispatchers.IO) {
    ('a'..'x').forEach { letter ->
        launch {
            fetchAsync("https://en.wikipedia.org/wiki/$letter")
            barrier.await()
            println("Fetched letter $letter at ${System.currentTimeMillis() % 1000}")
        }
    }
}
```


In this example, coroutines complete in groups of three, as dictated by the barrier’s capacity. The output will reflect this batching:

> Fetched letter b at 546
> Fetched letter o at 546
> Fetched letter y at 546
> Fetched letter n at 549
> Fetched letter q at 549
> Fetched letter v at 549
> ...

You’ll notice that the barrier automatically resets after completing each batch of coroutines. An observant reader might wonder why we skipped two letters in our loop. Hint: consider the total number of letters in the English alphabet and which numbers are divisible by 3. If we were to keep those letters, our code would be stuck forever!

The term “cyclic” in CyclicBarrier indicates its reusability after all coroutines have reached and been released from the barrier. The barrier can be reset, and waiting coroutines canceled, by calling the reset() method.

As emphasized in previous discussions, it’s often unwise to attempt implementing complex concurrent design patterns from scratch; utilizing the robust implementations provided by the standard library or frameworks like Arrow is highly recommended. In addition to CyclicBarrier, Arrow also offers a suspended implementation of CountDownLatch, which serves a similar synchronization purpose but with a slightly different API. These tools collectively contribute to efficient and controlled concurrency management in Kotlin applications.
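As a minimal sketch of how the suspended latch differs from the barrier (the CountDownLatch API from arrow.fx.coroutines is assumed here; this is not an example from the book), the awaiting coroutine suspends until the count reaches zero, but the workers themselves never wait for each other:

```kotlin
suspend fun fetchAllLetters() = coroutineScope {
    val latch = CountDownLatch(3L)
    ('a'..'c').forEach { letter ->
        launch {
            fetchAsync("https://en.wikipedia.org/wiki/$letter")
            latch.countDown() // signal completion without waiting for the others
        }
    }
    latch.await() // suspends here until all three have counted down
    println("All three articles fetched")
}
```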

Racing

Arrow efficiently implements the Racing design pattern through its raceN function. This function provides a straightforward way to execute multiple computations concurrently and return the result of the first one to complete. Let’s see how we can apply this function to the Racing design pattern example discussed in Chapter 8, Designing for Concurrency:

```kotlin
// The type parameters of this Pair were lost in extraction and are assumed here.
val winner: Pair<String, String> = raceN(
    { preciseWeather() },
    { weatherToday() }
).merge()
println("Winner: $winner")
```


In this example, raceN concurrently runs two computations: preciseWeather() and weatherToday(). Our goal is to obtain a single result, and we are indifferent to which function produces it, as both computations return the same type. To achieve this, we use the merge() function to consolidate these results into a singular value. Our primary focus is on the outcome of the first computation to complete, and this approach aligns with our desired behavior.

This implementation, utilizing Arrow’s raceN, is significantly more straightforward and succinct than the manual approach covered in Chapter 8, Designing for Concurrency. It showcases Arrow’s proficiency in simplifying intricate concurrency patterns, thereby enhancing code clarity and maintainability. However, it’s important to note a minor limitation: while select can handle an arbitrary number of computations, the raceN() function is currently limited to a maximum of three.
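If you do need to race more than three computations, one possible workaround (a sketch of ours, not something the book or Arrow prescribes; taskA through taskD are hypothetical suspending functions returning the same type) is to nest raceN calls, since each branch is itself just a suspending lambda:

```kotlin
suspend fun fastestOfFour(): String = raceN(
    { raceN({ taskA() }, { taskB() }).merge() },
    { raceN({ taskC() }, { taskD() }).merge() }
).merge()
```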

Resource

Resource management is a crucial aspect of software development, particularly for resources like files that can only be accessed by one program at a time. In the previous chapter, we discussed how Kotlin replaces the try-with-resources pattern from Java with its use{} block. For instance, reading and safely closing a file in Kotlin can be done as follows:

```kotlin
BufferedReader(FileReader("./build.gradle.kts")).use {
    println(it.readLines())
}
```

This method works well for objects implementing the Closeable or AutoCloseable interfaces, as is the case with many Java IO classes. However, there are times when you might need to implement a closeable resource yourself. For example, if you have a wardrobe that needs to have its drawers closed before the doors, you could implement it like this:

```kotlin
class Wardrobe : AutoCloseable {
    // The drawers' element type was lost in extraction; AutoCloseable is assumed.
    private val drawers: List<AutoCloseable> = listOf()
    override fun close() {
        drawers.forEach { it.close() }
        println("Closing the wardrobe")
    }
}
```


Arrow offers an alternative and more comprehensive approach to resource management, focusing on three stages: acquiring, using, and releasing the resource. With Arrow’s Resource, these steps are bundled together, ensuring proper handling even in cases of exceptions or cancellations. For instance, opening and closing a wardrobe can be implemented using ResourceScope in Arrow:

```kotlin
suspend fun ResourceScope.openWardrobe(): Wardrobe =
    install({
        Wardrobe().also { println("Opened wardrobe!") }
    }) { wardrobe, _ ->
        wardrobe.close()
    }
```

This pattern is common in resource acquisition: constructing the object, followed by invoking a start method using Kotlin’s also() scope function. The ResourceScope DSL allows for the installation of and safe interaction with resources. The install function takes both the acquisition and release steps as arguments, handling resource management seamlessly.

```kotlin
suspend fun getItemFromWardrobe(itemName: String) = resourceScope {
    val wardrobe = openWardrobe()
    wardrobe.getItem(itemName)
}
```

In this scenario, openWardrobe is an extension function on ResourceScope, so it can only be invoked within the resourceScope builder. This ensures that wardrobe.close() is implicitly called, avoiding the need for manual management with a use{} block. Alternatively, Arrow offers the Resource type for a similar purpose:

```kotlin
val wardrobeResource: Resource<Wardrobe> = resource({
    Wardrobe().also { println("Opened wardrobe!") }
}) { wardrobe, _ ->
    wardrobe.close()
}
```

The Resource type in Arrow is lazy, meaning you need to explicitly acquire or unwrap it using the bind() function:

```kotlin
suspend fun getItemFromWardrobeResource(itemName: String) = resourceScope {
    val wardrobe = wardrobeResource.bind()
    wardrobe.getItem(itemName)
}
```


Both the extension function approach and the Resource approach serve the same purpose. This flexibility exemplifies Kotlin’s multi-paradigm approach, allowing developers to choose between more functional or object-oriented paradigms for resource management.

Software transactional memory

Software Transactional Memory (STM) is a powerful abstraction designed for modifying state in concurrent programming. It enables the writing of code that accesses shared state concurrently, facilitating easy composition while maintaining safety guarantees. One of the key advantages of using STM is that it prevents deadlocks and race conditions in programs running within its transactions.

The foundational elements of STM are Transactional Variables, or TVars. Conceptually, a TVar is a wrapper around a variable that adds a layer of protection against concurrent modifications. Modifying a TVar requires operating within the STM context. This can be achieved by writing an extension function with STM as the receiver. This approach ensures that modifications to shared state are safely managed within the structured framework of STM, reducing the complexity and potential errors associated with concurrent state changes.

In order to work with Arrow STM, we need to add the following dependency to our project:

```kotlin
implementation("io.arrow-kt:arrow-fx-stm-jvm:1.2.1")
```

Transactions are extremely important in the finance world, so a common example of using STM is a transfer between two bank accounts. But let’s talk about something more interesting, like donut boxes, instead. When I transfer a donut from one box to the other, I want to make sure that this operation is atomic. I do this by ensuring:

1. That the donut doesn’t disappear from my box, never to reappear in yours
2. That the donut doesn’t appear in your box without disappearing from mine (although that would be great!)

First, we will implement a smart constructor for our DonutBoxSTM:

```kotlin
// Type parameters were lost in extraction and are reconstructed here.
class DonutBoxSTM private constructor(
    private val donuts: MutableList<Donut> = mutableListOf()
) {
    companion object {
        suspend operator fun invoke(vararg donut: Donut): TVar<DonutBoxSTM> {
            return TVar.new(DonutBoxSTM(donut.toMutableList()))
        }
    }
    // …
}
```

By implementing this pattern, we will be able to construct the wrapped type as if it were the actual type:

```kotlin
val myBox: TVar<DonutBoxSTM> =
    DonutBoxSTM(Donut("Rum&pecan caramel donut", 1000))
val yourBox = DonutBoxSTM()
```

Now, let’s implement adding a donut to a box:

```kotlin
fun STM.addDonut(boxT: TVar<DonutBoxSTM>, donut: Donut) {
    val yourBox = boxT.read()
    yourBox.add(donut)
    boxT.modify { yourBox }
}
```

Key things to note here: in order to unwrap the TVar, we use the read() function, and to save a new value, we use the modify block. The entire function has to be an extension function of the STM type. Removing a donut is quite similar, as we need to read and write the value again, but here we also add validation that a donut with the given name actually exists in the box:

```kotlin
fun STM.removeDonut(boxT: TVar<DonutBoxSTM>, donutName: String): Donut {
    val box = boxT.read()
    val donut = box.remove(donutName)
    requireNotNull(donut)
    boxT.modify { box }
    return donut
}
```


Finally, we’ll implement the transfer of the donut from one box to another by simply composing those functions:

```kotlin
fun STM.giveDonut(
    myBoxT: TVar<DonutBoxSTM>,
    yourBoxT: TVar<DonutBoxSTM>,
    donutName: String
) {
    addDonut(
        yourBoxT,
        removeDonut(
            myBoxT,
            donutName
        )
    )
}
```

By itself, a function using STM as a receiver does not perform any computations. We say it’s just a description of a transaction. Running a transaction is then done using atomically:

```kotlin
atomically {
    giveDonut(myBox, yourBox, "Rum&pecan caramel donut")
}
```

Now let’s check the contents of the boxes:

```kotlin
println(myBox.unsafeRead().checkDonut("Rum&pecan caramel donut"))
println(yourBox.unsafeRead().checkDonut("Rum&pecan caramel donut"))
```

Here, we use the unsafeRead() function to unwrap the DonutBoxSTM type and immediately access its methods. As expected, the output of the code is:

> null
> Donut(name=Rum&pecan caramel donut, calories=1000, allergens=[])

The donut has moved from myBox to yourBox. In addition to the TVar wrapper, Arrow provides some other transactional data structures, such as TSet and TMap, which are transactionally safe versions of Kotlin’s Set and Map.
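As a short sketch of the transactional map (the insert/lookup function names are assumed from the arrow-fx-stm API and are not shown in the book), tracking donut stock might look like this:

```kotlin
val stock = TMap.new<String, Int>()
atomically {
    stock.insert("Glazed", 12)
    stock.insert("Cinnamon", 4)
}
// lookup returns null when the key is absent, all within a transaction
val glazed = atomically { stock.lookup("Glazed") }
println(glazed)
```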


Let’s look at an example to understand it better:

```kotlin
val set = TSet.new<String>()
try {
    atomically {
        set.insert("a")
        set.insert("b")
    }
    atomically {
        set.insert("c")
        set.insert("d")
        throw RuntimeException()
    }
} catch (e: RuntimeException) {
}
atomically {
    println(set.member("a")) // true
    println(set.member("b")) // true
    println(set.member("c")) // false
    println(set.member("d")) // false
}
```

Operations on TSet are available only within the context of the atomically{} block. Here, we have two transactions, the second of which fails with an exception. We then check that the value “a” is still in the set, since the transaction that added it was successful, while the value “c” is not there.

It’s important to note that utilizing these specialized data structures is typically more efficient than encapsulating their standard counterparts in a TVar. For instance, a TSet is more performant than a regular Set wrapped in a TVar. This efficiency gain is because a TVar requires locking the entire set during modifications, while TSet considers only the entries that are actually affected by a modification, so the locking is more granular.


Resilience

Resilience is a fundamental aspect of modern software systems, particularly those that rely on the cooperation of various services. These services can be local (within the same process or machine) or remote, requiring network communication. Such distributed architectures inherently introduce numerous potential points of failure. Resilience refers to the system’s capacity to respond and adapt effectively to such failures.

To explore resilience concepts in practice, it’s necessary to include specific dependencies in your project. For instance, when working with the Arrow library in Kotlin, you can add the following dependency to enable resilience features:

```kotlin
implementation("io.arrow-kt:arrow-resilience-jvm:1.2.1")
```

The approach to building a resilient system varies based on several factors, such as the feasibility of retrying requests, the criticality of errors, and the need for administrator intervention in case of fatal issues. Arrow doesn’t prescribe rigid solutions but instead offers a toolkit for creating tailored resilience strategies. The Arrow Resilience library encompasses implementations of some of the key design patterns in resilience:

• Retrying and repeating computations with Schedule allows for repeated execution of operations, which is crucial in scenarios where transient failures might occur. By using a Schedule, you can define policies for retries, including delay strategies and conditions for giving up.
• The CircuitBreaker pattern is instrumental in preventing system overload. It monitors service calls and, upon detecting a threshold of failures, temporarily halts further calls to the failing service. This gives the affected service time to recover and prevents cascading failures in dependent systems.
• Implementing transaction-like behavior across distributed systems is complex, given the lack of a central controlling entity. The Saga pattern helps manage long-running inter-service operations, ensuring consistency and providing mechanisms for compensating actions in case of partial failures.

These patterns, provided by the Arrow Resilience library, equip developers with powerful tools to enhance the robustness and reliability of their systems in the face of failures, whether they stem from internal errors, external dependencies, or network issues.


Retry and repeat

In many scenarios, particularly when dealing with network calls or external services, there’s a need to retry or repeat actions under certain conditions. Consider a function simulating a server that initially throws an exception due to, say, a long initialization time. After three failed attempts, it begins to return a successful “OK” response:

```kotlin
fun serverResponses(): Flow<String> {
    var requests = 0
    var lastErrorTime = System.currentTimeMillis()
    return flow {
        if (requests++ < 3) {
            println("Error occurred at ${System.currentTimeMillis() - lastErrorTime}")
            lastErrorTime = System.currentTimeMillis()
            throw RuntimeException("Something went wrong")
        } else {
            println("Server is up")
            emit("OK")
        }
    }
}
```

Using Arrow’s retry() function, we can attempt to call this server up to 10 times:

```kotlin
val responses = serverResponses().retry(Schedule.recurs(10)).toList()
println(responses)
```

Note that the retry() function ceases retrying after the first successful response:

> [OK]

The Schedule passed to retry() allows the definition of complex policies. For instance, combining a 10-retry policy with an exponential backoff starting at 1 second could look like this:

```kotlin
val responses = serverResponses().retry(
    Schedule.recurs(10)
        .and(Schedule.exponential(1.seconds))
).toList()
```


In this setup, the system waits 1 second before the first retry, then 2 seconds, and so on. If the retry policy is exhausted (10 attempts in this case), retry() throws the last encountered exception.

An alternative to retry() is the repeat() mechanism, which should not be confused with the repeat() function from the Kotlin standard library. The repeat() function continues an action as long as it’s successful and the scheduling policy allows. For example, a simulated remote service that returns “OK” for the first three calls and then fails could be represented as:

```kotlin
val successThenFailure = sequence {
    yield { "OK" }
    yield { "OK" }
    yield { "OK" }
    while (true) {
        yield { throw RuntimeException() }
    }
}.iterator()
```

Attempt to invoke this service 10 times using repeat():

```kotlin
val scheduleResult = Schedule.recurs(10).repeatOrElse({
    println(successThenFailure.next()())
}, { t: Throwable, attempts: Long? ->
    println("Failed on attempt ${attempts?.inc() ?: 0} with $t")
    -1
})
println(scheduleResult)
```

This will output the first three successful calls, stopping at the first error:

> OK
> OK
> OK
> Failed on attempt 2 with java.lang.RuntimeException
> -1

You might be curious about the use of the Elvis operator in handling failure attempts. This is necessary because the attempts are labeled null, 0, 1, 2, and so on. The -1 we return simply aligns with the behavior of the recurs() function, which returns a positive number of attempts when there are no failures.


The retry and repeat mechanisms are effective for making application code more resilient by uniformly handling all responses and exceptions. For more intricate scenarios, Arrow also provides functions like doWhile() and doUntil() for further customization and control over the retry and repeat behaviors.

Circuit Breaker

The Circuit Breaker design pattern, inspired by electrical engineering, is essential for managing service availability in software systems. Its primary role is to protect an overloaded service by failing fast, thus maintaining system stability and preventing cascading failures in distributed systems. The Circuit Breaker implements the State design pattern, and it has three states:

1. Closed state:
   • The default state, where requests are processed normally.
   • Each exception increments a failure counter.
   • The Circuit Breaker transitions to the Open state when the failure counter exceeds a specified threshold (maxFailures).
   • A successful request resets the failure count to zero.
2. Open state:
   • In this state, the Circuit Breaker short-circuits all requests by throwing an ExecutionRejected exception.
   • If a request is made after the configured reset timeout, the breaker transitions to the Half-Open state, allowing a test request to pass through.
3. Half-Open state:
   • This state permits one test request while other requests fail fast.
   • The success of the test request resets the Circuit Breaker to Closed, also resetting the resetTimeout and failure count.
   • If the test request fails, the breaker returns to Open, with the resetTimeout multiplied by the exponentialBackoffFactor, up to a maximum resetTimeout.


This can also be depicted by the following diagram:

```
              failure
+--------+ -----------> +------+
| Closed |              | Open |
+--------+              +------+
    ^                       |
    |                       | reset timeout,
    | test request success  | one request allowed
    |                       V
    |                 +-----------+
    +-----------------| Half Open |
                      +-----------+
```
An example Circuit Breaker in Arrow might be configured to allow up to two consecutive failures. After the threshold is reached, it waits for the configured reset timeout before allowing new requests. If failures persist, the wait time increases exponentially, up to 60 seconds:

```kotlin
val circuitBreaker = CircuitBreaker(
    openingStrategy = CircuitBreaker.OpeningStrategy.Count(1),
    resetTimeout = 10.seconds,
    exponentialBackoffFactor = 2.0,
    maxResetTimeout = 60.seconds,
)
```

There are currently two strategies for opening a Circuit Breaker. The Count strategy sets a maximum number of consecutive failures; the Circuit Breaker only opens after that many failures in a row. The SlidingWindow strategy counts failures within a time window, opening only if the number of failures exceeds a threshold. This is, of course, a great example of the Strategy design pattern we discussed in Chapter 4, Getting Familiar with Behavioral Patterns.

The states of the Circuit Breaker can be monitored using the Observer pattern:

```kotlin
circuitBreaker.doOnHalfOpen {
    println("Half Open!")
}.doOnOpen {
    println("Open!")
}.doOnClosed {
    println("Closed!")
}
```


A remote server can be simulated with varying probabilities of failure using a sequence:

```kotlin
fun remoteServer(failureChance: Double) = sequence {
    while (true) {
        if (Random.nextDouble(1.0) < failureChance) {
            yield { throw RuntimeException() }
        } else {
            yield { "OK" }
        }
    }
}
```

Each call to that service should be wrapped with protectOrThrow to manage exceptions and alter the Circuit Breaker’s state accordingly:

```kotlin
remoteServer(0.3).forEach { req ->
    try {
        delay(400L)
        circuitBreaker.protectOrThrow { req() }.also {
            println("Response: $it")
        }
    } catch (e: RuntimeException) {
        println("Server returned exception: $e")
    } catch (e: CircuitBreaker.ExecutionRejected) {
        println("Circuit breaker exception: ${e.reason}")
    }
}
```

The possible output of the simulated remote server protected by the Circuit Breaker, as described, may vary depending on the timing and the occurrence of simulated failures. However, a typical output might look like this:

> ...
> Closed!
> Server returned exception: java.lang.RuntimeException
> Open!
> Circuit breaker exception: Rejected because the Circuit Breaker is in the Open state, attempting to close in 596 millis
> Circuit breaker exception: Rejected because the Circuit Breaker is in the Open state, attempting to close in 194 millis
> Half Open!
> Response: OK
> Closed!
> ...

This output demonstrates the different states of the Circuit Breaker:

1. We start with the Circuit Breaker in a Closed state.
2. The server then throws an exception, as part of the simulated server behavior.
3. The Circuit Breaker switches to the Open state.
4. While in the Open state, additional requests are immediately rejected by the Circuit Breaker, as indicated by the Circuit Breaker exception messages.
5. After the reset timeout, the Circuit Breaker transitions to the Half-Open state, allowing a test request to pass through.
6. The server responds with “OK,” indicating a successful request.
7. Since the test request succeeded, the Circuit Breaker moves to the Closed state, resetting its internal counters and timeouts.

Keep in mind that the actual output can vary based on the timing of requests and the simulated rate of failures. The chosen delay between requests and the failureChance parameter significantly influence the behavior and state transitions of the Circuit Breaker. There’s also an alternative to protectOrThrow that returns an Either, called protectEither.

To enhance system resilience, a Circuit Breaker can be configured with a back-off policy, which is crucial for managing resources and avoiding overload. Unlike simple scheduling, a Circuit Breaker accounts for failures across all function calls to a resource, making it effective for managing parallel calls and shared resources. It is crucial that concurrent threads accessing the same service use the same Circuit Breaker instance, not separate instances with identical parameters. In other words, the Circuit Breaker should act as a Singleton in your system.
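A brief sketch of how the Either-returning variant could be used (the exact function name and error type are assumed from the Arrow API, not shown in the book):

```kotlin
// Hypothetical usage sketch: protect a call and fold over the result
// instead of catching ExecutionRejected.
val outcome: Either<CircuitBreaker.ExecutionRejected, String> =
    circuitBreaker.protectEither { req() }
outcome.fold(
    { rejected -> println("Rejected: ${rejected.reason}") },
    { response -> println("Response: $response") }
)
```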

Saga

The Saga pattern in distributed systems is akin to transactions in databases. It ensures that multiple operations across different services either succeed or fail together, maintaining consistency. Saga orchestrates this by associating each action with a compensatory action, which reverses the changes made by the action if any subsequent steps fail.

Practical Functional Programming with Arrow


This functionality is crucial in avoiding inconsistent states in distributed environments. While similar to Software Transactional Memory (STM), Sagas specifically address distributed systems.

Consider the operation of a donut shop accepting delivery orders. The process involves several steps:

1. Putting donuts into a box: if subsequent steps fail, the donuts shouldn’t just be left in the box; they need to be unpacked and displayed again.
2. Putting a label on the box: if a failure occurs after this step, the label needs to be removed.
3. Handing the box to the courier: if the courier doesn’t appear, we just vent our frustration, then proceed to remove the label and unpack the box.



The Saga pattern can be implemented to handle this workflow:

```kotlin
val sendDonutsSaga = saga {
    saga({ putDonuts(box) }) { unpack(box) }
    saga({ addLabel(box) }) { removeLabel(box) }
    saga({ passToCourier(box) }) {
        println("I wasted so much time and the courier never came!")
    }
}
```

Each step within SagaScope consists of an action and its corresponding compensatory action.


To execute the Saga, the transact method is called:

```kotlin
try {
    sendDonutsSaga.transact()
} catch (e: Exception) {
    println("Failed saga: ${e.message}")
}
```

The output reflects the Saga’s behavior:

> Putting donuts in a box
> Adding label to the box
> Removing the label
> Putting donuts back on the counter
> Failed saga: Courier never came!

SagaScope shares similarities with STM and the Resource pattern in ensuring certain operations occur at specific points. The key difference lies in when compensations are executed. ResourceScope always runs release actions, while SagaScope only does so if there’s a failure at some point. STM and Saga both guarantee the atomic success or failure of grouped operations. For actions over local data where atomicity is needed, software transactional memory is more appropriate than Sagas or atomic references, especially in non-distributed scenarios.
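To make the compensation mechanics concrete, here is a hand-rolled sketch of the idea in plain Kotlin. MiniSaga is a made-up name; unlike Arrow's saga DSL it is neither coroutine-based nor thread-safe. The principle is the same: run each action, remember its compensation, and on failure replay the recorded compensations in reverse order.

```kotlin
// Hand-rolled sketch of the Saga idea (not Arrow's implementation).
class MiniSaga {
    private val compensations = mutableListOf<() -> Unit>()

    fun step(action: () -> Unit, compensate: () -> Unit) {
        action()                      // run the action first...
        compensations.add(compensate) // ...and remember how to undo it
    }

    // Undo completed steps in reverse order, mirroring the book's output.
    fun rollback() = compensations.asReversed().forEach { it() }
}

fun main() {
    val log = mutableListOf<String>()
    val saga = MiniSaga()
    try {
        saga.step({ log += "pack donuts" }) { log += "unpack donuts" }
        saga.step({ log += "add label" }) { log += "remove label" }
        saga.step({ throw RuntimeException("Courier never came!") }) { }
    } catch (e: Exception) {
        saga.rollback()
        log += "Failed saga: ${e.message}"
    }
    log.forEach(::println)
}
```

Note how the label is removed before the donuts are unpacked, because compensations run in the reverse of the order in which the actions succeeded.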

Immutable data

At the very beginning of this book, back in Chapter 1, Getting Started with Kotlin, we mentioned how Kotlin favors immutability, with values over variables and immutable collections being the default. The Arrow library takes this concept even further with its Optics module. In order to work with the Optics library, we need to add the following dependency to our project:

```kotlin
implementation("io.arrow-kt:arrow-optics:1.2.1")
```

When discussing data classes in Chapter 2, Working with Creational Patterns, we mentioned that they have a copy method, which is undoubtedly useful for keeping immutable state but can be cumbersome when we want to change deeply nested structures.


Let’s take the following example. We have a box of donuts that contains just a list of already familiar donuts:

```kotlin
data class DonutBoxOptics(val donuts: List<Donut>)

val donutBox = DonutBoxOptics(
    listOf(
        Donut("Gianduja Chocolate & Hazelnut Donut", 2000,
            allergens = listOf("Milk", "Nuts")),
        Donut("Ginger, Blackberry & Pear Donut", 1500,
            allergens = listOf("Milk"))
    )
)
```

Suppose we realize that we forgot to add the “Wheat” allergen to each donut. How can we address this, considering that the entire data structure is immutable? We could achieve it by nesting copy methods:

```kotlin
val donutBoxCorrectAllergensCopy = donutBox.copy(
    donuts = donutBox.donuts.map { donut ->
        donut.copy(allergens = donut.allergens + "Wheat")
    }
)
```

But this is a very verbose way of doing so, and the Arrow Optics library provides a better one. Arrow solves this problem with optics. Optics are values that represent access to a value (or values) inside a larger value. For example, we may have an optic focusing (that’s the term we use) on the allergens field of a donut. By combining different optics, we can focus on nested elements, like the allergens field within a Donut within a DonutBox.

To comprehend this, we introduce the concept of lenses. Lenses serve as references to fields and provide three primary operations: get (obtains the element focused on by the lens), set (changes the value of the focus to a new one), and modify (transforms the value of the focus by applying a given function). Let’s change the donut box class to have a first lens:

```kotlin
data class DonutBoxOptics(val donuts: List<DonutOptics>) {
    companion object {
        val donuts: Lens<DonutBoxOptics, List<DonutOptics>> = TODO()
    }
}
```

You may notice that we changed the type of donuts that the box contains to the DonutOptics class. This is just to simplify the example, so that we can modify this class with optics without much interference. Let’s do it now:

```kotlin
data class DonutOptics(
    val name: String,
    val calories: Int,
    val allergens: List<String> = listOf()
) {
    companion object {
        val name: Lens<DonutOptics, String> = TODO()
        val calories: Lens<DonutOptics, Int> = TODO()
        val allergens: Lens<DonutOptics, List<String>> = TODO()
    }
}
```

Again, we repeat the same pattern, where a companion object holds the lenses. Now, let’s implement one of the lenses. We’ll start with Donut.name, as it’s the simplest one:

```kotlin
data class DonutOptics(
    val name: String,
    ...
) {
    companion object {
        val name: Lens<DonutOptics, String> = Lens(
            get = { donut -> donut.name },
            set = { donut, name -> donut.copy(name = name) }
        )
        ...
    }
}
```

As you can see, a lens is just a pair of a getter and a setter. While the getter is trivial, in the setter we make sure to return a copy of the object, keeping it immutable.
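To show there is no magic behind the concept, here is a minimal hand-rolled lens in plain Kotlin. MiniLens and PlainDonut are made-up names; Arrow's Lens additionally offers composition operators and code generation.

```kotlin
// A lens is just a getter paired with a copying setter.
data class MiniLens<S, A>(
    val get: (S) -> A,
    val set: (S, A) -> S
) {
    // modify = read the focus, transform it, write it back into a copy
    fun modify(source: S, f: (A) -> A): S = set(source, f(get(source)))
}

data class PlainDonut(val name: String, val calories: Int)

val donutName = MiniLens<PlainDonut, String>(
    get = { donut -> donut.name },
    set = { donut, name -> donut.copy(name = name) }
)

fun main() {
    val donut = PlainDonut("Ginger Donut", 1500)
    println(donutName.get(donut))
    println(donutName.modify(donut) { it.uppercase() })
}
```

Since the setter goes through copy(), the original object is never mutated; modify always returns a fresh instance.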


You can repeat exactly the same process to introduce lenses for all the other attributes. In fact, the process is so repetitive that Arrow provides a compiler plugin to do it for you. To enable it, add the following to the plugins {} block of your build.gradle.kts file:

```kotlin
id("com.google.devtools.ksp") version "2.0.0-RC1-1.0.20"
```

And this to the dependencies block:

```kotlin
ksp("io.arrow-kt:arrow-optics-ksp-plugin:1.2.4")
```

Once done, you can redeclare your DonutBoxOptics as follows:

```kotlin
@optics
data class DonutBoxOptics(val donuts: List<DonutOptics>) {
    companion object
}
```

Note that you still need the companion object, but it remains empty; the plugin will generate all the required code for you. Now, let’s use our newly introduced lenses to create a correct copy of our donut box:

```kotlin
val donutBoxCorrectAllergensOptics =
    DonutBoxOptics.donuts.modify(donutBox) { donuts ->
        donuts.map { donut ->
            DonutOptics.allergens.modify(donut) { allergens ->
                allergens + "Wheat"
            }
        }
    }

println(donutBoxCorrectAllergensOptics)
```

This code does exactly the same thing as the previous copy() example, just with optics. You may think: “But wait, this code isn’t much shorter than the previous code.” You’re right, and it’s time to introduce two more important concepts: composition of optics and traversals. Traversals are to optics what the map function is to collections.


To traverse all donuts in a box, we can use the every() function, passing the Every.list() traversal. This is basically a Strategy pattern, and traversals are available for some other common data structures as well, such as Every.map() and Every.either():

```kotlin
DonutBoxOptics.donuts.every(Every.list())
```

The compose function can now combine lenses to get at nested values. That is, for every donut in a box, we get the allergens nested in it:

```kotlin
val donutBoxAllergensOptics =
    DonutBoxOptics.donuts.every(Every.list()) compose DonutOptics.allergens
```

All that’s left to do is pass this optic the object we operate on (the DonutBox) and what we want to do with the allergens list:

```kotlin
println(donutBoxAllergensOptics.modify(donutBox) { it + "Wheat" })
```

Now, if you compare this example with the one using copy(), you might agree that it is neater. And the benefits only increase the more nested your data structures are. There is just one small drawback when using the plugin: the code needs to be regenerated every time the corresponding classes change.
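The two ideas, composing optics and traversing a focused list, can also be sketched by hand in plain Kotlin. All names here are made up, and the sketch is far simpler than Arrow's compose and Every machinery; it only aims to show what composition and a list traversal do.

```kotlin
// Hand-rolled lens with composition plus a list "traversal" (illustrative only).
data class ComposableLens<S, A>(val get: (S) -> A, val set: (S, A) -> S) {
    fun modify(source: S, f: (A) -> A): S = set(source, f(get(source)))

    // A lens from S to A composed with a lens from A to B focuses S on B.
    infix fun <B> compose(other: ComposableLens<A, B>): ComposableLens<S, B> =
        ComposableLens(
            get = { s -> other.get(get(s)) },
            set = { s, b -> set(s, other.set(get(s), b)) }
        )
}

// A traversal over a focused list: apply the function to every element.
fun <S, A> ComposableLens<S, List<A>>.every(): (S, (A) -> A) -> S =
    { source, f -> modify(source) { list -> list.map(f) } }

data class Bag(val labels: List<String>)
data class Shelf(val bag: Bag)

fun main() {
    val shelfBag = ComposableLens<Shelf, Bag>({ it.bag }, { s, b -> s.copy(bag = b) })
    val bagLabels = ComposableLens<Bag, List<String>>({ it.labels }, { b, l -> b.copy(labels = l) })
    val shelfLabels = shelfBag compose bagLabels

    val shelf = Shelf(Bag(listOf("Milk", "Nuts")))
    val upperAll = shelfLabels.every()
    println(upperAll(shelf) { label -> label.uppercase() })
}
```

The composed lens reads and writes a deeply nested field through copies, and the traversal spreads a modification over every element of the focused list, which is exactly what the Every.list() example above does with Arrow.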

Summary

This chapter has showcased practical applications of functional programming using the Arrow library, along with implementations of various patterns discussed in previous chapters. The Arrow library provides a powerful toolset for functional programming, enhancing Kotlin’s capabilities in handling immutability, concurrent state modifications, resilience, and data manipulation.

Arrow also offers a sophisticated approach to error handling with its support for typed errors and the Either data type. Typed errors in Arrow enhance the robustness of Kotlin programs by making potential errors explicit and manageable at compile time. This approach promotes proactive error handling, contrasting with Kotlin’s traditional exception handling, which can lead to unpredictable runtime errors and code that is more challenging to maintain.

The Either type in Arrow is particularly effective, representing a value as either a success type or an error type. This distinction allows developers to handle successful outcomes and errors in a unified, type-safe manner. Functions that return Either explicitly state their error cases, improving code clarity and reliability.


This pattern is especially valuable in scenarios where functions might fail for multiple reasons. By using Either, developers can create more expressive, safer APIs that clearly communicate potential failures to the calling code, resulting in more robust and maintainable applications.

Arrow introduces STM as a solution for safely handling concurrent state modifications. STM utilizes TVars to ensure that operations fully succeed or fail without side effects, making it highly suitable for scenarios requiring the coordination of multiple operations as a single atomic transaction.

The resilience module in Arrow provides tools for gracefully handling failures in distributed systems. It implements key patterns like Retry, Circuit Breaker, and Saga, each targeting different aspects of resilience. Retry involves reattempting operations under specific conditions, Circuit Breaker protects services from overload by failing fast, and Saga manages complex, distributed transactions with compensatory actions.

The Saga pattern, Arrow’s solution for managing distributed transactions, pairs each action in a distributed system with a compensatory action. This pairing ensures system consistency, even in the face of partial failures. Sagas are essential in scenarios where multiple interdependent operations across services must succeed or fail together.

Arrow also bolsters Kotlin’s emphasis on immutability, a feature particularly beneficial in handling complex, nested data structures. The Arrow Optics library streamlines the manipulation of immutable data with lenses, traversals, and other optics. These optics facilitate efficient and elegant modifications of deeply nested structures, eliminating the need for verbose nested copy calls.

In conclusion, the Arrow library significantly enriches Kotlin’s functional programming ecosystem. It offers advanced solutions for immutable data manipulation, concurrency control, and system resilience. Integrating Arrow into Kotlin projects boosts code reliability, maintainability, and readability, especially in complex, distributed applications.

In the upcoming chapter, we will apply these skills and insights by developing a real-life microservice, demonstrating how these Kotlin features can be effectively utilized in a practical scenario using the Ktor framework.


Questions

1. Explain the concept of typed errors in Arrow and how the Either data type enhances error handling in Kotlin programs.
2. Describe the role of TVars in Arrow’s STM and explain how they differ from regular variables.
3. In what scenarios would using Arrow’s Optics library be more beneficial than using traditional methods for modifying immutable data?

Learn more on Discord

Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

11

Concurrent Microservices with Ktor

In the previous chapter, we explored practical applications of the Arrow framework through some concrete examples, but we didn’t put a complete application together. This chapter will apply the skills we’ve acquired so far by building a complete microservice. We aim for this microservice to be reactive and to mirror real-life scenarios. To achieve this, we’ll employ the Ktor framework, whose benefits we’ll enumerate in this chapter’s first section.

In this chapter, we will cover the following topics:

• Getting started with Ktor
• Routing requests
• Connecting to a database
• Configuration management in Ktor
• CRUD operations on entities
• Testing CRUD operations
• Organizing routes in Ktor
• Achieving concurrency in Ktor

By the end of this chapter, you’ll have developed a Kotlin-based microservice that is thoroughly tested, as well as capable of reading data from a PostgreSQL database and storing data within it.


Technical requirements

This chapter introduces the use of a PostgreSQL database. To sidestep the requirement of installing a specific version of PostgreSQL on your local machine, a Docker Compose configuration file is provided. If you don’t have Docker installed, installation instructions can be found here: https://docs.docker.com/get-docker/.

The source code for this chapter is available at https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter11.

Getting started with Ktor

You might be weary of developing commonplace applications like to-do or shopping lists. Thus, in this chapter, we will design a microservice for a cat shelter. The microservice will have capabilities to:

• Offer an endpoint to verify the service’s operational status
• Display a list of cats currently residing in the shelter
• Enable the addition of new cats
• Update cat information
• Remove a cat added by mistake from the registry

For this project, we will utilize Ktor, a concurrent framework developed and maintained by the creators of the Kotlin programming language.


The easiest way to create a Ktor application nowadays is by going to https://start.ktor.io and generating a new project.

Figure 11.1: Project generation on start.ktor.io

For the moment, exclude all additional plugins; we will introduce them gradually later, explaining the purpose of each. After downloading the resulting archive, unzip it and open the folder in IntelliJ IDEA.


You will then see the following structure:

Figure 11.2: Project structure

Next, open build.gradle.kts. You should see the following dependencies in the dependencies block:

```kotlin
dependencies {
    ...
    implementation("io.ktor:ktor-server-core-jvm")
    implementation("io.ktor:ktor-server-netty-jvm")
    testImplementation("org.jetbrains.kotlin:kotlin-test-junit:$kotlin_version")
}
```

This code specifies all the libraries your project will use. Since .kts files are Kotlin files, we use regular Kotlin syntax, including values and string interpolation for library versioning, such as $kotlin_version.


IMPORTANT NOTE: The latest version of Ktor at the time of writing is 2.3.9, but it may have been updated since. Check the latest version at Ktor’s official site.

The version is controlled by the following plugin:

```kotlin
plugins {
    ...
    id("io.ktor.plugin") version "2.3.9"
}
```

The implementation() configuration indicates that the library is used in all phases; testImplementation() is for use during tests only. To enable our project to run as an application, we must specify the main class in the configuration:

```kotlin
application {
    mainClass.set("com.example.ApplicationKt")
    ...
}
```

Next, we’ll examine the contents of the Application.kt file, which should include the following code:

```kotlin
fun main() {
    embeddedServer(
        Netty,
        port = 8080,
        module = Application::module
    ).start(wait = true)
}

fun Application.module() {
    configureRouting()
}
```

This setup ensures that when the application runs, it starts the server using Ktor’s embeddedServer function and listens on port 8080. The module function is designated as the module to load. Modules help us organize code in Ktor, and we’ll discuss them in more detail later in this chapter.
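The configuration style used by embeddedServer can be sketched as a plain-Kotlin builder with a lambda with receiver. ServerConfig and buildServer below are made-up names for illustration, not Ktor's API: the point is only that callers override the defaults they care about and leave the rest alone.

```kotlin
// Hypothetical Builder-style server configurator (not Ktor's API).
class ServerConfig {
    var port: Int = 80           // sensible defaults that the caller may override
    var host: String = "0.0.0.0"
    var engine: String = "netty"
}

// A lambda with receiver lets callers set only the properties they need.
fun buildServer(configure: ServerConfig.() -> Unit): ServerConfig =
    ServerConfig().apply(configure)

fun main() {
    val config = buildServer {
        port = 8080
        engine = "cio"
    }
    println("${config.engine} on ${config.host}:${config.port}")
}
```

This shape, defaults plus a configuration block, is the same Builder idea the next paragraph identifies in embeddedServer() itself.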


The embeddedServer() function embodies the Builder pattern, as discussed in Chapter 2, Working with Creational Patterns. It configures our server, with most arguments having defaults. Here, we specifically set the port to 8080. The wait argument is set to true, ensuring the server remains active and awaiting incoming requests.

The mandatory argument for embeddedServer is the server engine. We use Netty, a well-known JVM library, but CIO, developed by JetBrains, is another viable option. CIO and Netty both utilize the Factory pattern to instantiate the server. This integration of multiple design patterns creates a flexible and extensible architecture.

Choosing between CIO and Netty as Ktor’s server engine hinges on your project’s specific needs, team expertise, and performance requirements. CIO, designed with Kotlin’s coroutines, offers a more straightforward approach for Kotlin-heavy projects, enhancing code readability and maintainability. It’s generally lighter, presenting advantages for applications seeking efficiency and a smaller footprint, particularly in resource-limited environments. CIO’s Kotlin-aligned design and development by JetBrains, the creators of Kotlin, suggest better long-term support and compatibility with Kotlin’s evolution. Conversely, Netty, known for its maturity and stability, is a robust choice for large-scale production environments. It excels in high-concurrency and throughput scenarios, backed by a vast community and comprehensive documentation.

Ultimately, while CIO appeals with its simplicity and native Kotlin integration, Netty stands out for its feature-richness and proven performance. The decision should align with your project’s constraints and goals. To switch to using CIO, a new dependency is required:

```kotlin
dependencies {
    ...
    implementation("io.ktor:ktor-server-cio")
}
```

Then, update the server engine in the embeddedServer function to CIO:

```kotlin
embeddedServer(CIO, port = 8080) {
    ...
}.start(wait = true)
```


No other code changes are needed when switching server engines, because embeddedServer() utilizes the Bridge design pattern, allowing for interchangeable components.

Ktor typically organizes code into modules. The module in our example is defined as an extension function on the Application object:

```kotlin
fun Application.module() {
    configureRouting()
}
```

With this setup, initiating the web server will yield a response of “Hello World!” when http://localhost:8080 is accessed in a browser. Let’s now open the Routing.kt file and see how this is achieved:

```kotlin
fun Application.configureRouting() {
    routing {
        get("/") {
            call.respondText("Hello World!")
        }
    }
}
```

We use the call object, or context, to manage requests and responses. This object provides methods to parse requests and respond in various formats, which we will examine further in this chapter. Now that our server is operational, let’s delve into defining different responses for each request to the server.

Routing requests

Now, let’s examine the routing block more closely:

```kotlin
routing {
    get("/") {
        call.respondText("Hello World!")
    }
}
```

This block defines all the URLs that our server will handle. Currently, it only manages the root URL. When this URL is requested, the server returns the text response “Hello World!” to the user.
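Conceptually, a routing block builds a dispatch table from method and path pairs to handlers. A toy sketch of that idea in plain Kotlin (the Router class below is made up, and this is not how Ktor is implemented internally):

```kotlin
// Toy routing table: just the dispatch idea, not Ktor's machinery.
typealias Handler = () -> String

class Router {
    private val routes = mutableMapOf<Pair<String, String>, Handler>()

    fun get(path: String, handler: Handler) {
        routes["GET" to path] = handler
    }

    // Look up a handler for the (method, path) pair, or fall back to a 404.
    fun handle(method: String, path: String): String =
        routes[method to path]?.invoke() ?: "404 Not Found"
}

fun main() {
    val router = Router()
    router.get("/") { "Hello World!" }
    println(router.handle("GET", "/"))
    println(router.handle("GET", "/missing"))
}
```

Registering more routes simply adds more entries to the table, which is why adding endpoints to the routing block later in this chapter is so cheap.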


Next, we explore how to return a JSON response:

```kotlin
get("/status") {
    call.respond(mapOf("status" to "OK"))
}
```

Instead of the respondText() method, we use respond(), which accepts an object rather than a string. In this example, we pass a map of strings. However, running this code as is will cause an exception, as objects aren’t automatically serialized into JSON. We can overcome this by using the kotlinx-serialization and ktor-server-content-negotiation libraries. We begin by adding them to our dependencies:

```kotlin
dependencies {
    ...
    implementation("io.ktor:ktor-server-content-negotiation-jvm")
    implementation("io.ktor:ktor-serialization-kotlinx-json")
}
```

Next, add the following block before the routing{} block:

```kotlin
install(ContentNegotiation) {
    json()
}

routing {
    ...
}
```

Running the code now, our browser will display:

> {"status":"OK"}

We have successfully set up our first route, which returns a serialized JSON object. To confirm the functionality of our application, navigate to http://localhost:8080/status in your web browser. However, this approach is somewhat manual and cumbersome. In the upcoming section, we will learn how to create a test for the /status endpoint, offering a more automated solution.

Testing the service

To begin our first test, create a new file named ServerTest.kt in the src/test/kotlin directory.


First, ensure that you have this dependency in your dependencies block:

```kotlin
dependencies {
    ...
    testImplementation("io.ktor:ktor-server-tests-jvm")
}
```

Then, insert the following content into the ServerTest.kt file:

```kotlin
class ServerTest {
    @Test
    fun testStatus() {
        testApplication {
            application {
                mainModule()
            }
            val response = client.get("/status")
            assertEquals(HttpStatusCode.OK, response.status)
            assertEquals("""{"status":"OK"}""", response.bodyAsText())
        }
    }
}
```

In Kotlin, tests are structured within classes, where each test is represented as a method annotated with @Test. This organization relies on the testing framework being used. JUnit is the default choice, but Kotest (https://kotest.io/), which operates without annotations, is another viable option. The test method initiates a test server using the testApplication{} block. Here, we specify the application modules to test. Our current application, being simple with only a single module, is fully tested, but as the application expands, it’s practical for each test to target specific modules. Within the testApplication{} block, we are provided a simple HTTP client. We issue a GET request to the /status endpoint and verify the response’s status code and JSON body. To understand it better, let’s make a short detour and discuss how the HTTP client works in Ktor.


Connecting to other HTTP services

While the application developed in this chapter doesn’t make requests to external services for simplicity, nearly every real-world service must communicate with other services over the network. To facilitate this, Ktor provides the HttpClient. In our tests, the client is created automatically, but we can also define it explicitly. First, we need to add new dependencies to our project:

```kotlin
dependencies {
    ...
    implementation("io.ktor:ktor-client-core")
    implementation("io.ktor:ktor-client-cio")
    implementation("io.ktor:ktor-client-content-negotiation")
}
```

The ktor-client-core artifact is the core dependency providing the main client functionality, while ktor-client-cio is an engine dependency for processing network requests. The Ktor HTTP client, built on Kotlin coroutines, is designed for asynchronous and efficient HTTP requests to external services. It supports non-blocking IO operations, ideal for concurrent tasks and handling multiple requests. Creating a client instance in Ktor is very similar to creating a server instance:

```kotlin
val client = HttpClient(CIO) {
    install(ContentNegotiation) {
        json()
    }
}
```

Here, the ContentNegotiation plugin is installed, and the json() function configures JSON serialization using the default JSON serializer from Ktor, typically based on kotlinx.serialization. The client is highly customizable with various features and plugins. It’s crucial to manage the lifecycle of the client properly in long-running applications, reusing the client instance instead of creating new ones for each request. Now, since we understand how the HTTP client works a bit more, let’s go back to our testStatus() test.


So far, we’ve concentrated on the service’s infrastructure, rather than its core functionality of managing cats. For this purpose, a database is required. In the next section, we’ll look at how Ktor addresses this using the Exposed library.

Connecting to a database

To manage and access cats in our application, a database connection is necessary. We’ll use PostgreSQL, but the process is similar for other SQL databases. Firstly, we require a library to facilitate this connection. We’ll employ the Exposed library by JetBrains, offering a Kotlin-friendly approach to interacting with relational databases. Start by adding the following dependencies to the build.gradle.kts file:

```kotlin
val exposedVersion = "0.48.0"

dependencies {
    ...
    implementation("org.jetbrains.exposed:exposed-core:$exposedVersion")
    implementation("org.jetbrains.exposed:exposed-dao:$exposedVersion")
    implementation("org.jetbrains.exposed:exposed-jdbc:$exposedVersion")
    implementation("org.postgresql:postgresql:42.5.1")
}
```

With the libraries set up, the next step is establishing the database connection. Create a new file named DB.kt in /src/main/kotlin and include the following content:

```kotlin
object DB {
    private val host = System.getenv("DB_HOST") ?: "localhost"
    private val port = System.getenv("DB_PORT")?.toIntOrNull() ?: 5432
    private val dbName = System.getenv("DB_NAME") ?: "cats_db"
    private val dbUser = System.getenv("DB_USER") ?: "cats_admin"
    private val dbPassword = System.getenv("DB_PASSWORD") ?: "abcd1234"

    fun connect() = Database.connect(
        url = "jdbc:postgresql://$host:$port/$dbName",
        driver = "org.postgresql.Driver",
        user = dbUser,
        password = dbPassword
    )
}
```

In this setup, we use the Singleton pattern for the DB object to ensure a single database instance, a concept we discussed in Chapter 2, Working with Creational Patterns. For each variable required to connect to the database, we first attempt to read it from an environment variable. If the environment variable is not set, we fall back to a predefined value using the Elvis operator.

To initiate the PostgreSQL database locally, execute this command in your terminal from the project folder:

```shell
$ docker-compose up
```

This command launches a database inside a Docker container and makes it accessible on localhost:5432 for our application. Currently, the database configuration is set to read from environment variables. However, Ktor provides a more efficient method for managing this. Let’s explore this approach.

Configuration management in Ktor

Configuration management is a key aspect of customizing applications in Ktor, as it enables you to define and organize crucial settings, including server ports, database credentials, and API keys. Ktor, by default, searches for an application.conf file in the src/main/resources directory. We’ll be working with a file in the HOCON (Human-Optimized Config Object Notation) format, although Ktor also accommodates other formats like YAML.

HOCON is particularly effective because it offers a user-friendly approach to configuration. It is a human-readable and -writable format, designed to simplify the process of configuring applications. HOCON’s structure and syntax make it straightforward for developers to manage complex configurations, enhancing both efficiency and clarity.

This file typically includes configuration such as server port, host, and settings for various modules. You can access these configuration values in Ktor using environment.config.property(path), where path is the configuration key. Different configurations for environments like development, staging, and production are supported through separate files.
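The lookup-with-optional-override behavior can be sketched against a plain map. SimpleConfig and propertyOrEnv below are hypothetical helpers, not Ktor's ApplicationConfig API; they only illustrate the idea of a keyed configuration where an environment variable, when present, wins.

```kotlin
// Hypothetical stand-in for a keyed configuration source (not Ktor's API).
class SimpleConfig(private val values: Map<String, String>) {
    fun property(path: String): String =
        values[path] ?: error("No configuration value found for '$path'")

    // An environment variable, when present, overrides the file value,
    // mirroring HOCON's optional `key = ${'$'}{?ENV_VAR}` substitution.
    fun propertyOrEnv(path: String, envVar: String): String =
        System.getenv(envVar) ?: property(path)
}

fun main() {
    val config = SimpleConfig(mapOf("db.host" to "localhost", "db.port" to "5432"))
    println(config.property("db.host"))
    println(config.propertyOrEnv("db.port", "DB_PORT"))
}
```

Missing keys fail loudly rather than silently defaulting, which is also how Ktor behaves when a required property is absent from application.conf.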


Our DB object can be rewritten to read configurations from this file:

```kotlin
object DB {
    ...
    fun connect(config: ApplicationConfig) = connect(
        host = config.property("db.host").getString(),
        port = config.property("db.port").getString().toInt(),
        dbName = config.property("db.dbName").getString(),
        dbUser = config.property("db.dbUser").getString(),
        dbPassword = config.property("db.dbPassword").getString()
    )

    private fun connect(
        host: String,
        port: Int,
        dbName: String,
        dbUser: String,
        dbPassword: String
    ) = Database.connect(
        url = "jdbc:postgresql://$host:$port/$dbName",
        driver = "org.postgresql.Driver",
        user = dbUser,
        password = dbPassword
    )
}
```

In this setup, environment variable references are removed from the code for security, and secrets are not stored in code. In a production environment, configurations should be read from runtime-provided environment parameters. The application.conf may look like this:

```
db {
    host = ${DB_HOST}
    port = 5432
    port = ${?DB_PORT}
    dbName = ${DB_NAME}
    dbUser = ${DB_USER}
    dbPassword = ${DB_PASSWORD}
}
```

Here, default values are specified first and can be overridden by environment variables, as in port = ${?DB_PORT}. The ? indicates that the environment variable is optional. Note the importance of the parameter order: specify the default value first, then the environment variable. This connect function should be called during application startup or test setup, typically from the Ktor application environment:

```kotlin
fun Application.mainModule() {
    DB.connect(environment.config)
    // Rest of the application setup
}
```

For the sake of simplicity, we’ll continue to use environment variables to streamline our tests. With the configuration set, you’re ready to define the first table in the database.

For the sake of simplicity, we’ll continue to use environment variables to streamline our tests. With the configuration set, you’re ready to define the first table in the database.

Defining tables with Exposed

In order to work with our database, the Exposed framework needs to manage the tables in it. Let’s define our first table, which will hold all the information about our cats. In the DB.kt file, define a Singleton object to represent this table:

```kotlin
object CatsTable : IntIdTable() {
    val name = varchar("name", 20).uniqueIndex()
    val age = integer("age").default(0)
}
```

Let’s break down this definition: IntIdTable signifies a table with an Int-type primary key. Other types, such as Long and UUID, are also supported. The object’s body defines the columns:

• The name column is a varchar string of up to 20 characters.
• The age column is an integer, defaulting to 0.
• The name column is unique, meaning each cat must have a distinct name.


A data class represents individual cats:

```kotlin
data class Cat(val id: Int, val name: String, val age: Int)
```

Finally, add the following code to the mainModule() function:

```kotlin
fun Application.mainModule() {
    DB.connect(environment.config)
    transaction {
        SchemaUtils.create(CatsTable)
    }
    ...
}
```

Every time the application starts, this code establishes a database connection using DB.connect() and then proceeds to a transaction block. In the realm of Exposed, a transaction represents a series of one or more SQL statements that are executed collectively as a single unit. The importance of transactions lies in their ability to uphold the database’s consistency and integrity.

The transaction block utilizes the statement SchemaUtils.create(CatsTable) to construct the CatsTable in the database. It defaults to the name cats, as no specific table name is provided. If the table already exists, the process takes no further action, thus maintaining the consistency of the database schema.

The use of a transaction here is vital, as it ensures that the creation of the CatsTable is an atomic operation. In other words, the table creation either completely succeeds, or any partial modifications are reverted if an error arises during the process. Such atomicity is crucial in ensuring the database remains in a consistent state, averting the risks of incomplete or inconsistent data structures.

Now that a database connection is established and the necessary table is created, subsequent code can leverage this connection to interact with the CatsTable.
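The all-or-nothing property can be illustrated with a small in-memory sketch. In reality the database engine provides this guarantee, not Kotlin code, and InMemoryStore is a made-up teaching aid; it only mimics the rollback-on-failure behavior.

```kotlin
// In-memory illustration of all-or-nothing (atomic) semantics.
class InMemoryStore {
    val rows = mutableMapOf<Int, String>()

    fun <T> transaction(block: MutableMap<Int, String>.() -> T): T {
        val snapshot = HashMap(rows) // remember the state before the unit of work
        return try {
            rows.block()
        } catch (e: Exception) {
            rows.clear()
            rows.putAll(snapshot)    // roll back every partial modification
            throw e
        }
    }
}

fun main() {
    val store = InMemoryStore()
    store.transaction { put(1, "Meatloaf") }
    runCatching {
        store.transaction {
            put(2, "Whiskers")
            throw IllegalStateException("constraint violation")
        }
    }
    println(store.rows) // only the committed row remains
}
```

The second transaction partially modified the map before failing, yet after the rollback only the first, committed row survives, which is exactly the guarantee the transaction{} block gives us for CatsTable.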

Creating new entities We’re now ready to add the first cat to our virtual shelter. Following the REST principles, this should be done via a POST request, with the request body resembling: {"name": "Meatloaf", "age": 4}

Concurrent Microservices with Ktor


We’ll start by crafting a new test. Let’s add the following function to our ServerTest.kt file: @Test fun `POST creates a new cat`() { ... }

Using backticks in Kotlin allows function names to contain spaces, enabling more descriptive test names. Let’s delve into the test’s body: @Test fun `POST creates a new cat`() { testApplication { application { mainModule() } val response = client.post("/cats") { header(HttpHeaders.ContentType, ContentType.Application.FormUrlEncoded.toString()) setBody( listOf( "name" to "Meatloaf", "age" to 4.toString() ).formUrlEncode() ) } assertEquals(HttpStatusCode.Created, response.status) } }

As previously discussed, the testApplication{} block sets up the testing environment. Here, we make a POST request, which requires appropriate headers set via the header() function, and the body being set as discussed above. We check if the HTTP response status is 201 Created. Running this test now will result in a 404 error, as the post /cats endpoint isn’t implemented yet.



Let’s enhance our routing block to include this endpoint: routing { post("/cats") { ... call.respond(HttpStatusCode.Created) } }

To create a new cat, we need to read the POST request’s body using receiveParameters(): val parameters: Parameters = call.receiveParameters() val name = requireNotNull(parameters["name"]) val age = parameters["age"]?.toInt() ?: 0

The receiveParameters() function returns a case-insensitive map. We fetch the cat’s name and default the age to 0 using the Elvis operator if it’s not provided. Next, we add the values to the database: transaction { CatsTable.insertAndGetId { cat -> cat[CatsTable.name] = name cat[CatsTable.age] = age } }

We use a transaction{} block to modify the database, utilizing insertAndGetId() to populate the new row with name and age. Running the test now should pass, but a second run will fail due to the unique constraint on cat names and a lack of database cleaning between tests. To ensure test consistency, we need a method to reset the database after each test run.
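The parameter-reading pattern used above (require the name, default the age with the Elvis operator) can be tried in isolation with a plain map standing in for Ktor's Parameters; parseCatParams is our own helper name, not part of Ktor:

```kotlin
// A plain map stands in for Ktor's Parameters here; parseCatParams is our own helper.
fun parseCatParams(params: Map<String, String>): Pair<String, Int> {
    val name = requireNotNull(params["name"]) { "name is required" }
    val age = params["age"]?.toInt() ?: 0   // the Elvis operator supplies the default
    return name to age
}

fun main() {
    println(parseCatParams(mapOf("name" to "Meatloaf", "age" to "4")))  // (Meatloaf, 4)
    println(parseCatParams(mapOf("name" to "Snowball")))                // (Snowball, 0)
}
```

Note that requireNotNull throws IllegalArgumentException when the name is missing, which in a real route you would translate into a 400 Bad Request.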

Making the tests consistent To enhance our test setup for consistent results, let’s modify the ServerTest class to include cleanup after all tests have been executed. Set up the environment before all tests with the @BeforeAll annotation: @BeforeAll fun setup() { DB.connect() transaction { SchemaUtils.create(CatsTable) } }

And add the following cleanup method using the @AfterAll annotation: @AfterAll fun cleanup() { DB.connect() transaction { SchemaUtils.drop(CatsTable) } }

Here, @BeforeAll creates the necessary table before any test runs, while @AfterAll ensures the database table is dropped after all tests in the class have run. This setup replaces the @BeforeEach approach we had before, which runs before each individual test, with a class-level setup and teardown. But we’ll have some more uses for the @BeforeEach annotation later in this chapter. For this configuration to function correctly, we also need to add the @TestInstance annotation to our test class: @TestInstance(TestInstance.Lifecycle.PER_CLASS) class ServerTest { ... }

The default lifecycle for a test class in JUnit is PER_METHOD, under which the @BeforeAll and @AfterAll methods are required to be static at the JVM level. To run the setup once and perform cleanup after all tests without static methods, we set our test class lifecycle to PER_CLASS. With these changes, our tests should now run consistently. In the upcoming section, we’ll explore fetching all cats from the database using the Exposed library.
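As an alternative to annotating each test class, JUnit 5 also lets you set the default lifecycle for the whole module in a src/test/resources/junit-platform.properties file:

```properties
# Makes PER_CLASS the default lifecycle for every test class in this module
junit.jupiter.testinstance.lifecycle.default = per_class
```

With this in place, the @TestInstance annotation becomes unnecessary, though keeping it on the class makes the requirement explicit to readers.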

Fetching all entities First, we’ll organize our tests by grouping them into a nested class: @Nested inner class `With cat in DB` {



@Test fun `GET with ID fetches a single cat`() { ... } }

Nested test classes are excellent for encapsulating specific test scenarios. In our case, we want to run tests under the condition that a cat already exists in the database. Add this setup and teardown to the nested test class: private lateinit var id: EntityID<Int> @BeforeEach fun setup() { DB.connect() id = transaction { CatsTable.insertAndGetId { cat -> cat[name] = "Fluffy" cat[age] = 2 } } } @AfterEach fun teardown() { DB.connect() transaction { CatsTable.deleteAll() } }

Before each test, we create a cat in the database and delete all cats after each test. We keep track of the created cat’s ID in a variable. Our test for fetching all cats will look like this: @Test fun `GET without ID fetches all cats`() { testApplication { application { mainModule() } val response = client.get("/cats") assertEquals("""[{"id":$id,"name":"Fluffy","age":2}]""", response.bodyAsText()) } }

In this test, the cat’s ID is interpolated into the expected response, since it changes with each execution. Let’s implement the new route to fetch all cats: get("/cats") { val cats = transaction { CatsTable.selectAll().map { row -> Cat( row[CatsTable.id].value, row[CatsTable.name], row[CatsTable.age] ) } } call.respond(cats) }

We use selectAll() to retrieve all rows from the table, mapping each row to our data class. However, running this test will fail with a serialization error. By default, Ktor doesn’t automatically serialize custom data classes into JSON. To resolve this, add the Kotlin serialization plugin to your build.gradle.kts file: plugins { ... kotlin("plugin.serialization") version "..." }



This plugin generates serializers at compile time for classes annotated with @Serializable. To make our test pass, annotate the Cat class: @Serializable data class Cat( val id: Int, val name: String, val age: Int )

With this addition, the test for fetching all cats should now pass. After learning how to fetch all cats in the database, let’s focus on retrieving a single cat by ID.

Fetching a single entity Adhering to REST practices, we’ll set up two new routes for fetching cats: /cats for all cats and /cats/{id} for a specific cat, where {id} is the cat’s unique identifier. Let’s add these routes: routing { get("/cats") { ... } get("/cats/{id}") { ... } }

The first route is similar to the /status route we defined earlier. The second route uses a path parameter (not a query parameter) for the cat’s ID, indicated by curly brackets. To read the path parameter, we access the parameters map: val id = requireNotNull(call.parameters["id"]).toInt()



If an ID is provided, we attempt to fetch the corresponding cat from the database: val cat = transaction { val row = CatsTable.select { CatsTable.id eq id }.firstOrNull() if (row != null) { Cat( row[CatsTable.id].value, row[CatsTable.name], row[CatsTable.age] ) } else { null } }

In this scenario, a transaction is initiated, and a select statement is used to retrieve a cat with the specified ID. It’s important to note that Exposed requires a transaction for all database interactions, including purely read operations. If a cat is found, we respond with its JSON representation. If not, we return a 404 Not Found status: if (cat == null) { call.respond(HttpStatusCode.NotFound) } else { call.respond(cat) }

Now, let’s write a test to fetch a specific cat: @Test fun `GET with ID fetches a single cat`() { testApplication { application { mainModule() } val response = client.get("/cats/$id") assertEquals("""{"id":$id,"name":"Fluffy","age":2}""", response.bodyAsText()) } }

The next step is to tidy up our code. Currently, everything resides in a single file. It would be more efficient to separate all cat-related routes into a different file, which we’ll do in the next section.

Organizing routes in Ktor In Ktor, structuring multiple routes belonging to the same domain can be streamlined for better organization and readability. Our current routing{} block includes various endpoints, some of which are related to cats. Here’s how our routing block looks currently: routing { get("/status") { ... } post("/cats") { ... } get("/cats") { ... } get("/cats/{id}") { ... } }

To improve the structure, we can extract all cat-related routes into a separate function: fun Routing.catsRoutes() { ... }

In IntelliJ IDEA, there’s even an option to automatically generate an extension function on the Routing class.



To tidy this up, we replace the cat-related routes with a dedicated function: routing { get("/status") { ... } catsRoutes() }

We can now consolidate all our cat-related routes into a newly defined function: fun Routing.catsRoutes() { post("/cats") { // ... } get("/cats") { // ... } get("/cats/{id}") { // ... } }

To further streamline our code and reduce repetition, we can utilize the route function: fun Routing.catsRoutes() { route("/cats") { post { // ... } get { // ... } get("/{id}") { // ... } } }

This approach uses a single route block to group all routes under the /cats path, making the code cleaner and more maintainable. It separates concerns by grouping all cat-related routes together, and it’s a great example of idiomatic Ktor code organization. We can take this approach one step further by introducing service objects. Let’s take a look at the following example: get { val cats = transaction { CatsTable.selectAll().map { row -> Cat( row[CatsTable.id].value, row[CatsTable.name], row[CatsTable.age] ) } } call.respond(cats) }

In this case, we mix two levels of abstraction: one level involves working with a database, while the other deals with handling HTTP requests and responses. If we separate all the database operations into a distinct interface that is injected into our service, our code will become much more readable and easier to test for different concerns. Additionally, we are already familiar with all the necessary tools. First, every routing block can specify which services it needs: fun Routing.catsRoutes(service: CatsService) { ... }

Then, the logic of handling the database, such as opening transactions, can be easily encapsulated in that service.



For example, this would be our interface containing all the DB operations we have implemented so far: interface CatsService { fun findAll(): List<Cat> fun find(id: Int): Cat? fun create(name: String, age: Int): EntityID<Int> }

And the logic of handling transactions would move into the implementation of this service: class CatsServiceImpl : CatsService { override fun findAll(): List<Cat> { return transaction { CatsTable.selectAll().map { row -> Cat( row[CatsTable.id].value, row[CatsTable.name], row[CatsTable.age] ) } } } ... }

Now, in our initialization code, we can also initialize the service and pass it to the routes function: val catsService = CatsServiceImpl() routing { ... catsRoutes(catsService) }

And with that, our route becomes as simple as: get { val cats = service.findAll() call.respond(cats) }
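A further benefit of the CatsService abstraction is testability: the routes can be exercised against an in-memory implementation, with no database at all. Here is a sketch; InMemoryCatsService is our own name, and it uses a plain Int for IDs instead of Exposed's EntityID so that the example has no framework dependencies:

```kotlin
// In-memory sketch of the service (our own name; plain Int IDs instead of
// Exposed's EntityID to keep the example dependency-free).
data class Cat(val id: Int, val name: String, val age: Int)

interface CatsService {
    fun findAll(): List<Cat>
    fun find(id: Int): Cat?
    fun create(name: String, age: Int): Int
}

class InMemoryCatsService : CatsService {
    private val cats = mutableMapOf<Int, Cat>()
    private var nextId = 1

    override fun findAll(): List<Cat> = cats.values.toList()

    override fun find(id: Int): Cat? = cats[id]

    override fun create(name: String, age: Int): Int {
        // Mirror the database's unique constraint on the name column.
        require(cats.values.none { it.name == name }) { "name must be unique" }
        val id = nextId++
        cats[id] = Cat(id, name, age)
        return id
    }
}

fun main() {
    val service: CatsService = InMemoryCatsService()
    val id = service.create("Fluffy", 2)
    println(service.find(id))        // Cat(id=1, name=Fluffy, age=2)
    println(service.findAll().size)  // 1
}
```

Because the routes only see the interface, swapping this fake in for CatsServiceImpl in tests requires no changes to the routing code.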



Deleting an entity So far, we’ve simply dropped the table in our tests to delete cats. However, this approach is inadequate for a real application. We need to provide a way to delete a single cat by ID. Let’s implement the delete route: delete("/{id}") { val id = requireNotNull(call.parameters["id"]).toInt() val deleted = service.delete(id) if (deleted == 0) { call.respond(HttpStatusCode.NotFound) } else { call.respond(HttpStatusCode.OK) } }

Adding the new method to the interface we introduced is trivial: interface CatsService { ... fun delete(id: Int): Int }

We’ve already discussed reading parameters from the URL when fetching a single cat. The deletion logic using Exposed is similar to the select logic: override fun delete(id: Int): Int { return transaction { CatsTable.deleteWhere { CatsTable.id eq id } } }

The deleteWhere method returns the number of rows affected. If zero rows were affected, it indicates that no cat with the specified ID exists to delete. Now, let’s write a test for this deletion: @Test fun `DELETE deletes a cat`() { testApplication { application { mainModule() } val response = client.delete("/cats/$id") assertEquals(HttpStatusCode.OK, response.status) val deletedResponse = client.get("/cats/$id") assertEquals(HttpStatusCode.NotFound, deletedResponse.status) } }

This test is straightforward. We’re now left with one more CRUD operation to implement: updating cat details.

Updating an entity Finally, we need to implement functionality to update a cat. We start by defining the route: put("/{id}") { val id = requireNotNull(call.parameters["id"]).toInt() val parameters: Parameters = call.receiveParameters() val name = requireNotNull(parameters["name"]) val age = parameters["age"]?.toInt() ?: 0 val updated = ... if (updated == 0) { call.respond(HttpStatusCode.NotFound) } else { call.respond(HttpStatusCode.OK) } }

Here, we need to read both the URL parameters and the request body, similar to what we did when creating a cat.



Now, let’s look at how we can update an entity using Exposed: override fun update(id: Int, name: String, age: Int): Int = transaction { CatsTable.update({ CatsTable.id eq id }) { cat -> cat[CatsTable.name] = name cat[CatsTable.age] = age } }

The update syntax is a bit more complex, as it takes two blocks. The first block specifies conditions, like with select and delete. The second block sets new values, akin to the insert block. This operation combines the aspects of Exposed that we’ve learned so far. Let’s also write a test for this logic: @Test fun `PUT updates a cat`() { testApplication { application { mainModule() } val response = client.put("/cats/$id") { header(HttpHeaders.ContentType, ContentType.Application.FormUrlEncoded.toString()) setBody( listOf( "name" to "Meatloaf", "age" to 4.toString() ).formUrlEncode() ) } assertEquals(HttpStatusCode.OK, response.status) val updatedResponse = client.get("/cats/$id") assertEquals("""{"id":$id,"name":"Meatloaf","age":4}""", updatedResponse.bodyAsText()) } }



With this, we’ve covered an essential aspect of CRUD operations. Next, we’ll delve into Ktor’s concurrency capabilities. In Chapter 6, Threads and Coroutines, we discussed how Kotlin primarily achieves concurrency through coroutines. However, we haven’t yet explored starting a single coroutine. This will be our focus in the upcoming section.

Achieving concurrency in Ktor Looking back at the code we’ve written in this chapter, you may be under the impression that the Ktor code is not concurrent at all. However, this couldn’t be further from the truth. All Ktor functions used in this chapter are built upon coroutines and the concept of suspending functions. For each incoming request, Ktor initiates a new coroutine to handle it. This functionality is inherently supported by the CIO server engine, which is fundamentally based on coroutines. Ktor’s concurrency model is designed to be efficient yet unobtrusive, a crucial aspect of its architecture. Furthermore, the routing blocks, which define our endpoints, have access to CoroutineScope. This allows us to invoke suspending functions within these blocks. An example of such a suspending function is call.respond(), frequently used in our examples. Suspending functions offer opportunities for context switching and concurrent execution of other code. Consequently, the same resources can handle a significantly larger number of requests than they would in a non-concurrent environment. At this point, we’ll conclude and summarize what we’ve learned about application development using Ktor. The framework not only simplifies handling HTTP requests but also efficiently manages concurrency, thanks to its coroutine-based architecture.

Summary In this chapter, we’ve developed a thoroughly tested service using Kotlin, employing the Ktor framework for web functionalities and Exposed for database operations. We delved into Ktor’s use of design patterns, such as Factory, Singleton, and Bridge, which offer a flexible and well-organized framework for our code. We also explored database interaction using the Exposed framework, learning how to declare, create, and drop tables, as well as how to insert, fetch, and delete entities. Additionally, we’ve seen the application of Ktor’s HTTP client to make requests to external services, enriched by its support for JSON serialization and asynchronous operations.



We briefly touched on the Exposed framework’s features. Apart from the DSL API used in this chapter, Exposed also offers a DAO API, allowing interaction with objects instead of writing DSL queries. For further reading, see https://github.com/JetBrains/Exposed/wiki. Moreover, we discussed Ktor’s configuration management, highlighting how you can use it to cater for different environments and the flexibility of managing configurations with HOCON files and environment variables. But, of course, we couldn’t cover all that there is to know about Ktor. If you’re curious to learn more about the other features it provides, be sure to check out some of the official examples: https://ktor.io/learn/. In the next chapter, we’ll explore an alternative web application development approach using the Reactive framework Vert.x. This comparison will help us understand the differences and tradeoffs between concurrent and Reactive paradigms in web application development.

Questions 1. How are the Ktor applications structured, and what are their benefits? 2. What are plugins in Ktor, and what are they used for? 3. What is the main problem that the Exposed library solves?

Learn more on Discord Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

12

Reactive Microservices with Vert.x In the previous chapter, we familiarized ourselves with the Ktor framework and, with it, we created a web service that could store cats in its database. Picking up from our previous work, this chapter shifts our focus to using the Vert.x framework with Kotlin. In this chapter, we’ll write a similar service to the one we wrote in the previous chapter, using the Vert.x framework instead. Vert.x, a Reactive framework, ties in with the principles we discussed in Chapter 7, Controlling the Data Flow. It brings to the table numerous advantages that meet our project’s needs, such as increased scalability, a non-blocking development model, and the ability to handle many concurrent data flows efficiently. These attributes make Vert.x an excellent option for developing microservices that are both responsive and robust. In this chapter, we will not only enumerate the benefits of Vert.x but also explore its practical uses. We aim to demonstrate how it enables developers to build more effective, scalable, and manageable microservices. While we’ll touch on key features like its event-driven nature and lightweight design, an exhaustive analysis of all its aspects is beyond the scope of this chapter. For a comprehensive understanding of Vert.x, additional information is available on the official website: https://vertx.io.



The following topics will be the focus of this closing chapter:

•	Getting started with Vert.x
•	Routing requests
•	Verticles
•	Handling requests
•	Testing Vert.x applications
•	Working with databases
•	Understanding Event Loop
•	Communicating with Event Bus

By the end of this chapter, you will understand the distinct approaches employed by Ktor and Vert.x for developing web applications. This knowledge will enable you to confidently choose the most suitable approach for your next service.

Technical requirements Like the previous chapter, this chapter will also assume that you have Docker already installed and that you have basic knowledge of working with it. We’ll also use the same table structure we created with Ktor. You can find the full source code for this chapter here: https://github.com/PacktPublishing/Kotlin-Design-Patterns-and-Best-Practices_Third-Edition/tree/main/Chapter12.

Getting started with Vert.x Vert.x is a Reactive framework that is asynchronous and non-blocking. Let’s understand what this means by looking at a concrete example. We’ll start by creating a new Kotlin Gradle project. You can follow the steps from the previous chapter for that. Alternatively, you can also generate a new project by using start.vertx.io. Next, add the following dependencies to your build.gradle.kts file: val vertxVersion = "4.5.6" dependencies { implementation(platform("io.vertx:vertx-stack-depchain:$vertxVersion")) implementation("io.vertx:vertx-web") implementation("io.vertx:vertx-lang-kotlin")



implementation("io.vertx:vertx-lang-kotlin-coroutines") }

Similar to what we discussed in the previous chapter, all the dependencies must be of the same version to avoid any conflicts. That’s the reason why we are using a variable for the library version—to be able to change all of the libraries together. The following is an explanation of each dependency:

•	vertx-core is the core library.
•	vertx-web is needed since we want our service to be REST-based.
•	vertx-lang-kotlin provides idiomatic ways to write Kotlin code with Vert.x.
•	Finally, vertx-lang-kotlin-coroutines integrates with the Kotlin coroutines, which we discussed in detail in Chapter 6, Threads and Coroutines.

Then, we create a file called server.kt in the src/main/kotlin folder with the following content: fun main() { val vertx = Vertx.vertx() vertx.createHttpServer().requestHandler{ ctx -> ctx.response().end("OK") }.listen(8081) println("open http://localhost:8081") }

That’s all you need to start a web server that will respond with OK when you open http://localhost:8081 in your browser. Now, let’s understand what happens here. First, we create a Vert.x instance using the Factory Method from Chapter 2, Working with Creational Patterns. The requestHandler method is just a simple listener or a subscriber. If you don’t remember how it works, check out Chapter 4, Getting Familiar with Behavioral Patterns, for the Observer design pattern. In our case, it will be called for each new request. That’s the asynchronous nature of Vert.x in action. Next, let’s learn how to add routes in Vert.x.
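The handler-as-subscriber idea can be modeled without Vert.x in a few lines. TinyServer and its methods below are our own names, not the Vert.x API; the point is that a callback is registered once and then invoked for every incoming request:

```kotlin
// Toy model of the handler-as-subscriber idea (TinyServer is ours, not Vert.x's API).
class TinyServer {
    private var handler: ((String) -> String)? = null

    // Register a callback to be invoked once per "request", fluent-style.
    fun requestHandler(h: (String) -> String): TinyServer {
        handler = h
        return this
    }

    fun receive(request: String): String =
        handler?.invoke(request) ?: error("no handler registered")
}

fun main() {
    val server = TinyServer().requestHandler { _ -> "OK" }
    println(server.receive("GET /"))       // OK
    println(server.receive("GET /cats"))   // OK: the same response for every request
}
```

Returning this from requestHandler is what enables the chained `createHttpServer().requestHandler { ... }.listen(8081)` style in the real API.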



Routing requests Notice that no matter which URL we specify, we always get the same result. Of course, that’s not what we want to achieve. Recall our current server code: fun main() { val vertx = Vertx.vertx() vertx.createHttpServer().requestHandler{ ctx -> ctx.response().end("OK") }.listen(8081) println("open http://localhost:8081") }

This code is designed to produce the same response for any type of request, whether it’s a GET or POST, and irrespective of the URL. Typically, this isn’t the desired behavior. In REST architecture, it’s common practice to define distinct paths for various actions. To facilitate this, we’ll employ the Router. The Router enables the definition of specific handlers for different HTTP methods and URLs. Now, let’s add a /status endpoint that will return an HTTP status code of 200 and a message stating OK to our user: fun main() { val vertx = Vertx.vertx() val router = Router.router(vertx) router.get("/status").handler { ctx -> ctx.response().setStatusCode(200).end("OK") } vertx.createHttpServer().requestHandler(router).listen(8081) println("open http://localhost:8081/status") }

Now, instead of specifying the request handler as a block, we pass our router object to the requestHandler() method. This makes our code easier to manage.



We learned how we return a plain text response in the very first example. So, now, let’s return JSON instead. Most real-life applications use JSON for communication. Let’s replace the body of our status handler with the following code: router.get("/status").handler { ctx -> val json = json { obj( "status" to "OK" ) } ctx.response().setStatusCode(200).end(json.toString()) }

Here, we are using a DSL, which we discussed in Chapter 4, Getting Familiar with Behavioral Patterns, to create a JSON object. You can open http://localhost:8081/status in your browser and make sure that you get {"status": "OK"} as a response. Now, let’s discuss how we can structure our code better with the Vert.x framework.
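To see what such a builder does under the hood, here is a toy version in plain Kotlin. The obj function is our own helper, not the Vert.x DSL; it just quotes string values and joins the pairs into a JSON object:

```kotlin
// A toy stand-in for the json/obj DSL (our own helper, not Vert.x's API).
fun obj(vararg pairs: Pair<String, Any?>): String =
    pairs.joinToString(prefix = "{", postfix = "}") { (key, value) ->
        // Quote strings; render numbers, booleans, and null as-is.
        val encoded = if (value is String) "\"$value\"" else value.toString()
        "\"$key\":$encoded"
    }

fun main() {
    println(obj("status" to "OK"))                  // {"status":"OK"}
    println(obj("name" to "Meatloaf", "age" to 4))  // {"name":"Meatloaf","age":4}
}
```

The real Vert.x DSL builds a JsonObject tree rather than a string, which also handles escaping and nesting, but the builder idea is the same.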

Verticles As our project progresses, the server.kt file, containing our current code, is growing increasingly large. To manage this, we need to separate different parts of the code. In Vert.x, this can be accomplished by organizing the code into distinct classes known as “verticles.” You can think of a verticle as a lightweight actor. We discussed actors in Chapter 5, Introducing Functional Programming. Let’s see how we can create a new verticle that will encapsulate our server: class ServerVerticle : CoroutineVerticle() { override suspend fun start() { val router = router() vertx.createHttpServer() .requestHandler(router) .listen(8081) println("open http://localhost:8081/status") }



private fun router(): Router { // Our router code comes here now val router = Router.router(vertx) ... return router } }

Every verticle has a start() method that handles its initialization. As you can see, we moved all the code from our main() function to the start() method. If we run the code now, though, nothing will happen. That’s because the verticle hasn’t been started yet. There are different ways to start a verticle, but the simplest way is to pass the instance of the class to the deployVerticle() method. In our case, this is the ServerVerticle class: fun main() { val vertx = Vertx.vertx() vertx.deployVerticle(ServerVerticle()) }

Here is another, more flexible way to specify the class name as a string: fun main() { val vertx = Vertx.vertx() vertx.deployVerticle("ServerVerticle") }

If our verticle class is not in the default package, we’ll need to specify the fully qualified path for Vert.x to be able to initialize it. Having moved our code into a separate class, we’ve achieved a more organized structure. Next, we’ll explore how to apply similar refactoring techniques to more efficiently organize our routes.

Handling requests As we discussed earlier in this chapter, all requests in Vert.x are handled by the Router class. We covered the concept of routing in the previous chapter, so now, let’s just discuss the differences between the Ktor and Vert.x approaches to routing requests.



For now, we’ll set up two endpoints in our router: one for deleting a cat by its ID, and another for updating its details: private fun router(): Router { // Our router code comes here now val router = Router.router(vertx) router.delete("/cats/:id").handler { ctx -> // Code for deleting a cat } router.put("/cats/:id").handler { ctx -> // Code for updating a cat } return router }

Both endpoints receive a URL parameter. In Vert.x, we use a colon notation for this. To be able to parse JSON requests and responses, Vert.x has a BodyHandler class. Now, let’s declare it as well. This should come just after the instantiation of our router: val router = Router.router(vertx) router.route().handler(BodyHandler.create())

This will tell Vert.x to parse the request body into JSON for any request. Notice that the /cats prefix is repeated multiple times in our code now. To avoid that and make our code more modular, we can use a subrouter, which we’ll discuss in the next section.
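The colon notation can be understood as simple pattern matching on path segments. Here is a sketch of the idea; matchPath is our own helper and not how Vert.x actually implements routing:

```kotlin
// Toy matcher for colon-style route patterns (our helper, not Vert.x's router).
// Returns the extracted parameters, or null if the path does not match.
fun matchPath(pattern: String, path: String): Map<String, String>? {
    val patternParts = pattern.trim('/').split('/')
    val pathParts = path.trim('/').split('/')
    if (patternParts.size != pathParts.size) return null
    val params = mutableMapOf<String, String>()
    for ((expected, actual) in patternParts.zip(pathParts)) {
        when {
            expected.startsWith(":") -> params[expected.drop(1)] = actual  // capture
            expected != actual -> return null                              // literal mismatch
        }
    }
    return params
}

fun main() {
    println(matchPath("/cats/:id", "/cats/42"))   // {id=42}
    println(matchPath("/cats/:id", "/dogs/42"))   // null
}
```

A matched request's parameters are then made available to the handler, which is what `ctx.pathParam("id")` exposes in real Vert.x code.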

Subrouting the requests Subrouting allows us to split routes into multiple classes to keep our code more organized. We’ll leave the /status endpoint as is, and extract all the other endpoints into a separate function: private fun catsRouter(): Router = Router.router(vertx).apply { delete("/:id").handler { ctx -> // Code for deleting a cat } put("/:id").handler { ctx -> // Code for updating the cat } ... }

Within this function, we construct a dedicated Router object that exclusively manages the routes related to cats, distinct from the status routes. Notice the efficiency and simplicity introduced by the use of the apply() function. It allows for direct access to the routing methods and enables the router to be returned immediately after configuration. Now, we need to connect the router we just created to our main router using the subRouter() function: private fun router(): Router = Router.router(vertx).apply { route().handler(BodyHandler.create()) get("/status").handler { ctx -> ... } route("/cats/*").subRouter(catsRouter()) }

Keeping our code clean and well separated is very important. Extracting routes into subrouters helps us with that. Now, let’s discuss how this code can be tested.

Testing Vert.x applications To test our Vert.x application, we’ll use the JUnit 5 framework, which we discussed in the previous chapter. You’ll need the following two dependencies in your build.gradle.kts file: dependencies { ... testImplementation("io.vertx:vertx-junit5") testImplementation("org.junit.jupiter:junit-jupiter:5.9.1") testImplementation("org.jetbrains.kotlinx:kotlinx-coroutinestest:1.8.0") }

Our first test will be located in the /src/test/kotlin/ServerTest.kt file.



The basic structure of all the integration tests looks something like this: @TestInstance(TestInstance.Lifecycle.PER_CLASS) class ServerTest { private val vertx: Vertx = Vertx.vertx() @BeforeAll fun setup() = runTest { vertx.deployVerticle(ServerVerticle()).coAwait() } @AfterAll fun tearDown() { // You want to stop your server once vertx.close() } @Test fun `status should return 200`() { } }

This structure is different from what we’ve seen in Ktor. Here, we start the server ourselves, in the setup() method. Since Vert.x is Reactive, the deployVerticle() method will return a Future object immediately, releasing the thread, but that doesn’t mean that the server verticle has started yet. To avoid this race, we use the coAwait() method, which suspends the test until the server is ready to receive requests. Now, we want to issue an actual HTTP call to our /status endpoint, for example, and check the response code. For that, we’ll use the Vert.x web client. Let’s add it to our build.gradle.kts dependencies section: dependencies { ... testImplementation("io.vertx:vertx-web-client") }



Since we only plan to use WebClient in tests, we specify testImplementation instead of implementation. But WebClient is so useful that you’ll probably end up using it in your production code anyway. After adding this new dependency, we need to instantiate our web client in the setup method: lateinit var client: WebClient @BeforeAll fun setup() = runTest { vertx.deployVerticle(ServerVerticle()).coAwait() client = WebClient.create( vertx, WebClientOptions() .setDefaultPort(8081) .setDefaultHost("localhost") ) }

The setup() method will be called once before all the tests start. In this method, we are deploying our server verticle and creating a web client with some defaults for all our tests to share. Now, let’s write a test to check that our server is up and running: @Test fun `status should return 200`() { runBlocking { val response = client.get("/status").send().await() assertEquals(200, response.statusCode()) } }

Now, let’s understand what happens in this test:

•	client is an instance of WebClient that is shared by all our tests. We invoke the /status endpoint using the get verb. This is a Builder design pattern, so to issue our request, we need to use the send() method. Otherwise, nothing will happen.
•	Vert.x is a Reactive framework, so instead of blocking our thread until a response is received, the send() method returns a Future. Then, we use coAwait(), which adapts a Future to a Kotlin coroutine to be able to wait for the results concurrently.
•	Once the response is received, we check it in the same way that we did in other tests—by using the assertEquals function, which comes from JUnit.

Now that we know how to write tests in Vert.x, let’s discuss how we can work with databases in a Reactive manner.

Working with databases

To be able to progress further with our tests, we need the ability to create entities in the database. For that, we'll need to connect to the database. First, let's add the following three lines to our build.gradle.kts dependencies section:

dependencies {
    ...
    implementation("org.postgresql:postgresql:42.5.1")
    implementation("io.vertx:vertx-pg-client")
    implementation("com.ongres.scram:client:2.1")
}

The first line introduces the PostgreSQL driver into our project, a necessary component for interfacing with PostgreSQL databases. The second line adds the Vert.x Reactive PostgreSQL client, which lets Vert.x talk to a PostgreSQL database without blocking threads. The third line integrates the SCRAM authentication mechanism, a security feature often used in modern PostgreSQL setups, ensuring secure database access.

Now, we want to hold the database configuration somewhere. For local development, it may be fine to have those configurations hardcoded. When we connect to the database, we need to specify at least the following parameters:

• Username
• Password
• Host
• Database name

We'll store the preceding parameters in a Singleton object:

object Db {
    val username = System.getenv("DATABASE_USERNAME") ?: "cats_admin"
    val password = System.getenv("DATABASE_PASSWORD") ?: "abcd1234"
    val database = System.getenv("DATABASE_NAME") ?: "cats_db"
    val host = System.getenv("DATABASE_HOST") ?: "localhost"
}

Our Singleton object has four members. For each, we check whether an environment variable was set, and if there's no such environment variable, we provide a default value using the Elvis operator. Now, let's add a function that will return a connection pool. A connection pool is an implementation of the Object Pool design pattern. This pattern aims to optimize resource usage and performance. It involves pre-allocating a set of initialized objects (in this case, database connections) and keeping them ready for use, rather than repeatedly creating and destroying them. This approach reduces the overhead of establishing a new connection for every database operation, thus enhancing the efficiency and responsiveness of applications:

object Db {
    ...
    fun connect(vertx: Vertx): SqlClient {
        val connectOptions = PgConnectOptions()
            .setPort(5432)
            .setHost(host)
            .setDatabase(database)
            .setUser(username)
            .setPassword(password)
        val poolOptions = PoolOptions()
            .setMaxSize(20)
        return Pool.pool(
            vertx,
            connectOptions,
            poolOptions
        )
    }
}

Our connect() method creates two configuration objects: PgConnectOptions sets the configuration for the database we want to connect to, while PoolOptions specifies the configuration of the connection pool.


Now, all we need to do is instantiate the database client in our test:

...
lateinit var db: SqlClient

@BeforeAll
fun setup() {
    runTest {
        ...
        db = Db.connect(vertx)
    }
}

The runTest() method is used here to establish a coroutine context. Unlike Ktor, where testApplication automatically provides a suspending block, Vert.x requires us to manually set up a coroutine context. runTest() is akin to runBlocking() but offers additional conveniences, such as the ability to bypass delay() calls in the code and to set a timeout for the test execution. Having done that, let's create a new Nested class in our test file for cases where we expect to have a cat in our database:

@Nested
inner class `With Cat` {
    @BeforeEach
    fun createCats() {
        ...
    }

    @AfterEach
    fun deleteAll() {
        ...
    }
}

Contrary to the Exposed framework covered in the previous chapter, Vert.x’s database client doesn’t offer specialized methods for tasks like insertion or deletion. Rather, it provides a more foundational API that permits the execution of various queries on the database, albeit without the benefit of type safety.


First, let's write a query that will clean our database:

@AfterEach
fun deleteAll() = runTest {
    db.preparedQuery("DELETE FROM cats")
        .execute().coAwait()
}

The basic structure for working with the database client in Vert.x is to pass a query to the preparedQuery() method, then execute it using execute(). I hope that by now you recognize the Builder design pattern when you meet it. To make sure a query is fully executed before moving to the next test, we use the coAwait() function. As part of Vert.x's Kotlin coroutine support, coAwait() is a suspending extension function that pauses the current coroutine, without blocking the thread it runs on, and resumes it once the specified Vert.x future completes. Next, we will compose an additional query to add a cat to the database before each test is executed:

private lateinit var catRow: Row

@BeforeEach
fun createCats() = runTest {
    val result = db.preparedQuery(
        """INSERT INTO cats (name, age)
           VALUES ($1, $2)
           RETURNING ID""".trimIndent()
    ).execute(Tuple.of("Binky", 7)).coAwait()
    catRow = result.first()
}

Here, we are using the preparedQuery() method once more, but this time, our SQL query string contains placeholders. Each placeholder starts with a dollar sign, and their indexes start at 1. Then, we pass the values for those placeholders to the execute() method. Tuple.of() is an example of the Factory Method design pattern, which you should be able to recognize well by now. We also want to remember the ID of the cat that we create since we'll use that ID to delete or update the cat. For this reason, we store the created row in a lateinit variable.


We now have everything prepared to write our test:

@Test
fun `delete deletes a cat by ID`() = runTest {
    val catId = catRow.getInteger(0)
    client.delete("/cats/${catId}").send().coAwait()
    val result = db.preparedQuery("SELECT * FROM cats WHERE id = $1")
        .execute(Tuple.of(catId)).coAwait()
    assertEquals(0, result.size())
}

First, we get the ID of the cat we want to delete from the database row using the getInteger() method. Unlike query parameters, which are indexed from 1, the columns of a database row are indexed from 0. So, by getting an integer at index 0, we get the ID of our cat. Then, we invoke the web client's delete() method and wait for it to complete. Afterward, we execute a SELECT statement on our database, checking that the row was indeed deleted. If you run this test now, it will fail because we haven't implemented the delete endpoint yet. We'll do that in the next section. But before we do that, we need to understand one more crucial concept: the Event Loop.

Understanding Event Loop

The goal of the Event Loop is to continuously check for new events in a queue and, each time a new event comes in, to quickly dispatch it to a function that knows how to handle it. This way, a single thread, or a very limited number of threads, can handle a huge number of events. In the case of web frameworks such as Vert.x, events may be requests to our server. To understand the concept of the Event Loop better, let's go back to our server code and attempt to implement an endpoint for deleting a cat:

val db = Db.connect(vertx)

router.delete("/:id").handler { ctx ->
    val id = ctx.request().getParam("id").toInt()
    db.preparedQuery("DELETE FROM cats WHERE ID = $1")
        .execute(Tuple.of(id))
        .await()
    ctx.end()
}

This code is very similar to what we've written in our tests in the previous section. We read the URL parameter from the request using the getParam() function, then we pass this ID to the prepared query. This time, though, we can't use the runBlocking adapter function, since it would block the Event Loop. Vert.x uses a limited number of threads, up to twice the number of your CPU cores, to run all its code efficiently. However, this means that we cannot execute any blocking operations on those threads, since doing so would negatively impact the performance of our application. To solve this issue, we can use a coroutine builder we're already familiar with: launch(). Let's see how this works:

router.delete("/:id").handler { ctx ->
    launch {
        val id = ctx.request().getParam("id").toInt()
        db.preparedQuery("DELETE FROM cats WHERE ID = $1")
            .execute(Tuple.of(id)).await()
        ctx.end()
    }
}

Since our verticle extends CoroutineVerticle, we have access to all the regular coroutine builders, which will run on the Event Loop. Now, all we need to do is mark our routing functions with the suspend keyword:

private suspend fun router(): Router {
    ...
}

private suspend fun catsRouter(): Router {
    ...
}

Now, let's add another test for updating a cat:

@Test
fun `put updates a cat by ID`() = runTest {
    val catId = catRow.getInteger(0)
    val requestBody = json {
        obj("name" to "Meatloaf", "age" to 4)
    }
    client.put("/cats/${catId}")
        .sendBuffer(Buffer.buffer(requestBody.toString()))
        .coAwait()
    val result = db.preparedQuery("SELECT * FROM cats WHERE id = $1")
        .execute(Tuple.of(catId)).coAwait()
    assertEquals("Meatloaf", result.first().getString("name"))
    assertEquals(4, result.first().getInteger("age"))
}

This test is very similar to the deletion test, with the only major difference being that we use sendBuffer() instead of the send() method, so we can send a JSON body to our put endpoint. We create the JSON similarly to what we saw when we implemented the /status endpoint earlier in this chapter. Now, let's implement the put endpoint for the test to pass:

private fun catsRouter(): Router = Router.router(vertx).apply {
    ...
    put("/:id").handler { ctx ->
        val id = ctx.request().getParam("id").toInt()
        val body = ctx.body().asJsonObject()
        db.preparedQuery("UPDATE cats SET name = $1, age = $2 WHERE ID = $3")
            .execute(
                Tuple.of(
                    body.getString("name"),
                    body.getInteger("age"),
                    id
                )
            ).await()
        ctx.end()
    }
}


Here, the main difference from the previous endpoint we've implemented is that this time, we need to parse our request body. We can do that by calling asJsonObject() on the request body. Then, we can use the getString() and getInteger() methods, which are available on JsonObject, to get the new values for name and age. With this, you should have all the required knowledge to implement other endpoints as needed. Now, let's learn how to structure our code in a better way using the concept of Event Bus.

Communicating with Event Bus

Event Bus is an implementation of the Observable design pattern, which we discussed in Chapter 4, Getting Familiar with Behavioral Patterns. We've already mentioned that Vert.x is based on the concept of verticles, which are isolated actors. We've already seen other types of actors in Chapter 6, Threads and Coroutines: Kotlin's coroutines library provides the actor() and produce() coroutine builders, which create a coroutine bound to a channel. Similarly, all the verticles in the Vert.x framework are bound by Event Bus and can pass messages to one another using it. Now, let's extract the code from our ServerVerticle class into a new class, which we'll call CatsVerticle. Any verticle can send a message over Event Bus by choosing between the following methods:

• request() will send a message to only one subscriber and wait for a response.
• send() will send a message to only one subscriber, without waiting for a response.
• publish() will send a message to all subscribers, without waiting for a response.
No matter which method is used to send the message, you subscribe to it using the consumer() method on Event Bus. Now, let's subscribe to an event in our CatsVerticle class:

class CatsVerticle : CoroutineVerticle() {
    override suspend fun start() {
        val db = Db.connect(vertx)
        vertx.eventBus().consumer<Int>("cats:delete") { req ->
            launch {
                val id = req.body()
                db.preparedQuery("DELETE FROM cats WHERE ID = $1")
                    .execute(Tuple.of(id)).await()
                req.reply(null)
            }
        }
    }
}

The generic type of the consumer() method specifies the type of message we'll receive; in this case, it's Int. The string that we provide to the method (in our case, cats:delete) is the address we subscribe to. It can be any string, but it is good to have some convention, such as what type of object we operate on and what we want to do with it. Once the delete action has been executed, we respond to our publisher with the reply() method. Since we don't have any information to send back, we simply send null. Now, let's replace our previous delete route with the following code:

router.delete("/:id").handler { ctx ->
    val id = ctx.request().getParam("id").toInt()
    vertx.eventBus().request("cats:delete", id) {
        ctx.end()
    }
}

Here, we send the ID of the cat we received from the request to one of our listeners using the request() method, and we specify that the type of our message is Int. We also use the same address we specified in the consumer code. Since we have split our code into a new verticle, we need to remember to start it as well. Add the following line to both the main() function and the setup() method in your test:

vertx.deployVerticle(CatsVerticle())

Next, let’s learn how to send complex objects over Event Bus.

Sending JSON over Event Bus

As our final exercise, let's learn how to update a cat. For that, we'll need to send more than just an ID over Event Bus.


Let's rewrite our put handler, as follows:

private fun catsRouter(): Router = Router.router(vertx).apply {
    ...
    put("/:id").handler { ctx ->
        val id = ctx.request().getParam("id").toInt()
        val body: JsonObject = ctx.body().asJsonObject().mergeIn(json {
            obj("id" to id)
        })
        vertx.eventBus().request("cats:update", body) { res ->
            ctx.end(res.result().body().toString())
        }
    }
}

Here, you can see that we can send JSON objects over Event Bus easily. We merge the ID we receive as a URL parameter with the rest of the request body and send this JSON over Event Bus. When a response is received, we output it back to the user. Now, let's see how we consume the event we just sent:

vertx.eventBus().consumer<JsonObject>("cats:update") { req ->
    launch {
        val body = req.body()
        db.preparedQuery("UPDATE cats SET name = $1, age = $2 WHERE ID = $3")
            .execute(
                Tuple.of(
                    body.getString("name"),
                    body.getInteger("age"),
                    body.getInteger("id")
                )
            ).await()
        req.reply(body.getInteger("id"))
    }
}

We moved our logic from Router to our CatsVerticle class, but since we use JSON to communicate, the code stayed almost the same. In our verticle, we listen on the cats:update address; once a message arrives, we extract the name, age, and ID from the JSON object, run the update, and reply with the ID to confirm that the operation was successful.


This concludes our chapter. There is still much to learn about the Vert.x framework if you're curious. For example, notice that we didn't implement the GET and POST endpoints in our service. With the knowledge you've gained from this chapter, though, you should be able to do so with some confidence, and it makes for good practice.

Summary

This chapter concludes our journey into the design patterns in Kotlin. Vert.x uses actors, called verticles, to organize the logic of the application. Actors communicate between themselves using Event Bus, which is an implementation of the Observable design pattern. We also discussed the Event Loop pattern, how it allows Vert.x to process lots of events concurrently, and why it's important not to block its execution.

Now, you should be able to write microservices in Kotlin using two different frameworks, and you can choose what approach works best for you. Vert.x provides a lower-level API than Ktor, which means that we may need to think more about how we structure our code, but the application may be more performant as a result. As often happens, it's a tradeoff between performance and developer experience. Do you want to work with a database in a type-safe manner? Looking for the most idiomatic Kotlin framework? Then pick Ktor. But if you need to squeeze every bit of performance out of your application, certainly give Vert.x a try.

In this concluding chapter, let's take a moment to recap the key topics we've explored throughout the book. By now, you should have a solid understanding of implementing classic design patterns in Kotlin. We've debated the pros and cons of functional programming, including how to integrate its principles effectively in Kotlin. You've also been introduced to the choice between a reactive approach and concurrent design patterns, all of which are seamlessly supported in Kotlin through coroutines. Additionally, we've examined a range of concurrent data structures and design patterns brought to life by these coroutines. Towards the end, our focus shifted to best practices and common anti-patterns in Kotlin programming.
We also took a closer look at some practical libraries that are invaluable to Kotlin developers, such as Arrow, Ktor, Exposed, and Vert.x, equipping you with a toolkit for more effective and efficient Kotlin development.


Since this is the end of this book, all that’s left is for me to wish you the best of luck in learning about Kotlin and its ecosystem. You can always get some help from me and other Kotlin enthusiasts by going to https://kotlinlang.org/community/. Happy learning!

Questions

1. What's a verticle in Vert.x?
2. What's the goal of Event Bus?
3. Why shouldn't we block the Event Loop?

Learn more on Discord

Join our community's Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

Assessments

Chapter 1, Getting Started with Kotlin

Question 1
What's the difference between var and val in Kotlin?

Answer The val keyword is used to declare an immutable reference, which means the value it holds cannot be changed once it’s assigned. On the other hand, the var keyword declares a mutable reference, allowing the value it holds to be reassigned multiple times.
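A minimal illustration of the difference (the variable names are invented for this example):

```kotlin
fun main() {
    val immutable = "hello"   // reference cannot be reassigned
    // immutable = "other"    // would be a compile error: val cannot be reassigned

    var mutable = 1           // reference can be reassigned
    mutable = 2
    println(mutable)          // prints 2
}
```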

Question 2
How do you extend a class in Kotlin?

Answer To extend a class, you specify a colon followed by the superclass name and its constructor. If the superclass is a regular class, it must be declared as open, as, by default, Kotlin classes are final and cannot be extended unless explicitly allowed using the open keyword.
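For example (the Animal/Cat hierarchy here is invented purely to illustrate the syntax):

```kotlin
// The superclass must be marked 'open'; Kotlin classes are final by default.
open class Animal(val name: String) {
    open fun sound() = "..."
}

// A colon introduces the superclass and invokes its constructor.
class Cat(name: String) : Animal(name) {
    override fun sound() = "Meow"
}

fun main() {
    println(Cat("Binky").sound())  // prints Meow
}
```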

Question 3
How do you add functionality to a final class?

Answer To add functionality to a final class in Kotlin, which cannot be inherited due to its finality, one can utilize extension functions. These functions enable you to “attach” new methods to any class, including final classes. However, it’s important to note that extension functions only have access to the public and internal members of the class they’re extending. They cannot access private or protected members.
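For instance, String is final, yet we can still attach a new method to it (the shout() function is a made-up example):

```kotlin
// An extension function "attaches" a new method to String,
// a final class we cannot inherit from.
fun String.shout() = uppercase() + "!"

fun main() {
    println("hello".shout())  // prints HELLO!
}
```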


Chapter 2, Working with Creational Patterns

Question 1
Name two uses for the object keyword we learned about in this chapter.

Answer The object keyword serves two primary purposes. First, it is used to declare a singleton when used at the global scope, ensuring only one instance of the class exists. Second, in conjunction with the companion keyword inside a class, it is utilized to create a companion object, which is similar to a collection of static methods and properties in Java. This companion object allows for the creation of static-like methods and properties accessible without creating an instance of the class, as we covered in the Factory design pattern.
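Both uses can be sketched in a few lines (Counter and User are invented names for this illustration):

```kotlin
// A top-level 'object' declares a singleton: exactly one instance exists.
object Counter {
    var count = 0
        private set
    fun increment() { count++ }
}

class User private constructor(val name: String) {
    // A companion object hosts factory-style members,
    // callable without creating an instance of the class.
    companion object {
        fun of(name: String) = User(name.trim())
    }
}

fun main() {
    Counter.increment()
    Counter.increment()
    println(Counter.count)            // prints 2
    println(User.of(" Alice ").name)  // prints Alice
}
```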

Question 2
What is the apply() function used for?

Answer The apply() function in Kotlin is a higher-order extension function primarily used for configuring objects. When called on an object, it allows you to make multiple modifications to that object within a block. Within this scope, you can call methods and access properties of the object directly without using its name. After executing the block, apply() returns the object itself, not the result of the last expression in the block. This makes it particularly useful for initializing or configuring objects and chaining method calls on the configured object. It is important to know that inside the apply scope, no private members can be accessed.
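A small configuration sketch (the Connection class and its properties are hypothetical):

```kotlin
class Connection {
    var host: String = "localhost"
    var port: Int = 80
    var timeoutMs: Long = 1_000
}

fun main() {
    // apply() configures the receiver inside the block and then
    // returns the receiver itself, so the result can be assigned directly.
    val conn = Connection().apply {
        host = "example.org"
        port = 8081
        timeoutMs = 5_000
    }
    println("${conn.host}:${conn.port}")  // prints example.org:8081
}
```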

Question 3
Provide one example of a static factory method.

Answer An example of a static factory method discussed is the valueOf() method found in the Long class in the Java Standard library, which Kotlin also utilizes. This method is a static factory because it allows the creation of Long objects from various inputs, such as a string or a long primitive. In Kotlin, while you can directly use this Java method due to its interoperability with Java, it’s more common to use Kotlin’s own idiomatic conversions like toLong() or literal suffixes for creating Long objects.
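Side by side, the Java static factory and the idiomatic Kotlin alternatives look like this:

```kotlin
fun main() {
    // Java's static factory method, usable from Kotlin thanks to interop:
    val a: Long = java.lang.Long.valueOf("42")
    // The idiomatic Kotlin conversions:
    val b = "42".toLong()
    val c = 42L
    println(a == b && b == c)  // prints true
}
```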


Chapter 3, Understanding Structural Patterns

Question 1
What differences are there between the implementations of the Decorator and Proxy design patterns?

Answer While the Decorator and Proxy design patterns in Kotlin might appear similar in implementation, as both involve creating a wrapper around an object, their core intents and use cases differ significantly. The Decorator pattern is used to dynamically add new responsibilities to objects without altering their structure. It involves wrapping an object and providing additional functionality while maintaining the same interface. In contrast, the Proxy pattern controls access to an object, often to handle costly operations or add a layer of security. The proxy provides an interface identical to the underlying object but may change the behavior by, for instance, delaying the creation of the object, controlling access, or logging requests. Thus, while their structures are similar, their purposes and effects on the object’s behavior are distinct.

Question 2
What is the main goal of the Flyweight design pattern?

Answer The primary goal of the Flyweight design pattern is to optimize memory usage and improve performance in resource-intensive applications by minimizing the amount of memory used by objects. It achieves this by sharing as much data as possible with similar objects; in other words, it reuses a common state among multiple objects instead of storing identical data in each object. This is particularly useful in scenarios where a large number of similar objects are required to be kept in memory, and each object maintains a significant amount of redundant information. In Kotlin, this pattern involves separating the intrinsic state (shared and immutable) from the extrinsic state (unique to each object) of an object and creating a factory that manages the sharing and reuse of Flyweight objects based on their intrinsic state.
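A minimal sketch of the sharing mechanism (the Glyph/GlyphFactory names are invented; imagine Glyph holding heavy intrinsic data, such as bitmap glyph shapes):

```kotlin
// Intrinsic state (the glyph itself) is shared via a factory cache;
// extrinsic state (such as position on screen) would stay with each use site.
data class Glyph(val char: Char)

object GlyphFactory {
    private val cache = mutableMapOf<Char, Glyph>()
    // getOrPut() creates the flyweight only on first request.
    fun get(char: Char): Glyph = cache.getOrPut(char) { Glyph(char) }
}

fun main() {
    val a1 = GlyphFactory.get('a')
    val a2 = GlyphFactory.get('a')
    println(a1 === a2)  // prints true: the same instance is reused
}
```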


Question 3
What is the difference between the Facade and Adapter design patterns?

Answer The Facade design pattern is primarily used to provide a simplified interface to a complex system, library, or framework. By creating a facade, you hide the system’s complexity and provide an easier way to access its functionality. This pattern doesn’t change the underlying system’s operations but presents them in a more user-friendly way. On the other hand, the Adapter design pattern is used to enable two incompatible interfaces to work together. It acts as a bridge between two otherwise incompatible classes, allowing them to communicate with each other. The adapter wraps around one of the interfaces and transforms its calls to a format and interface that the other class understands. Thus, while the Facade simplifies interaction with a complex system without changing its underlying functionality, the Adapter allows different systems or interfaces to work together without altering their individual code.

Chapter 4, Getting Familiar with Behavioral Patterns

Question 1
What's the difference between the Mediator and Observer design patterns?

Answer The Mediator and Observer design patterns serve distinct purposes in managing object interactions and communication in software design. The Mediator pattern aims to reduce direct communication between classes, making them less coupled by introducing a central authority (mediator) through which all communication flows. In contrast, the Observer pattern is used for creating a subscription mechanism, allowing multiple objects (observers) to listen and react to events or changes in another object (the subject). This pattern facilitates a one-to-many dependency between objects, allowing an object to notify multiple other objects about its state changes without them being tightly coupled.


The key difference is that the Mediator pattern centralizes control and orchestration of interactions between different classes, often reducing direct communication between them, while the Observer allows multiple objects to observe and react to changes in another object independently, promoting loose coupling.

Question 2
What is a domain-specific language (DSL)?

Answer A DSL is a type of programming language or specification language dedicated to a particular problem domain, a particular problem representation technique, and/or a particular solution technique. Unlike general-purpose programming languages like Kotlin, which are designed for writing software in a wide variety of application domains, DSLs are specialized to a specific area or aspect of a software application or a specific set of tasks. Kotlin, as a language, is particularly conducive to the creation of DSLs because of its concise syntax and powerful features like extension functions, higher-order functions, and type inference. This encourages developers to design DSLs to express domain-specific logic more naturally and succinctly, enhancing readability and maintainability.
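A tiny Kotlin DSL can be built from a lambda with receiver (the menu/item vocabulary here is a made-up example domain):

```kotlin
// The block is an extension function on MenuBuilder,
// so 'item' is callable directly inside it without qualification.
class MenuBuilder {
    val items = mutableListOf<String>()
    fun item(name: String) { items += name }
}

fun menu(block: MenuBuilder.() -> Unit): List<String> =
    MenuBuilder().apply(block).items

fun main() {
    val breakfast = menu {
        item("toast")
        item("coffee")
    }
    println(breakfast)  // prints [toast, coffee]
}
```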

Question 3
What are the benefits of using a sealed class or interface?

Answer The primary benefit of using a sealed class or interface in Kotlin is ensuring type safety and exhaustiveness in when expressions. Since all subclasses or implementations of a sealed class or interface are known at compile time and must be declared in the same file as the sealed class, the Kotlin compiler can verify that all possible cases are covered in a when statement. This reduces the risk of runtime errors and eliminates the need for an else clause in when expressions dealing with sealed types. Additionally, sealed classes and interfaces enable a more structured and maintainable hierarchy of types, making it easier to represent a fixed set of closely related types, such as states in a state machine or variants of a domain-specific entity. This feature is especially beneficial for modeling domain logic in a more type-safe and expressive manner.
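The exhaustiveness benefit is easiest to see in code (the State hierarchy is invented for this example):

```kotlin
// All subtypes are known at compile time, so 'when' is exhaustive
// without an 'else' branch; adding a new subtype becomes a compile error
// in every non-exhaustive 'when'.
sealed interface State
object Idle : State
data class Running(val progress: Int) : State
data class Failed(val reason: String) : State

fun describe(state: State): String = when (state) {
    Idle -> "idle"
    is Running -> "running at ${state.progress}%"
    is Failed -> "failed: ${state.reason}"
}

fun main() {
    println(describe(Running(50)))  // prints running at 50%
}
```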


Chapter 5, Introducing Functional Programming

Question 1
What are higher-order functions?

Answer Higher-order functions are a fundamental concept in Kotlin and many other programming languages, referring to functions that can take other functions as parameters, return a function, or both. These functions are powerful tools for creating more abstract, concise, and reusable code. They enable operations like passing behavior as an argument to a function, returning a function from a function, or storing a function in a data structure. Some common examples of higher-order functions in Kotlin include map, filter, and forEach, which are part of Kotlin’s collections API. Higher-order functions are a cornerstone of functional programming, allowing for a more expressive and declarative programming style.
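For instance (twice() is a made-up helper; map and filter are from the standard library):

```kotlin
// A higher-order function: it takes another function as a parameter.
fun <T> twice(x: T, f: (T) -> T): T = f(f(x))

fun main() {
    println(twice(3) { it + 1 })  // prints 5
    // Stdlib higher-order functions on collections:
    println(listOf(1, 2, 3).map { it * 2 }.filter { it > 2 })  // prints [4, 6]
}
```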

Question 2
What is the tailrec keyword in Kotlin?

Answer The tailrec keyword is used to mark a function as tail recursive, which is a special form of recursion where the recursive call is the last operation in the function. When a function is marked with tailrec, the Kotlin compiler optimizes the recursion, converting it into a loop during compilation. This optimization helps to prevent stack overflow errors that can occur in traditional recursive function calls, especially with large numbers of iterations. It’s important to note that not all recursive functions can be marked as tailrec; the recursive call must be in the tail position, meaning it must be the last operation the function performs. This feature leverages Kotlin’s support for functional programming practices and helps in writing more efficient recursive algorithms.
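A classic tail-recursive example, with an accumulator so the recursive call is the last operation:

```kotlin
// The compiler rewrites this recursion into a loop, so large inputs
// cannot overflow the stack.
tailrec fun factorial(n: Long, acc: Long = 1): Long =
    if (n <= 1) acc else factorial(n - 1, acc * n)

fun main() {
    println(factorial(10))  // prints 3628800
}
```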

Question 3
What are pure functions?

Answer Pure functions are characterized by two main properties. First, they always return the same output for the same input, meaning they do not depend on any external state or data. Second, they have no side effects, meaning they do not alter any external state or data, including performing IO operations, modifying global variables, or changing object properties.


The lack of side effects and dependence on external state makes pure functions predictable, easier to test, and less prone to bugs. In Kotlin, as in other programming languages, writing pure functions is encouraged to improve code clarity, maintainability, and testability.
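Contrasting a pure function with an impure one makes the difference concrete (both functions are invented for this example):

```kotlin
var total = 0

// Impure: depends on and mutates external state,
// so the same input can yield different results.
fun addToTotal(x: Int): Int {
    total += x
    return total
}

// Pure: same input always yields the same output, no side effects.
fun add(a: Int, b: Int): Int = a + b

fun main() {
    println(add(2, 3))                       // always prints 5
    println(addToTotal(2) == addToTotal(2))  // prints false: hidden state changed
}
```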

Chapter 6, Threads and Coroutines

Question 1
What are the different ways to start a coroutine in Kotlin?

Answer Coroutines are typically started using two primary builder functions: launch() and async(). The launch() function is used for fire-and-forget coroutines, where you do not need the result of the operation; it returns a job that can be used to control the execution of the coroutine (like cancelation). On the other hand, async() is used when you need to compute some result; it returns a Deferred object, which is a future-like construct that can be awaited for the result. Additionally, Kotlin provides other coroutine builders like runBlocking, which is mainly used for bridging non-coroutine code with coroutines and should be used sparingly, primarily for main functions and tests. Each of these coroutine builders serves different use cases and offers different control mechanisms over the coroutines they start, enabling flexible and powerful concurrent programming in Kotlin.

Question 2
With structured concurrency, if one of the coroutines fails, all the siblings will be canceled as well. How can we prevent that behavior?

Answer In Kotlin’s structured concurrency model, to avoid the cancellation of all sibling coroutines when one fails, you can use supervisorScope instead of coroutineScope. The supervisorScope creates a scope in which coroutines are independent in terms of failure and cancelation. This means that if one coroutine fails within a supervisorScope, it does not automatically cancel the other sibling coroutines. This is different from the regular coroutineScope, where if one child coroutine fails, all other coroutines within the same scope are also canceled. The supervisorScope is particularly useful in scenarios where you want to handle errors of individual coroutines independently without affecting others running in parallel.


Question 3
What is the purpose of the yield() function?

Answer In Kotlin coroutines, the yield() function is used to give up the current time slice of the executing coroutine, allowing other coroutines to run. It is a way to indicate that the coroutine is willing to be suspended to allow other coroutines to use the current thread. yield() does not return a value but suspends the coroutine and schedules it for resuming at a later time. This function is particularly useful in situations where you have long-running or computation-heavy coroutines and you want to ensure fair scheduling or avoid blocking the thread for too long, thereby maintaining responsiveness. It’s important to note that yield() is a cooperative function; it works only if the coroutine checks for suspension points, and its effect is also influenced by the coroutine’s dispatcher.

Chapter 7, Controlling the Data Flow

Question 1 What is the difference between higher-order functions on collections and concurrent data structures?

Answer The primary difference between higher-order functions on standard collections and concurrent data structures in Kotlin lies in their approach to handling data and concurrency. For standard collections (like list, set, map, etc.), higher-order functions, such as map, filter, and forEach, operate on the entire collection in a non-concurrent manner. When these functions are used, they process each element of the collection sequentially and, in some cases, may create a new collection with the transformed data. They do not inherently support concurrency or parallel processing. On the other hand, higher-order functions on concurrent data structures or parallel processing frameworks are designed to handle data reactively or concurrently. These functions can process elements in parallel or in a non-blocking, asynchronous manner. For instance, in Kotlin’s coroutines, higher-order functions can operate on streams of data, handling each element as it becomes available, potentially across different threads. This approach is more suitable for handling real-time data, large datasets, or operations that benefit from parallel processing.


It’s important to note that the behavior of higher-order functions in concurrent scenarios heavily depends on the underlying data structure or framework being used and its configuration (like coroutine dispatchers in the case of Kotlin coroutines).
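The sequential, eager behavior of the standard collection operators can be seen in a stdlib-only sketch (the function name is illustrative):

```kotlin
// Standard collection operators run sequentially and eagerly: each step
// processes every element and materializes a new collection.
fun sumOfEvenSquares(numbers: List<Int>): Int =
    numbers
        .filter { it % 2 == 0 }   // new list: [2, 4, 6, 8, 10]
        .map { it * it }          // new list: [4, 16, 36, 64, 100]
        .sum()                    // 220

fun main() = println(sumOfEvenSquares((1..10).toList()))   // 220
```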

Question 2 What is the difference between cold and hot streams of data?

Answer In the context of reactive programming, such as in Kotlin coroutines, the terms “cold” and “hot” streams refer to different behaviors of data streams in response to subscribers. A cold stream is one where the data sequence is independently created for each subscriber. This means that every subscriber gets its own data stream, starting from the beginning of the sequence. Cold streams are like a DVD; every viewer starts watching from the beginning, regardless of when they start viewing. On the other hand, a hot stream broadcasts the same data sequence to all subscribers, and a new subscriber only gets data from the point of subscription forward. It’s akin to a live broadcast on TV; viewers who tune in late start watching from the current moment, not from the beginning of the broadcast. Hot streams are useful for representing events or data that are intrinsically independent of individual subscribers, such as stock prices or sensor data.
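A minimal sketch of the contrast, assuming the kotlinx-coroutines-core dependency (names are illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Cold: the producer block runs anew for every collector (the "DVD").
val cold: Flow<Int> = flow { emit(1); emit(2); emit(3) }

// Hot: emissions are shared; a late subscriber misses earlier values
// (the "live broadcast").
suspend fun hotDemo(): List<Int> = coroutineScope {
    val hot = MutableSharedFlow<Int>()               // replay = 0 by default
    val collected = mutableListOf<Int>()
    hot.emit(0)                                      // no subscribers yet: dropped
    val job = launch { hot.collect { collected += it } }
    yield()                                          // let the collector subscribe
    hot.emit(1); hot.emit(2)
    yield()
    job.cancel()
    collected
}

fun main() = runBlocking {
    println(cold.toList())   // [1, 2, 3] — and again [1, 2, 3] on a second collect
    println(hotDemo())       // [1, 2] — the pre-subscription 0 was missed
}
```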

Question 3 When should a conflated channel or flow be used?

Answer A conflated channel or flow should be used in situations where you have a fast-producing source (producer) and a slower consumer, and it is acceptable to drop intermediate values in favor of the most recent one. In such cases, when the consumer is ready to receive a new value, it will get the latest value sent by the producer, while any values sent in the meantime are discarded. This is particularly useful in scenarios where only the most current state or data is relevant, and older values become obsolete quickly. Some examples include real-time status updates, the most recent sensor readings, or the latest UI state. By using a conflated channel or flow, you can ensure that the consumer always deals with the most up-to-date information without the overhead of processing every single value emitted by the producer. This approach can help in reducing memory usage and improving overall performance in scenarios where processing every emitted value is not critical.
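The drop-intermediate-values behavior can be sketched with a conflated channel (assumes kotlinx-coroutines-core; the function name is illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*

// A conflated channel keeps only the most recent undelivered value.
suspend fun latestOnly(): List<Int> = coroutineScope {
    val channel = Channel<Int>(Channel.CONFLATED)
    // Fast producer: send() never suspends here; each new value
    // overwrites the previous one that was not yet received.
    for (i in 1..5) channel.send(i)
    channel.close()
    val seen = mutableListOf<Int>()
    for (value in channel) seen += value   // slow consumer starts after the burst
    seen
}

fun main() = runBlocking { println(latestOnly()) }   // [5] — 1..4 were dropped
```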


Chapter 8, Designing for Concurrency

Question 1 What does it mean when we say that the select expression in Kotlin is biased?

Answer A select expression is “biased” in situations where multiple channels are ready to be received at the same time. In such a case, the select expression prioritizes the channels based on their order of appearance within the select block. This means that if there’s a tie (a “draw”) between two or more channels, the channel that is listed first in the select expression will be selected and its corresponding branch executed. This bias towards the first channel in the list ensures predictable behavior but also requires careful consideration while ordering the channels in the select block.
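The bias is deterministic when both channels are already ready, as in this sketch (assumes kotlinx-coroutines-core; names are illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.*
import kotlinx.coroutines.selects.select

suspend fun firstReady(): String {
    val a = Channel<String>(1).apply { send("from A") }   // both channels
    val b = Channel<String>(1).apply { send("from B") }   // are ready
    // Tie-break: select is biased toward the first clause listed.
    return select {
        a.onReceive { it }
        b.onReceive { it }
    }
}

fun main() = runBlocking { println(firstReady()) }   // from A
```

When fairness matters, kotlinx.coroutines also offers selectUnbiased, which picks randomly among ready clauses.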

Question 2 When should you use a mutex instead of a channel?

Answer In Kotlin coroutines, a mutex (mutual exclusion) should be used when you need to protect access to a shared resource to ensure that only one coroutine can access or modify it at a time. This is particularly important in scenarios where concurrent access or modifications by multiple coroutines could lead to inconsistent or incorrect states, such as when updating shared states or performing non-thread-safe operations. On the other hand, channels are used for communication between coroutines, allowing them to safely pass data back and forth. Channels are the preferred tool when you need to transfer data from one coroutine to another, especially when these coroutines operate concurrently or asynchronously. They provide a way to send and receive data streams, ensuring proper synchronization and communication between different parts of your coroutine-based application.
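A small sketch of the mutex side (assumes kotlinx-coroutines-core; the function name is illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// A Mutex protects shared mutable state: withLock admits one coroutine
// at a time into the critical section, so no increments are lost.
suspend fun countTo(n: Int): Int {
    val mutex = Mutex()
    var counter = 0
    coroutineScope {
        repeat(n) {
            launch(Dispatchers.Default) {
                mutex.withLock { counter++ }   // critical section
            }
        }
    }
    return counter
}

fun main() = runBlocking { println(countTo(1000)) }   // 1000, no lost updates
```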

Question 3 Which of the concurrent design patterns could help you implement MapReduce or a divide-and-conquer algorithm efficiently?

Answer In concurrent programming, especially in the context of implementing MapReduce or divide-and-conquer algorithms, the Fan-Out and Fan-In design patterns are highly effective.


Fan-Out involves distributing parts of a task among multiple workers. In the context of MapReduce or divide and conquer, the Fan-Out pattern can be used to split the data into smaller, manageable chunks and distribute these chunks to different workers (or coroutines, in the case of Kotlin). Each worker processes its assigned chunk independently, performing the “map” or divide part of the algorithm. After the Fan-Out stage, the Fan-In pattern is used to aggregate the results from all the workers. In the MapReduce framework, this corresponds to the “reduce” phase, where the results of the individual map operations are combined to form the final output. The Fan-In pattern ensures that the results from all workers are collected and merged efficiently.
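The two phases can be sketched with channels (assumes kotlinx-coroutines-core; names are illustrative):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

// Fan-Out: several workers pull from one task channel (the "map" step);
// Fan-In: their results funnel into a single channel and are reduced.
suspend fun parallelSumOfSquares(numbers: List<Int>, workers: Int = 4): Int =
    coroutineScope {
        val tasks = Channel<Int>()
        launch {                                   // feed the work queue
            numbers.forEach { tasks.send(it) }
            tasks.close()
        }
        val results = Channel<Int>()
        repeat(workers) {                          // fan out
            launch { for (n in tasks) results.send(n * n) }
        }
        var sum = 0
        repeat(numbers.size) { sum += results.receive() }   // fan in ("reduce")
        sum
    }

fun main() = runBlocking { println(parallelSumOfSquares(listOf(1, 2, 3, 4))) }  // 30
```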

Chapter 9, Idioms and Anti-Patterns

Question 1 What is the alternative to Java’s try-with-resources in Kotlin?

Answer The use() function serves as the alternative to Java’s try-with-resources. This function is an extension function on the Closeable interface, which includes any class that implements the Java Closeable or AutoCloseable interface. When you call use() on a Closeable resource, it ensures that the resource is closed once the block of code within use is executed, regardless of whether the execution was normal or resulted in an exception. This automatic management of resource closure helps in preventing resource leaks and makes the code cleaner and safer. For example, when working with file streams or database connections, which need to be closed after use, the use() function provides a convenient and idiomatic way in Kotlin to handle these resources reliably.
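A stdlib-only sketch (the function name is illustrative):

```kotlin
import java.io.File

// use() closes the resource automatically, even if the block throws —
// Kotlin's counterpart to Java's try-with-resources.
fun firstLine(path: String): String =
    File(path).bufferedReader().use { reader ->   // reader.close() is guaranteed
        reader.readLine() ?: ""
    }

fun main() {
    val tmp = File.createTempFile("demo", ".txt").apply { writeText("hello\nworld") }
    println(firstLine(tmp.absolutePath))   // hello
    tmp.delete()
}
```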

Question 2 What are the different options for handling nulls in Kotlin?

Answer Kotlin provides several options for handling nullability, which is a core aspect of the language’s design to prevent the common NullPointerException:
• Safe call operator (?.): This allows you to call a method or access a property on an object that can be null. If the object is null, the call returns null instead of throwing an exception.
• The not-null assertion operator (!!): This transforms any value into a non-null type, triggering a NullPointerException if the value is actually null. It’s appropriate for scenarios where you’re confident the value won’t be null. However, its usage in production code is generally discouraged and should be quite rare.
• Smart casts: Kotlin’s smart cast feature automatically casts types in certain situations, such as after a null check, allowing safe access to methods or properties.
• Scope functions (let, run, apply, and also): These facilitate more expressive null checks and operations on nullable objects. let and run, in particular, are commonly used in conjunction with the safe call operator (?.) to execute code blocks exclusively when the object is non-null, ensuring that these operations are safely executed only on non-null objects.
• Elvis operator (?:): This works with the safe call operator. If the expression to the left of ?: is not null, the Elvis operator returns it; otherwise, it returns the expression to the right.
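Several of these options side by side, in a stdlib-only sketch (the helper name is illustrative):

```kotlin
// The main null-handling tools in one place.
fun describe(name: String?): String {
    val viaSafeCall: Int? = name?.length            // null-safe member access
    val viaElvis: String = name ?: "anonymous"      // default when null
    val viaLet: String? = name?.let { "Hi, $it" }   // block runs only if non-null
    return "len=$viaSafeCall, elvis=$viaElvis, let=$viaLet"
}

fun main() {
    println(describe("Ada"))   // len=3, elvis=Ada, let=Hi, Ada
    println(describe(null))    // len=null, elvis=anonymous, let=null
}
```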

Question 3 Which problem can be solved by reified generics?

Answer The concept of reified generics is used to solve the problem of type erasure that occurs on the JVM at runtime. Normally, due to type erasure, the specific type information of generics is not available at runtime, which limits certain operations. For example, you cannot check if an object is of a generic type T because T is erased to its upper bound (like Any?) at runtime. However, by marking the type parameter of an inline function with the reified keyword, Kotlin allows the preservation and use of the actual generic type information at the call site. This inlining process inserts the actual type directly into the generated bytecode, thereby preserving the type information. With reified generics, you can perform operations that are not possible with regular generics in Java, such as type checks or obtaining a KClass reference of the generic type. For instance, you can safely check if an object is an instance of a generic type T or access the class of T, which enables more expressive and type-safe programming. This feature is particularly useful in scenarios involving reflection, type checks, or when you need to pass class literals as parameters.
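A stdlib-only sketch (the extension name is illustrative; the standard library already ships filterIsInstance<T>() for exactly this job):

```kotlin
// reified keeps T available at runtime inside an inline function, enabling
// the `is T` check that type erasure would otherwise forbid.
inline fun <reified T> List<Any>.filterByType(): List<T> =
    filter { it is T }.map { it as T }

fun main() {
    val mixed: List<Any> = listOf(1, "two", 3, "four", 5.0)
    println(mixed.filterByType<String>())   // [two, four]
    println(mixed.filterByType<Int>())      // [1, 3]
}
```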


Chapter 10, Practical Functional Programming with Arrow

Question 1 Explain the concept of typed errors in Arrow and how the Either data type enhances error handling in Kotlin programs.

Answer The concept of typed errors in Arrow, leveraging the Either data type, enhances error handling in Kotlin programs by introducing a more expressive and safer way to represent and handle errors. Unlike Kotlin’s standard exceptions, which are untyped and are not part of the function signature, Arrow allows for defining errors as specific types. This typed approach enables more granular control over error handling, allowing developers to explicitly define what kinds of errors can occur and how they should be handled. Either is a functional programming concept implemented in Arrow. It represents a value of one of two possible types (either an A or a B). In error handling, one type (left) typically represents an error, and the other (right) represents a success. This clear distinction helps in writing more predictable and safer code. Using Either for error handling, as opposed to exceptions, encourages developers to think about and handle errors explicitly. This leads to more robust code, as all potential error cases are accounted for at compile time, reducing the likelihood of unhandled exceptions at runtime. Either in Arrow integrates well with other functional constructs and can be easily combined and transformed, making it a powerful tool for building complex error-handling flows in a declarative and expressive way. Overall, typed errors and the use of Either in Arrow provide a more type-safe, explicit, and functional approach to error handling in Kotlin, enhancing code clarity and reliability.
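To illustrate the idea without pulling in the dependency, here is a minimal hand-rolled Either; Arrow’s arrow-core version is far richer (fold, map, the either { } builder, and more), but the shape is the same:

```kotlin
// Minimal Either for illustration only — not Arrow's actual class.
sealed class Either<out L, out R> {
    data class Left<L>(val value: L) : Either<L, Nothing>()
    data class Right<R>(val value: R) : Either<Nothing, R>()
}

sealed interface ParseError { object NotANumber : ParseError }

// The error type is part of the signature — callers must deal with it.
fun parseAge(raw: String): Either<ParseError, Int> =
    raw.toIntOrNull()?.let { Either.Right(it) }
        ?: Either.Left(ParseError.NotANumber)

fun main() {
    println(parseAge("42"))      // Right(value=42)
    println(parseAge("forty"))   // Left(NotANumber)
}
```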


Question 2 Describe the role of TVars in Arrow’s STM and explain how they differ from regular variables.

Answer Transactional Variables (TVars) in Arrow’s Software Transactional Memory (STM) play a crucial role in managing state in a concurrent environment. Here’s how they differ from regular variables:
• TVars are designed for safe concurrent access. Unlike regular variables, which can lead to race conditions and inconsistencies when accessed by multiple threads, TVars ensure safe and consistent state changes even in the presence of concurrent modifications.
• Modifications to TVars occur within transactions. These transactions can be composed, retried, and are atomic. This means changes to TVars either fully happen or don’t happen at all, ensuring data integrity.
• Changes made to TVars in a transaction are isolated until the transaction commits. This isolation differs from regular variables, where changes are immediately visible to other threads.
In short, TVars in STM provide a robust mechanism for handling shared mutable state in concurrent programming, enhancing safety and simplifying complex synchronization tasks.

Question 3 In what scenarios would using Arrow’s Optics library be more beneficial than using traditional methods for modifying immutable data?

Answer Optics simplifies the modification and querying of deeply nested immutable data structures, reducing complexity and boilerplate code. In applications with intricate domain models, Optics can make the code more readable and maintainable when dealing with immutable objects. For projects embracing functional programming, Optics aligns well with principles like immutability and referential transparency. Optics provides a type-safe way to work with immutable data, reducing the risk of runtime errors and making the code more expressive and declarative.


Chapter 11, Concurrent Microservices with Ktor

Question 1 How are the Ktor applications structured and what are their benefits?

Answer Ktor applications are structured around the concept of modules, where each module is essentially an extension function of the Application class. This modular approach offers several benefits:
• By dividing the application into different modules, each responsible for a specific aspect of the application (like authentication, routing, or handling specific types of requests), the code becomes more organized and manageable. This separation facilitates a cleaner architecture and easier maintenance.
• Modules in Ktor are designed to be reusable. You can easily reuse a module across different applications or in different parts of the same application, promoting code reuse and reducing redundancy.
• As each module encapsulates specific functionality, it allows for more focused and efficient testing. You can test different aspects of your application in isolation, making the testing process more straightforward and thorough.
• Modular architecture in Ktor aids in scalability. As the application grows, new features can be added as separate modules without affecting the existing code base significantly. This makes it easier to scale and evolve the application over time.
• Working with modules allows different teams to work on different parts of the application simultaneously without much interference, enhancing development speed and collaboration.
Overall, the modular structure of Ktor applications provides a flexible, maintainable, and scalable approach to building web applications, making it a popular choice for Kotlin developers.
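A minimal sketch of the module style (illustrative only; it assumes the Ktor 2.x server artifacts on the classpath and will not compile without them):

```kotlin
import io.ktor.server.application.*
import io.ktor.server.response.*
import io.ktor.server.routing.*

// Each module is an extension function on Application, so a feature such as
// a health check can be developed, installed, and tested in isolation.
fun Application.healthModule() {
    routing {
        get("/health") { call.respondText("OK") }
    }
}

fun Application.greetingModule() {
    routing {
        get("/hello") { call.respondText("Hello, Ktor!") }
    }
}
```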


Question 2 What are plugins in Ktor and what are they used for?

Answer In Ktor, plugins (formerly known as features) are components that are used to extend the functionality of a Ktor application. They address cross-cutting concerns and provide a modular way to add various capabilities to the application. Plugins are essential for handling common tasks and functionalities in web applications. Here are some key aspects and uses of plugins in Ktor:
• Plugins can add a wide range of functionalities to a Ktor application. This includes handling requests and responses, setting headers, managing sessions, handling authentication and authorization, and logging.
• Plugins like ContentNegotiation are used for serializing outgoing data and deserializing incoming data, supporting formats like JSON and XML.
• Routing in Ktor is itself implemented as a plugin. It allows defining the routes and handling HTTP requests in a structured and organized manner.
• Plugins can automatically handle response headers, manage CORS policies, and deal with other header-related tasks.
• Plugins contribute to the modular and flexible design of Ktor applications, allowing developers to easily add or remove functionalities as needed.
In summary, plugins in Ktor provide a powerful and flexible way to incorporate various functionalities into a Ktor application, making it easier to build robust and feature-rich web applications using Kotlin.

Question 3 What is the main problem that the Exposed library solves?

Answer The Exposed library in Kotlin is designed to solve the problem of complexity and boilerplate code associated with directly using Java Database Connectivity (JDBC) or SQL queries for database interactions. It provides a higher-level, idiomatic Kotlin API for working with databases, offering the following key benefits:
• Exposed offers a DSL for constructing SQL queries in a type-safe manner. This reduces the risk of SQL syntax errors and runtime issues, as queries are checked at compile time.
• Exposed abstracts the underlying database, making the code more portable and less dependent on specific database implementations. It supports popular databases like PostgreSQL and MySQL.
• Exposed simplifies create, read, update, and delete (CRUD) operations and other common database tasks, reducing the amount of boilerplate code needed.
In essence, Exposed addresses the complexities and verbosity of traditional SQL and JDBC approaches by offering a more Kotlin-friendly way of interacting with databases, blending the power of SQL with the simplicity and expressiveness of Kotlin.

Chapter 12, Reactive Microservices with Vert.x

Question 1 What’s a verticle in Vert.x?

Answer In Vert.x, a verticle is a key component of the framework’s reactive architecture. It is similar to an actor in the Actor model and serves as a building block for writing asynchronous, non-blocking, and reactive applications. A verticle represents a lightweight execution unit that encapsulates a portion of your application’s business logic. Here are some important aspects of a verticle:
• Reactive and asynchronous: Verticles are designed to handle events in a non-blocking and reactive manner, making them suitable for handling IO-intensive and scalable applications.
• Verticles allow developers to divide their applications into smaller, manageable, and modular units of deployment. Each verticle can be developed, deployed, and scaled independently, promoting a microservices-like architecture.
• By encapsulating specific functionalities or business logic, verticles help in organizing the code base and separating concerns, enhancing maintainability and testability.
• Verticles can be deployed multiple times within a single Vert.x instance, and the Vert.x runtime can distribute events to them in a load-balanced manner. This enables efficient utilization of system resources and aids in building highly scalable applications.
In summary, a verticle in Vert.x is a fundamental unit of modularity and execution that facilitates building scalable, maintainable, and reactive applications by structuring the business logic into small, event-driven, and independently deployable components.
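A minimal sketch of a verticle (illustrative only; it assumes the vertx-core artifact on the classpath and will not compile without it):

```kotlin
import io.vertx.core.AbstractVerticle

// One verticle encapsulates one unit of business logic; Vert.x calls
// start() when the verticle is deployed on an event-loop thread.
class HelloVerticle : AbstractVerticle() {
    override fun start() {
        vertx.createHttpServer()
            .requestHandler { req -> req.response().end("Hello from a verticle") }
            .listen(8080)
    }
}

// Deployment, for example from main:
// Vertx.vertx().deployVerticle(HelloVerticle())
```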


Question 2 What’s the goal of the Event Bus?

Answer The Event Bus in Vert.x is a core component of the framework and serves as the backbone for communication between different parts of a Vert.x application. Its main goals are:
• The Event Bus supports communication between verticles and other components while minimizing tight coupling. In this model, components communicate by exchanging messages without the need for direct dependencies, as they connect through subscription mechanisms instead. This method encourages a modular and more easily maintained architecture.
• It enables asynchronous message passing, which is fundamental to building reactive applications. This helps in handling operations that are IO intensive or time consuming without blocking the Event Loop.
• By facilitating message passing between verticles, the Event Bus helps in distributing the workload and managing traffic, which is essential for scaling applications.
• It supports various messaging patterns, including point-to-point, request-response, and publish-subscribe, providing flexibility in how components interact with each other.
• The Event Bus helps in building fault-tolerant systems where components can communicate and coordinate their actions even in the presence of failures.
In essence, the Event Bus is a powerful mechanism in Vert.x that enables efficient, scalable, and maintainable inter-component communication in a distributed and reactive environment. It is pivotal in leveraging the asynchronous, event-driven nature of Vert.x applications.

Question 3 Why shouldn’t we block the Event Loop?

Answer In event-driven frameworks like Vert.x, the Event Loop is a critical component responsible for handling incoming events, such as IO operations, in a non-blocking and asynchronous manner. Here are the key reasons why blocking the Event Loop can be detrimental:
• The Event Loop operates with a limited number of threads (often equal to the number of CPU cores). These threads are designed to handle a large number of concurrent operations efficiently by quickly processing and dispatching events.
• When a thread in the Event Loop is blocked, it cannot process other incoming events or requests. This can lead to a significant performance bottleneck, as other operations have to wait for the blocked thread to become available.
• Blocking the Event Loop undermines the scalability of the application. The event-driven architecture is meant to handle many simultaneous operations with a small number of threads. Blocking these threads prevents the system from scaling effectively under high load.
• For user-facing applications, blocking the Event Loop can result in a noticeable delay in processing user interactions, leading to a poor user experience.
To maintain high performance, scalability, and responsiveness, it’s crucial to avoid blocking the Event Loop. Instead, long-running or blocking operations should be handled using asynchronous APIs, worker verticles, or other mechanisms that offload such tasks from the Event Loop.

Learn more on Discord
Join our community’s Discord space for discussions with the author and other readers: https://discord.com/invite/xQ7vVN4XSc

packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?
• Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals
• Improve your learning with Skill Plans built especially for you
• Get a free eBook or video every month
• Fully searchable for easy access to vital information
• Copy and paste, print, and bookmark content

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Other Book You May Enjoy
If you enjoyed this book, you may be interested in this other book by Packt:

Mastering Kotlin for Android 14
Harun Wangereka
ISBN: 9781837631711
• Build beautiful, responsive, and accessible UIs with Jetpack Compose
• Explore various app architectures and find out how you can improve them
• Perform code analysis and add unit and instrumentation tests to your apps
• Publish, monitor, and improve your apps in the Google Play Store
• Perform long-running operations with WorkManager and persist data in your app
• Use CI/CD with GitHub Actions and distribute test builds with Firebase App Distribution
• Find out how to add linting and static checks on CI/CD pipelines


Packt is searching for authors like you
If you’re interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Share your thoughts
Now you’ve finished Kotlin Design Patterns and Best Practices, Third Edition, we’d love to hear your thoughts! If you purchased the book from Amazon, please click here to go straight to the Amazon review page for this book and share your feedback or leave a review on the site that you purchased it from. Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Index

A
abstract classes 33
Abstract Factory design pattern 56-58 casts 58, 59 Factory Methods collection 61-63 smart casts 60, 61 subclassing 59, 60 variable shadowing 61
Adapter design pattern 85-88 adapters, in real world 89 existing code, adapting 88, 89 limitations 90
Algebraic Data Types (ADTs) 292-294
alias 9
animal choir example, Observer pattern 158-162
arguments 9
arrays 20
Arrow working with 318
Arrow Resilience library 345 retry and repeat 346, 347
asynchronicity making explicit 304, 305

B
backpressure 224
Barrier design pattern 262-264
basic types 10
Bridge design pattern 91, 92 changes, bridging 93 constants 94, 95 type aliasing 94
Builder design pattern 63-67 default arguments 69, 70 fluent setters 67-69

C
casts 289 Chain of Responsibility 131-134 channels 232, 233 actors 235 buffered channels 235, 236 producers 234 CIO 366 Circuit Breaker design pattern 348 Closed State 348 example 349-351 Half-Open State 348 Open State 348


classes 28, 29 code structure 5 comments 7 Hello Kotlin 8 less verbose print function 9 naming conventions 5 no arguments 9 no semicolons 10 no static modifier 9 no wrapping class 8 packages 6 coding conventions reference link 6 collections alternative implementations 19 higher-order functions 225 Command design pattern 125-129 advantages 130 commands, undoing 130 comments 7 Communicating Sequential Processes (CSP) 234

tables, defining with Exposed 374, 375 tests, making consistent 377, 378 connection pool 404 constants 299 using efficiently 299, 300 constructor overload 300-302 context receivers 310-314 control flow 21 if expression 21, 22 when expression 22 coroutines 198, 199 canceling 215-217 key facts 203 starting 199-201 working 203-207 coroutineScope builder 214 creational patterns working with 45 Cucumber reference link 135 custom getters 30-32

comparison 13

custom setters 30-32

Composite design pattern 96, 97 composites, nesting 99, 100 secondary constructors 98 varargs keyword 98, 99

D

concurrency achieving, in Ktor 390

databases working with 403-407

concurrent data structures exploring 230

data classes 36

configuration management, in Ktor 372, 373 all entities, fetching 378-380 entities, creating 375-377 single entity, fetching 381, 382

daemon thread 192 creating 192

data structures 16 lists 16, 17 maps 18 mutability 18 sets 17


deadlock 276-279 Decorator pattern 76 class, enhancing 76, 77 inheritance problem 78-81 limitations 84, 85 operator overloading 82-84


F Facade design pattern 100-102 Factory Method design pattern 51 example 51, 52 Static Factory Method 53-55

Deferred Value design pattern 260-262

Fan-In design pattern 270-272

design patterns 39 in real life 39, 40 misconceptions 39 using, in Kotlin 41

Fan-Out design pattern 268-270

design process 40 dispatchers 208-210 switching 210, 211 Document Object Model (DOM) 267 domain-specific language (DSL) 135

E elastic principle 223 Elvis operator 77 equality 13 reference link 13 equals() method 13 error handling 241 Event Bus 410 communicating with 410, 411 JSON, sending over 411, 412 Event Loop design pattern 407-409 exception handling 241 exceptions, catching 242 expressions using 183 extension functions 37, 38

flow collection handling 242, 243 optional retrying 244 retrying 243 flows 237-240 buffering 241 builders 249, 250 cancellation 248 combining 253-256 conflating 251, 252 rate-limiting 252 flow sharing 245 cold sharing 245 hot sharing 245 shareIn() function 245-248 Flyweight design pattern 102-104 limitations 105 memory, saving 104 for-each loop 25 for loop 26, 27 functional programming 168 benefits 168 immutability 169 pattern matching 183, 184 recursion 185, 186 tuple 172


functions declaring 14 functions as values 173-175 closures 176 currying 179-181 higher-order functions 175 it notation 176 memoization 181, 182 pure function 177, 178

H Hello Kotlin 8 higher-order functions, on collections 225 code execution, for each element 227 elements, filtering 226 elements, finding 226 elements, mapping 225 elements, summing up 228, 229 nesting, getting rid of 229, 230 high-level concurrency 334 CyclicBarrier 337, 338 parallel operations 335, 336 Racing design pattern 338 Resource 339, 340 HOCON 372

I if expression 21, 22 immutability 169 immutable collections 169, 170 shared mutable state, limitations 170, 171 immutable data 353-357 inheritance 28, 35, 36 inline functions 291 input validation 305-307

interfaces 32 Interpreter design pattern 135 call suffix syntax 140 DSL-for-SQL language 136-140 DSL Marker 140 Iterator design pattern 117-120

J Java package naming rules reference link 6 Java records versus Kotlin data classes 37 Java virtual machine (JVM) 19, 190 jobs 201-203

K Kenny 142 Kotest reference link 135 Kotlin goals 4 Kotlin data classes versus Java records 37 Ktor concurrency, achieving 390 configuration management 372-374 database, connecting to 371, 372 routes, organizing 383-386 working with 362-367

L lists 16, 17 loops 25 for-each loop 25


for loop 26, 27 while loop 27, 28


Plain Old Java Object (POJO) 36 private keyword 34

M

properties 29, 30

maps 18

Prototype design pattern 70, 71, 76 prototype, starting from 72, 73

Mediator design pattern 141-144 limitations 146 middleman 144-146 Memento design pattern 146-149

protected keyword 34

Proxy design pattern 106, 107 lazy delegation 107

message-driven principle 224

R

multi-paradigm language 5

race condition 171

mutual exclusion (Mutex) 274-276 deadlock 276-279

Racing design pattern 272 unbiased selection 273, 274

N

Reactive Manifesto 222 URL 222

Netty 366

Reactive principles elastic principle 223 message-driven principle 224 resilient principle 223 responsive principle 222

No-Arg Compiler plugin URL 301 Null Pointer Exception (NPE) issue 4 nulls dealing, with 302-304 null safety 15, 16

O Observer design pattern 157, 158 animal choir example 158-162 onCompletion function 242

P

recursion 185, 186 recursive functions 295-297 reified generics 297, 298 request routing, in Ktor 367, 368 HTTP services, connecting to 370 service, testing 368, 369 requests, Ktor routing 367, 368

packages 6

requests, Vert.x handling 398, 399 routing 396, 397 subrouting 399, 400

pattern matching 183, 184

resilience 345

Pipeline design pattern 266-268

resilient principle 223

package-level functions 8


resource allocation graph 278

static keyword 9

responsive principle 222

Strategy design pattern 112-114 functions, as first-class citizens 114-116

route organization, in Ktor 383-386 entity, deleting 387 entity, updating 388-390

S Saga pattern 351 implementing 352, 353 Scheduler design pattern 264-266 scope functions 286 also() function 287 apply() function 287 let() function 286 run() function 288 with() function 288 sealed hierarchies versus, enums 308, 309 semicolons 10

string interpolation 23, 24 structured concurrency 211-214 coroutine, canceling 215-217 coroutineScope builder 215 timeouts, setting 218, 219

T Template method 153-157 text working with 23 threads 190 costing 196-198 creating 191, 192 daemon thread 192 thread safety 192-194

sequences 230, 232

thread synchronization mechanisms 195, 196

Service-Oriented Architecture (SOA) 39

TreeMap 19

sets 17

try-with-resources statement alternative 290

Sidekick design pattern 280, 281 single-expression function 14 Singleton pattern 46-50 Sliding Window strategy 349 Software Transactional Memory (STM) 341-344, 352 State design pattern 120 fifty shades 120-122 state of nation 123, 124 statement 183 Static Factory Method design pattern 53 advantages 53 demonstration 54, 55

tuples 172, 173 type checks 289 typed errors 318-323 advantages 333, 334 failures, collecting 326-328 Ior wrapper 331, 332 Optional wrapper 331 Raise type 323-326 Result wrapper 330 smart constructors 328-330 type erasure 297 type inference 11, 12

Index

types 10 basic types 10

U universally unique identifier (UUID) 201 UpperCamelCase 6

V values 12 variance annotations contravariance 293 covariance 293 invariant 293 verticles 397, 398 Vert.x 394 applications, testing 400-402 requests, handling 398, 399 requests, routing 396, 397 requests, subrouting 399, 400 working with 394, 395 visibility modifiers 34 Visitor design pattern 149, 150 crawler, writing 150-153

W when expression 22 while loop 27, 28

445

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don't worry: with every Packt book, you now get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don't stop there. You can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

1. Scan the QR code or visit the link below:

   https://packt.link/free-ebook/9781805127765

2. Submit your proof of purchase.
3. That's it! We'll send your free PDF and other benefits to your email directly.