ASP.NET Core: Cloud-ready, Enterprise Web Application Development 9781788296526

3,467 340 25MB

English Pages [1047] Year 2017


Table of contents:
Cover
Credits
Table of Contents
Preface
Module 1: Learning ASP.NET Core MVC Programming
Module 1
Learning ASP.NET Core MVC Programming
Chapter 1: Introduction to ASP.NET Core
How web applications work
So what is an HTTP protocol?
Request-response pattern
Stateless nature of HTTP
Client-side and server-side
HTTP methods
GET method
POST method
What is ASP.NET?
ASP.NET Web Forms
ASP.NET Web Pages
ASP.NET MVC
Features of ASP.NET MVC
Convention over configuration
Separation of concerns
Control over the generated HTML
Better support for unit testing
ASP.NET 5
Features of ASP.NET 5
Summary
Chapter 2: Setting Up the Environment
Purpose of IDE
Visual Studio offerings
System requirements
Visual Studio Community 2015
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional
Installing Visual Studio Community
Installing ASP.NET 5
Project structure in ASP.NET 5 application
File-based project
Support for full .NET and .NET core
The Project.json package
Controllers
Models
Views
Migrations
The wwwroot folder
Other files
Summary
Chapter 3: Controllers
Role of the Controller in ASP.NET MVC applications
Introduction to routing
Creating ASP.NET 5 application
Installing the ASP.NET Core NuGet package in your application
Our first Controller
IActionResult
Adding Views
Adding Models
Passing data from Controller to View
Filters
Authorization filters
Resource filters
Action filters
Exception filters
Result filters
Summary
Chapter 4: Views
The View engine and the Razor View engine
Razor View engine
Programming in Razor View engine
Variables in Razor View
The for loop
The while loop
The foreach loop
The if condition
Layout
Creating _ViewStart.cshtml
Creating _Layout.cshtml
Adding a page-specific View
Generating HTML
HTML Helpers
Partial View
Calling the Partial View
View components
Creating a View component
Tag Helpers
Creating custom Tag Helpers
Summary
Chapter 5: Models
Models
Models specific to a View component
Note on ViewModels
Data flow with respect to a Model
Model binding
The Entity Framework
The Entity Framework is independent of ASP.NET MVC
Creating console applications with the Entity Framework
Installing the Entity Framework 7 NuGet package
Using the NuGet Package Manager
Using the Package Manager Console
Installing Entity Framework commands
Creating Model classes
Creating the DbContext class
Create a migration
How the SaveChanges method works
Updating the record
Deleting the record
Using the Entity Framework in ASP.NET MVC applications
Database migration
Summary
Chapter 6: Validation
Client-side and server-side validation
Server-side validation
Updating View models with the Data Annotation attribute
Updating the View model to display the validation error message
Updating the controller action method to verify the model state
Client-side validation
Implementation
Summary
Chapter 7: Routing
Convention-based routing
Example 1
Example 2
Example 3
Example 4
Attribute-based routing
Route attribute at the controller level
Passing routing values in HTTP action verbs in the Controller
Route Constraints
Summary
Chapter 8: Beautifying ASP.NET MVC Applications with Bootstrap
Knowing HTML and CSS
Bootstrap
Bootstrap Grid system
Forms
Inline forms
Horizontal forms
Table CSS classes
Striped tables
Hover tables
Bordered tables
Contextual classes in table
Buttons
Button sizes
Contextual colors
Using Bootstrap in your ASP.NET MVC application
Installing with Bower
HTML doctype
Summary
Chapter 9: Deployment of ASP.NET Core Application
The project.json file
The dependencies node
The frameworks node
Microsoft Azure
Signing up to Microsoft Azure
Prerequisites to Azure deployment
Deploying the ASP.NET Core application in Azure
Deploying the ASP.NET Core web application in the Linux environment
Creating a Linux machine
Installing the PuTTY client
Installing .NET Core in a Linux machine
Creating a new ASP.NET 5 project
Configuring the Nginx server
Summary
Chapter 10: Building HTTP-based Web Services Using ASP.NET Web API
HTTP basics
HTTP verbs
GET method
PUT method
POST method
DELETE method
Fiddler tool
Dependency Injection
Delete method
The Web API method for deleting a resource
Web Repository layer code for deleting the employee data
Summary
Chapter 11: Improving Performance of an ASP.NET Core Application
The UI layer
Reducing the number of HTTP requests
Using GZip compression
Using the Content Delivery Network (CDN)
Using JavaScript wherever possible
Using CSS stylesheets
Minification of JavaScript and CSS files and their combination
The caching process
Client-side caching
Response caching
The web/application layer
No business logic in Views
Using asynchronous logging
The DB layer
Understanding the queries generated by the ORM
Using classic ADO.NET if you really want to
Return only the required data
Fine tuning the indices
Using the correct column type and size for your database columns
Avoiding correlated subqueries
Generic performance improvement tips
Avoiding the Response.Redirect method
Using string builder
Summary
Chapter 12: ASP.NET Core Identity
Authentication
Authorization
Adding the relevant dependencies to the project.json file
Adding ApplicationUser and ApplicationDbContext classes
Configuring the application to use Identity
Creating ViewModels
Creating Controllers and associated action methods
Creating Views
E-mail and SMS services
Securing an action method in a Controller
Summary
Module 2: ASP.NET Core and Angular 2
Module 2
ASP.NET Core and Angular 2
Chapter 1: Getting Ready
Two players, one goal
What's new in Angular 2?
The ASP.NET Core revolution
The field of choice – single-page application
Common features of a competitive SPA
Product owner expectations
A sample single-page application project
The vision
Core requirements
Preparing the workspace
Setting up the project
Package and resource managers
Installing the packages
NuGet and ASP.NET
Project.json
Startup.cs
Testing the HTTP request pipeline
Newtonsoft.Json
JavaScript, TypeScript, or Dart?
Adding the tsconfig.json file
Introducing NPM
Working with Gulp
Dependent tasks
Using Grunt instead of Gulp
Adding Angular 2
Using NPM
Adding Typings
Using a CDN
Upgrading the typescriptServices.js file
Setting up the client code
The component file
A word on components and modules
The module file
The bootstrap file
The module loader configuration file
Why use a dynamic module loader?
The index.html file
First run
Caching issues
The new ASP.NET Core configuration pattern
A faster alternative using scaffolding tools
The reasons to say no (not yet)
A quick scaffolding sample
References
Suggested topics
Summary
Chapter 2: ASP.NET Controllers and Server-Side Routes
Data flow
The role of the ViewModel
Our first Controller
The ItemViewModel
The ItemsController
Understanding routes
Defining routing
Routing through the ages
Handling routes in ASP.NET Core
Convention-based routing
Attribute-based routing
Three choices to route them all
Adding more routes
The dummy data provider
Dealing with single items
Suggested topics
Summary
Chapter 3: Angular 2 Components and Client-Side Routing
Master-detail binding
The Item class
The ItemService class
The ItemListComponent class
A quick implementation test
The ItemDetailComponent class
Adding the @angular/forms library
Updating the root module
Connecting the dots
Testing it up
Adding additional lists
Multiple component instances
Two-way data binding
Disabling two-way data binding
Client-side routing
Adding the @angular/router library
PathLocationStrategy versus HashLocationStrategy
Refactoring our app
Adding the AppRouting scheme
Adding the HomeComponent
Adding new components
AboutComponent
LoginComponent
PageNotFoundComponent
Updating the root module
Revising the master-detail strategy
Adding the ItemDetail route
Changing the ItemListComponent
Updating the ItemDetailComponent
Full routing test
Handling rewrites in ASP.NET Core
Showing the issue
Understanding the causes
Implementing the fix
Validating the outcome
Suggested topics
Summary
Chapter 4: The Data Model
Getting ready
Installing the EntityFramework Core
Data modeling approaches
The model-first approach
Pros
Cons
The database-first approach
Pros
Cons
The code-first approach
Pros
Cons
Making a choice
Creating entities
Items
Comments
Users
Defining relationships
Setting up DbContext
Database initialization strategies
Choosing the database engine
Updating appsettings.json
Creating the database
Updating Startup.cs
Configuring the EF tools
Adding the initial migration
Understanding migrations
Implementing a data seed strategy
Creating a DbSeeder class
Private members
Constructor
Public methods
Seed methods
Utility methods
Adding the DbSeeder to Startup.cs
Handling exceptions
Seeding the database
Updating the ItemsController
Installing TinyMapper
Implementing the mapping
Testing the data provider
Suggested topics
Summary
Chapter 5: Persisting Changes
Add, update, and delete items
Updating the Web API
Adapting the client
Improving the ItemService
Updating the GUI
Add new
Update and Delete
Testing it out
Adding a new test
Update test
Delete test
Splitting the ItemDetail component
Adding the ItemDetailView component
Refactoring ItemDetail into ItemDetailEdit
Updating the Root module
Updating the Routes
Tracking the outdated references
Implementing the tab menu
Template
Class code
Styles
Testing it out
Suggested topics
Summary
Chapter 6: Applying Styles
How bad is it, doc?
Introducing LESS
Style sheet languages
CSS
CSS code sample
What is LESS and why to use it
Variables
Import directives
Nested selectors
Mixins
Extend pseudo-class
LESS docs and support
Systematically Awesome Style Sheets, Stylus, and other alternatives
Configuring LESS
Adding the LESS file
Updating Gulp script
Linking the stylesheet
Testing it up
DIY versus framework-based styling
Do it yourself approach
Pros
Cons
Framework-based approach
Pros
Cons
Conclusions
Adding Bootstrap
Choosing a native directives module
Installing ng2-bootstrap
Applying styles
Main navigation bar
Detecting the active route
Welcome View
Item Detail View
Display mode
Edit mode
Conclusions
Suggested topics
Summary
Chapter 7: Authentication and Authorization
Do we really need these?
Authentication
Third-party authentication
Authorization
Third-party authorization
Proprietary or third-party?
Choosing an authentication mode
Installing AspNetCore.Identity
Adding the package
Updating the project classes
ApplicationDbContext.cs
ApplicationUser.cs
Startup.cs
DbSeeder.cs
Updating the database
Authentication methods
Sessions
Tokens
Signatures
Two-factor
Conclusions
Implementing JSON web token authentication
JWT provider
Private members
Static members
Constructor
Public methods
Private methods
Extension methods
Full source code
Adding the middleware to the pipeline
Angular 2 login form
Adding styles
Updating the root module file
UI and validation test
AuthService component
Updating the AppModule
Updating the LoginComponent
Login test
AuthHttp wrapper
Adding authorization rules
Adapting the client
Testing the client
Protecting the server
Retrieving the user ID
Authorization test
Suggested topics
Summary
Chapter 8: Third-Party Authentication and External Providers
Introducing OpenID connect servers
Installing OpenIddict
Adding MyGet to the package sources
Installing the OpenIddict package
Updating ApplicationUser and ApplicationDbContext
Adding a new migration
Updating the DbSeeder
Moving literal values to appsettings.json
Configuring the Startup class
Updating the Web.Config rewrite rules
Seeding the database
External authentication providers
OAuth2 authorization flow
Facebook
Creating a Facebook App
Storing the keys in App settings
Adding the Facebook middleware
Adding the AccountsController
BaseController
AccountsController
Namespaces
Class declaration
Constructor
ExternalLogin
ExternalLoginCallBack
Logout
Updating the ItemsController
Configuring the client
Updating the LoginComponent
Updating the AuthService
Updating the AppComponent
Understanding zones
Testing it out
Google
Twitter
Elevated permissions request
Troubleshooting
Conclusions
Suggested topics
Summary
Chapter 9: User Registration and Account Edit
User Registration
UserViewModel
AccountsController
Get
Add
E-mail confirmation
Update
Delete
User class
AuthService
UserEditComponent
Connecting the dots
Updating the root module
Implementing the route
Adding the Register link
Defining the styles
Updating the menu
Testing it out
Edit Account
Two routes, one component
Adapting the UserEditComponent
Updating the navigation menu
Adding styles
Final test
Conclusions
Suggested topics
Summary
Chapter 10: Finalization and Deployment
Switching to SQL Server
Installing SQL Server 2016 Express
Installing SQL Server Management Studio
Configuring the database
Changing the authentication mode
Adding the OpenGameList database
Adding the OpenGameList login
Mapping the login to the database
Adding a SQL Server connection string
Creating an application settings file for production
Updating AppSettings
Updating external providers
Configuring the publishOptions
Checking the Startup class
Updating the launchSettings.json
Publishing our native web application
Creating a Publish profile
File System Publish profile
FTP Publish profile
Configuring IIS
Installing the ASP.NET Core module for IIS
Adding the website
Configuring the Application Pool
Firing up the engine
Troubleshooting
The Kestrel test
Suggested topics
Summary
Module 3: ASP.NET Core 1.0 High Performance
Module 3
ASP.NET Core 1.0 High Performance
Chapter 1: Why Performance Is a Feature
Performance as a feature
Common classes of performance problems
Language considerations
Types of performance problems
When performance matters
Slower is sometimes better
Why issues are missed
Measuring
The benefits of planning ahead
Understanding hardware
Storage access speeds
Scaling approach changes
Tools and costs
Tools
Looking at some alternative tools
The new .NET
Summary
Chapter 2: Measuring Performance Bottlenecks
Tools
SQL
SQL Server Profiler
Executing a simple query
MiniProfiler
Application profiling
Glimpse
Using Glimpse
IDE
Monitoring HTTP
Browsers
Chrome
Firefox
Fiddler
Network
Microsoft Message Analyzer
Wireshark
Roll your own
Science
Repeatability
Only change one thing
Summary
Chapter 3: Fixing Common Performance Problems
Latency
Asynchronous operations
Simple asynchronous tools
Background queuing
Hangfire
Select N+1 problems
Efficient paging
Static site generators
Pragmatic solutions with hardware
A desktop example
Web applications
Oversized images
Image resolution
Image format
Summary
Chapter 4: Addressing Network Performance
Internet protocols
TCP/IP
Slow-start
HTTP
Headers
HTTP methods
Status codes
Encryption
Key exchange
Delay diagnostics
Performance tweaks
HTTP/2
WebSockets
Compression
Lossless compression algorithms
Bundling and minification
Bundling
Minification
Changes in ASP.NET Core
Image optimization
PNG
JPEG
Other image formats
Resizing images
Caching
Browser
Server
Proxy servers between you and your users
CDNs
Summary
Chapter 5: Optimizing I/O Performance
Input/output
Categories of I/O
Disks
Virtual file systems
Databases
APIs
Network diagnostics tools
Ping
Tracert
Nslookup
Build your own
Solutions
Batching API requests
Efficient DB operations
Database tuning
Reporting
Aggregates
Sampling
Inserting data
GUIDs
Advanced DB topics
Simulation and testing
Summary
Chapter 6: Understanding Code Execution and Asynchronous Operations
Getting started with the core projects
.NET Core
ASP.NET Core
Kestrel
Data structures
Lists
Dictionaries
Collection benchmarks
Bloom filters
Hashing and checksums
Hashing benchmarks
Serialization
SIMD CPU instructions
Parallel programming
Task Parallel Library
Parallel LINQ
Parallel benchmarking
Parallel programming limitations
Practices to avoid
Reflection
Regular expressions
String concatenation in tight loops
Dynamic typing
Synchronous operations
Exceptions
Summary
Chapter 7: Learning Caching and Message Queuing
Why caching is hard
Web caching
Caching background
HTTP headers
Cache busting
Service workers
Service worker example
Web and proxy servers
IIS
Varnish
Working with a content delivery network
When not to cache
Application layer caching
Redis
Database result set caching
Message queuing
Coffee shop analogy
Message queuing styles
Common messaging patterns
Unicast
Pub/sub
RabbitMQ
Queuing frameworks and libraries
Summary
Chapter 8: The Downsides of Performance-Enhancing Tools
Managing complexity
Understanding complexity
Complexity reduction
Frameworks
Architecture
Monolith versus microservices
Architecture comparison
Refactoring
A culture of high performance
A blameless culture
Intellectual dishonesty
Slow down to go faster
From the ground up
Shared values
The price of performance
Distributed debugging
Logging
Error logging
Application Insights
Integrated logging
Centralized logging
Statistics
Managing stale caches
Summary
Chapter 9: Monitoring Performance Regressions
Profiling and measurement
Testing
Automated testing
Continuous integration
Slow testing
Fixing performance regressions
Load testing
Realism
Realistic environments
Realistic workloads
Feature switching
Experimenting for science
A/B testing
User interface testing
Web UI testing tools
Automating UI performance tests
Staying alert
DevOps
DevOps tooling
Provisioning
Monitoring
Hosting
Summary
Chapter 10: The Way Ahead
Reviewing what we learned
Further reading
Going native
Processor architecture
Hardware is hard
Machine learning
Big data and MapReduce
Orleans
Custom transports
Advanced hashing
Library and framework support
The future
Summary
Bibliography
Thanks page
Index

Citation preview


ASP.NET Core: Cloud-ready, Enterprise Web Application Development

Create fast, scalable, and high performance applications with C# and ASP.NET Core

A course in three modules

BIRMINGHAM - MUMBAI


ASP.NET Core: Cloud-ready, Enterprise Web Application Development

Copyright © 2017 Packt Publishing

All rights reserved. No part of this course may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this course to ensure the accuracy of the information presented. However, the information contained in this course is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this course.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this course by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Published on: June 2017
Production reference: 1020617

Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.

ISBN-13: 978-1-78829-652-6

www.packtpub.com


Credits

Authors: Mugilan T. S. Ragupathi, Valerio De Sanctis, James Singleton

Reviewers: Anuraj Parameswaran, Mustafa Erhan Ersoy, Anand Ranjan Pandey, Vincent Maverick Durano, Jason De Oliveira

Content Development Editor: Kinnari Sanghvi

Graphics: Abhinash Sahu

Production Coordinator: Shraddha Falebhai



Preface

If your website is dynamic and requires server-side scripting, you can use ASP.NET, but the fact that you are using AngularJS on the client won't make your server-side code follow MVC. If you want to structure your server-side application as MVC, you would rather pick ASP.NET MVC. This learning path aims to help you learn the fundamentals of ASP.NET Core MVC and apply that knowledge to building applications using ASP.NET Core. It also serves as a solid guide for beginners who want to learn ASP.NET MVC.

The learning path demonstrates that it is still possible to develop great web applications from scratch, provided that we use the proper tools and have the patience to understand how to blend them together to achieve what we want. Luckily, the considerable amount of additional work required nowadays can be mitigated by the many powerful frameworks that have made their way into the development scene: server-side runtime environments such as ASP.NET and Node.js, and client-side frameworks such as Bootstrap, jQuery, React, AngularJS, and Angular 2. We chose to focus on two of them, ASP.NET Core to cover the server-side aspects and Angular 2 to deal with the client side, not only for their stunning set of features and elegant design, but also because they have something revolutionary in common: both are complete rewrites of their massively popular previous installments, which played a leading role in their respective fields.

The final module will make you realize how important performance is for web applications. We will introduce performance improvements along with the trade-offs that they entail. We will strike a balance between premature optimization and inefficient code by taking a scientific, evidence-based approach, focusing on the big problems and avoiding changes that have little impact. This book addresses web application performance-improvement techniques from both a general standpoint (HTTP, HTTPS, HTTP/2, TCP/IP, database access, compression, I/O, asset optimization, caching, message queuing, and other concerns) and from a C#, ASP.NET Core, and .NET Core perspective. This includes delving into the details of the latest frameworks and demonstrating software design patterns that improve performance. We will highlight common performance pitfalls, which often go unnoticed on developer workstations, along with strategies to detect and resolve these issues early. By understanding and addressing these challenges upfront, you can avoid nasty surprises at deployment time.


What this learning path covers

Module 1, Learning ASP.NET Core MVC Programming, introduces you to the fundamentals of ASP.NET and how ASP.NET MVC fits into the ASP.NET ecosystem, and helps you set up the development environment. You will learn what an action method and a Controller do from the perspective of an overall ASP.NET MVC application. It explains the basic programming constructs (conditionals, loops, and so on) with examples using the Razor view engine. You will also learn the role of models and how to use the JavaScript and jQuery libraries for validation. Further, you will learn about the routing module and the deployment of an ASP.NET Core application. You will work with HTTP-based services and learn how to implement them using the Web API. It also explains how to analyze performance, the measures for improvement in the various layers of your application, and the security aspects of your application.

Module 2, ASP.NET Core and Angular 2, introduces the ASP.NET Core and Angular 2 frameworks. You will learn how to handle the HTTP request-response cycle and how to configure the improved routing logic built upon the .NET Core pipeline. We will also explain how client-side routing works in a SPA and what we need to do to implement a viable routing pattern for our project using the PathLocationStrategy. You will learn how to build a proper, DBMS-based data model using Entity Framework Core and update the application code to make full use of the EF Core entities defined in the data model. You will learn how to apply styles and the most relevant auth-related concepts for building a web application. The module then expands the authentication and authorization topics by introducing OpenIddict, and discusses account-related features such as user registration and an edit-account mechanism. Finally, it describes the most common tasks needed to publish a potentially shippable web application onto a production server.

Module 3, ASP.NET Core 1.0 High Performance, shows you why you need to care about the performance of your software. We will highlight a few ways of manually monitoring performance and some helpful tools that you can use to measure statistics. It looks at some of the most frequent performance mistakes and shows you how to fix simple issues across a range of different application areas. It digs into the networking layer that underpins all web applications. You will learn how to batch your requests and how to optimize database usage with aggregates or sampling, aiming to reduce the data and time required. You will learn about caching and message queuing, along with the downsides of performance-enhancing tools. It will explain how to measure performance from an automation and Continuous Integration (CI) perspective. Lastly, it briefly sums up the lessons of the modules and then looks at some advanced topics that you may like to read more about.


What you need for this learning path

To start programming ASP.NET MVC applications, you will need the Visual Studio Community 2015 IDE. This is a fully featured IDE available for building desktop and web applications. You will also need various packages and frameworks, such as NuGet, Bootstrap, and project.json, the installation and configuration of which will be explained in the course. You should acquire:

Windows 7 SP1 or newer, up to and including Windows 10.
Visual Studio 2015 with Update 3 (or newer): any version will work, including the freely available Community Edition.
Microsoft SQL Server 2014 (or newer) for Chapter 10, Finalization and Deployment, only: any version will work, including the freely available Express Edition.
Windows Server 2008 R2 (or newer) for Chapter 10, Finalization and Deployment, only.

All ASP.NET, Angular 2, JavaScript, and CSS packages used throughout the book are open source and freely available for download using Visual Studio package managers such as NuGet, MyGet, NPM, and Bower. You will also need a development environment to follow the code examples in the third module: either Visual Studio Community 2015 or Visual Studio Code if you're not on Windows. You can also use your text editor of choice and the .NET command-line tools. If you use Visual Studio, you should also install the .NET Core SDK and tooling and the latest NuGet extension. For some of the chapters, you will also need SQL Server 2014 Express. You can use 2016 too, particularly if you are on Linux; alternatively, you can use Azure and run against a cloud database. There are other tools that we will cover, but we will introduce these as they are used. The detailed software/hardware list is uploaded along with the code files.

Who this learning path is for

This book is for developers who want to learn to build web applications using ASP.NET Core, developers who want to make a career building web applications using Microsoft technology, and developers who are working on other web frameworks and want to learn ASP.NET Core.


Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this course—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of. To send us general feedback, simply e-mail [email protected], and mention the course's title in the subject of your message. If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt course, we have a number of things to help you to get the most from your purchase.

Downloading the example code

You can download the example code files for this course from your account at http://www.packtpub.com. If you purchased this course elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you. You can download the code files by following these steps:

1. Log in or register to our website using your e-mail address and password.
2. Hover the mouse pointer on the SUPPORT tab at the top.
3. Click on Code Downloads & Errata.
4. Enter the name of the course in the Search box.
5. Select the course for which you're looking to download the code files.
6. Choose from the drop-down menu where you purchased this course from.
7. Click on Code Download.

You can also download the code files by clicking on the Code Files button on the course's webpage at the Packt Publishing website. This page can be accessed by entering the course's name in the Search box. Please note that you need to be logged in to your Packt account.


Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR / 7-Zip for Windows
Zipeg / iZip / UnRarX for Mac
7-Zip / PeaZip for Linux

The code bundle for the course is also hosted on GitHub at https://github.com/PacktPublishing/ASP-NET-Core-Cloud-ready-Enterprise-Web-Application-Development. We also have other code bundles from our rich catalog of books, videos, and courses available at https://github.com/PacktPublishing/. Check them out!

Errata

Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our courses—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this course. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your course, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title. To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the course in the search field. The required information will appear under the Errata section.

Piracy

Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy. Please contact us at [email protected] with a link to the suspected pirated material. We appreciate your help in protecting our authors and our ability to bring you valuable content.


Questions

If you have a problem with any aspect of this course, you can contact us at [email protected], and we will do our best to address the problem.


Module 1

Learning ASP.NET Core MVC Programming

Learn the fundamentals of ASP.NET MVC and build real-world web applications using ASP.NET Core MVC


Chapter 1: Introduction to ASP.NET Core

ASP.NET Core, the latest version of ASP.NET MVC from Microsoft, is a server-side web application development framework that helps you build web applications effectively. It runs on top of the ASP.NET 5 platform, which enables your application to run on a wide variety of platforms, including Linux and Mac OS X. This opens up plenty of opportunities, and it is an exciting time to be a .NET developer.

In this chapter, you'll learn about the following topics:

Fundamental concepts about web applications: HTTP, client side, and server side
Three programming models of ASP.NET: ASP.NET Web Forms, ASP.NET Web Pages, and ASP.NET MVC
The philosophy of ASP.NET MVC
Features of ASP.NET Core and ASP.NET 5

Before discussing ASP.NET Core and its features, let us understand the fundamentals of web application development. I strongly believe in the principle that if you want to be an expert at something, you need to be very good at the fundamentals; it will help you debug issues and fix them. With that said, we are going to discuss the following key fundamentals:

How web applications work, and a bit about HTTP
Client side and server side
HTTP methods

Just three key concepts. No big deal!


How web applications work

All web applications, irrespective of whether they are built using ASP.NET MVC, Ruby on Rails, or any other new shiny technology, work on the HTTP protocol. Some applications use HTTPS (a secure version of HTTP), where data is encrypted before passing through the wire. But HTTPS still uses HTTP.

So what is an HTTP protocol?

HTTP stands for Hyper Text Transfer Protocol and is an application protocol designed for distributed hypermedia systems. "Hyper Text" in Hyper Text Transfer Protocol refers to structured text that uses hyperlinks for traversing between documents. Standards for HTTP were developed by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C). The current version of HTTP is HTTP/2, which was standardized in 2015. It is supported by the majority of web browsers, such as Internet Explorer, Chrome, and Firefox. The HTTP protocol (a protocol is nothing but a set of rules that govern communication) is a stateless protocol that follows the request-response pattern.

Request-response pattern

Before talking about the request-response pattern, let us discuss a couple of terms: client and server. A server is a computing resource that receives requests from clients and serves them. A server is typically a high-powered machine with a large amount of memory so that it can process many requests. A client is a computing resource that sends a request and receives the response; typically, this is a web browser or any other application that sends requests.

Coming back to the request-response pattern: when you request a resource from a server, the server responds to you with the requested resource. A resource could be anything: a web page, a text file, an image, or another data format.


You fire a request. The server responds with the resource. This is called a request-response pattern.
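To see the request-response pattern in code, here is a minimal C# console sketch (not taken from the book; the Wikipedia URL is just an illustrative target) that fires a GET request with HttpClient and prints the server's response:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class RequestResponseDemo
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // The client fires a request for a resource...
            HttpResponseMessage response =
                await client.GetAsync("https://en.wikipedia.org/wiki/ASP.NET_MVC");

            // ...and the server answers with a response: a status code plus the resource itself.
            Console.WriteLine((int)response.StatusCode);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}

Every interaction described in this chapter, whether it comes from a browser or from code like this, follows the same fire-a-request, receive-a-response cycle.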

Stateless nature of HTTP

When you request the same resource again, the server responds with the requested resource again, without any knowledge of the fact that the same resource was requested and served earlier. The HTTP protocol inherently has no knowledge of the state of any of the previous requests received and served. There are several mechanisms available to maintain state, but the HTTP protocol by itself does not maintain state. We will explain the mechanisms to maintain state later. Let me explain statelessness and the request-response pattern with a simple practical example:

1. You type the following URL: https://en.wikipedia.org/wiki/ASP.NET_MVC. This is a Wikipedia web page about ASP.NET MVC.
2. From the preceding URL, the browser fires a request to the Wikipedia server.
3. The web server at Wikipedia serves you the ASP.NET MVC web page.
4. Your browser receives that web page and presents it.
5. You request the same page again by typing the same URL (https://en.wikipedia.org/wiki/ASP.NET_MVC) and pressing Enter.
6. The browser again fires the request to the Wikipedia server.
7. Wikipedia serves you the same ASP.NET MVC web page without being aware of the fact that the same resource was requested previously by the same client.


As mentioned earlier, there are several mechanisms to maintain the state. Let us assume, for the time being, that no such mechanism is implemented here. I know that I am being too simplistic here, but this explains the point.

Client-side and server-side

It is necessary to understand the client-side and server-side of web applications and what can be done on either side. With respect to web applications, your client is the browser and your server could be the web server/application server.

The browser side is whatever happens in your browser. It is the place where your JavaScript code runs and your HTML elements reside. The server-side is whatever happens at the server at the other end of the network. The request that you fire from your browser has to travel through the wire (probably across the network), execute some server-side code, and return the appropriate response. Your browser is oblivious to the server-side technology or the language your server-side code is written in. The server-side is also the place where your C# code resides.

Let us discuss some facts to make things clear:

Fact 1: All browsers can only understand HTML, CSS, and JavaScript, irrespective of the browser vendor.
You might be using Internet Explorer, Firefox, Chrome, or any other browser. Still, the fact that your browser can understand only HTML, CSS, and JavaScript holds true. It cannot understand C#. It cannot understand Java. Nor Ruby. Only HTML, CSS, and JavaScript. This is the reason why web applications built using any technology can be accessed by the same browser.


Fact 2: The purpose of any web development framework is to convert your server-side code to HTML, CSS, and JavaScript.
This is related to the previous point. As browsers can only understand HTML, CSS, and JavaScript, all web development technologies should convert your server-side code to HTML, CSS, and JavaScript so that your browser can understand them. This is the primary purpose of any web development framework. This holds true whether you build your web applications using ASP.NET MVC, ASP.NET Web Forms, Ruby on Rails, or J2EE. Each web development framework may have a unique concept/implementation regarding how to generate the HTML, CSS, and JavaScript, and may handle features such as security and performance differently. But still, each framework has to produce HTML, because that is what your browser understands.

HTTP methods

Even though all the requests of the HTTP protocol follow the request-response pattern, the way the requests are sent can vary from one to the next. The HTTP method defines how the request is sent to the server.


The available methods in HTTP are GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, CONNECT, and PATCH. In most of the web applications, the GET and POST methods are widely used. In this section, we will discuss these methods. Later, we will discuss other HTTP methods on a need-to-know basis.

GET method

GET is a method of the HTTP protocol which is used to get a resource from the server. Requests which use the GET method should only retrieve data and should not have any other side effect. This means that if you fire the same GET request again and again, you should get the same data, and there should not be any change in the state of the server as a result of the GET request.

In the GET method, the parameters are sent as part of the request URL and therefore will be visible to the end user. The advantage of this approach is that the user can bookmark the URL and visit the page again whenever they want. An example is www.yourwebsite.com?tech=mvc6&db=sql. We are passing a couple of parameters in the preceding GET request. tech is the first parameter with the value mvc6 and db is the second parameter with the value sql. Assume your website takes the preceding parameters with values and searches in your database to retrieve the blog posts that talk about mvc6 and sql before presenting those blog posts to the user.


The disadvantage of the GET method is that, as the data is passed in clear text in the URL as parameters, it cannot be used to send sensitive information. Moreover, most browsers have limitations on the number of characters in the URL, so, when using GET requests, we cannot send large amounts of data.

POST method The POST request is generally used to update or create resources at the server. Data is passed in the body of the request. This has the following implications: You can send sensitive information to the server, as the data is embedded in the body of the request and it will not be visible to the end user in the URL. As the data is not sent through the request URL, it does not take up space in the URL and therefore it has no issues with the URL length limitations.
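To make the difference concrete, the following is a minimal console sketch that uses .NET's HttpClient to fire one request of each kind. The URLs and parameter names used here (tech, db, username, password) are hypothetical placeholders, not a real site:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public class HttpMethodsDemo
{
    public static void Main()
    {
        RunAsync().GetAwaiter().GetResult();
    }

    private static async Task RunAsync()
    {
        using (var client = new HttpClient())
        {
            // GET: the parameters travel as part of the URL itself
            HttpResponseMessage getResponse =
                await client.GetAsync("http://www.yourwebsite.com/?tech=mvc6&db=sql");
            Console.WriteLine("GET status: " + getResponse.StatusCode);

            // POST: the data travels in the request body, not in the URL
            var formData = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "username", "demo" },
                { "password", "secret" }
            });
            HttpResponseMessage postResponse =
                await client.PostAsync("http://www.yourwebsite.com/login", formData);
            Console.WriteLine("POST status: " + postResponse.StatusCode);
        }
    }
}

Notice that the GET parameters would appear in the browser's address bar and could be bookmarked, whereas the POST form data would not.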

As we have covered the fundamentals, we can now proceed to discuss ASP.NET.

What is ASP.NET?

ASP.NET is a server-side web application development framework allowing developers to build web applications, websites, and web services. It was first introduced by Microsoft in early 2002, and in these 14 years, it has undergone a lot of changes.


Basically, ASP.NET has three programming models:
ASP.NET Web Forms
ASP.NET Web Pages
ASP.NET MVC

Even though the end result of all of the preceding programming models is to produce dynamic web pages effectively, the methodologies that they follow differ from each other. Let us discuss each of these programming models to understand their principles.

ASP.NET Web Forms

Historically, when ASP.NET was first introduced, ASP.NET Web Forms was the only programming model available to programmers to develop web applications in ASP.NET. The ASP.NET Web Forms model abstracted the web so that it can maintain state even though the web is inherently stateless. It also supports an event-driven programming model at the server side. This has helped desktop application developers to have a smooth transition into web application development.

Like PHP and several other web application frameworks, ASP.NET Web Forms is a file-based framework where users access a web page by means of accessing a file on the server. The server will process your request, convert all of your server-side components in that file to HTML, and send it back to the requesting client. Each web page in ASP.NET Web Forms is represented by two files: .aspx and .aspx.cs or .aspx.vb. The .aspx file contains your front-end components: all of your ASP controls and your HTML elements. The .aspx.cs (if you are using C# as the code-behind language) or .aspx.vb (if you are using Visual Basic as the code-behind language) file contains the code for the events which happen on the web page. This was the predominant programming model prior to the arrival of ASP.NET MVC, and this programming model is still being used to maintain production applications that were written using it.


ASP.NET Web Pages

ASP.NET Web Pages is primarily targeted at small web applications where the data-processing logic is written directly on the web page.

ASP.NET MVC

ASP.NET MVC is the implementation of the MVC pattern in ASP.NET. The disadvantages of ASP.NET Web Forms, such as limited control over the generated HTML, are resolved in ASP.NET MVC. As most modern applications are controlled by client-side JavaScript libraries/frameworks, such as jQuery, KnockoutJS, and AngularJS, having complete control over the generated HTML is of paramount importance.

Let us talk a bit about the Model-View-Controller pattern and how it benefits web application development.

Model-View-Controller (MVC) pattern: This is a software architectural pattern which helps in defining the responsibility of each of the components and how they fit together to achieve the overall goal. This pattern is primarily used in building user interfaces, and is applicable in many areas, including desktop applications and web applications. But I am going to explain the MVC pattern from the context of web development. Primarily, the MVC pattern has three components:

Model: This component represents your domain data. Please note that this is not your database. The Model component can talk to your database, but the Model only represents your domain data. For example, if you are building an e-commerce web application, the Model component may contain classes such as Product, Supplier, and Inventory.

View: This component is responsible for what to present to the user. Usually, this component contains your HTML and CSS files. This may also include the layout information governing how your web application looks to the end user.


Controller: As the name implies, the controller is responsible for interacting with different components. It receives the request (through the routing module), talks to the model, and sends the appropriate view to the user.

This separation of responsibilities brings great flexibility to the web application development, allowing each area to be managed separately and independently.

Features of ASP.NET MVC

ASP.NET MVC is an opinionated application development framework that prefers some functionality to be handled in a certain unique way. Let us discuss each of the features of ASP.NET MVC, along with the benefits they bring to the table.

Convention over configuration

This is a design methodology that significantly reduces the number of decisions to be made while developing the application, thus making it simpler. If you have built an application using almost any technology, you might have used some kind of XML file in which you have to configure everything; even for simple, straightforward things, we might have to add configuration there. ASP.NET MVC embraces convention over configuration completely. It is the philosophy where you can be certain of how things are going to work without ever configuring them.


Let me give you a simple example. All Controller code resides in the Controllers folder, and Views have a separate folder for each of the Controllers. Whenever a request comes in, ASP.NET MVC knows where to find the Controller and its associated View without any configuration. This methodology results in less configuration and less time spent debugging.

Separation of concerns

As discussed earlier, ASP.NET MVC has three major components: Model, Controller, and View. This clearly separates the responsibilities so that the UI designer or UI developer can work on the View, while backend developers can work on the Model to build a data domain for the application or to talk to the database. As the duties of each of the components are clearly defined and separated, the work can be done in parallel.

Control over the generated HTML

If you have any experience in building ASP.NET Web Forms applications, you might have used ASP controls such as asp:textbox. Even though these controls have a lot of benefits, they have their cons as well. Developers cannot have complete control over the generated HTML when using these controls. Of course, you can set some properties on an ASP control which in turn set some attributes in the generated HTML, but complete control is not possible. HTML Helpers in ASP.NET MVC and Tag Helpers in ASP.NET Core provide better control over the generated HTML.

Better support for unit testing

As each of the components is separated and compartmentalized, creating unit test cases becomes easier to achieve.

Unified MVC and Web API Controller in ASP.NET Core: In earlier versions of ASP.NET MVC, different base controllers were used for MVC (System.Web.Mvc.Controller) and Web API (System.Web.Http.ApiController). In ASP.NET Core, there is only one base Controller class that supports creating both MVC controllers and Web API controllers. With respect to routing, all the controllers use the same routes. Of course, you can use convention-based routing or attribute-based routing depending on your needs.


Note about Web API: Web API is the Microsoft technology for building web services over the HTTP protocol. HTTP is not limited to serving web pages; Web API can be used for building services that expose APIs and data. The advantage of this approach is that services built using Web API can be consumed by a wide range of clients, such as browsers, mobile applications, and desktop applications.

The code for the earlier versions of ASP.NET MVC (up to ASP.NET MVC 5) is as follows:

public class ValuesController : ApiController
{
    // GET api/values
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

Code for ASP.NET Core:

public class ValuesController : Controller
{
    // GET api/values
    [HttpGet]
    public IEnumerable<string> Get()
    {
        return new string[] { "value1", "value2" };
    }
}

ASP.NET 5

ASP.NET 5 is the latest framework from Microsoft for building modern cloud-based applications using .NET. It is a cross-platform framework, so you can run applications built on ASP.NET 5 on platforms such as Linux or Mac OS X, and obviously also on Microsoft Windows. ASP.NET 5 is open source, and the complete source code is available on GitHub at https://github.com/aspnet/home. The latest version of ASP.NET MVC, ASP.NET Core, runs on the ASP.NET 5 platform.


Features of ASP.NET 5

Cross-platform support: Applications that are built on top of ASP.NET 5 can run on any platform where ASP.NET 5 is installed. This means that the applications you build on ASP.NET 5 can run on Apple OS X and Linux machines. Deploying ASP.NET Core on a Linux machine will be explained in a later chapter.

Better support for client-side development: ASP.NET 5 is designed to work seamlessly with a range of client-side frameworks, such as AngularJS, Knockout, Bootstrap, and React.js.

Summary

In this chapter, we have learned the basics of web development, including what constitutes the server-side and the client-side. We have also discussed the features of ASP.NET Core and ASP.NET 5.


2

Setting Up the Environment

In any development project, it is vital to set up the right kind of development environment so that you can concentrate on developing the solution rather than solving environment issues or configuration problems. With respect to .NET, Visual Studio is the de facto standard IDE (Integrated Development Environment) for building web applications in .NET.

In this chapter, you'll be learning about the following topics:
Purpose of an IDE
Different offerings of Visual Studio
Installation of Visual Studio Community 2015
Creating your first ASP.NET 5 project and its project structure

Purpose of IDE

First of all, let us see why we need an IDE, when you could type the code in Notepad, compile it, and execute it. When you develop a web application, you might need the following things to be productive:

Code editor: This is the text editor where you type your code. Your code editor should be able to recognize the different constructs of your programming language, such as the if condition and the for loop. In Visual Studio, all of your keywords will be highlighted in blue.


Intellisense: Intellisense is a context-aware code-completion feature available in most modern IDEs, including Visual Studio. One such example is when you type a dot after an object; the Intellisense feature lists all the methods available on the object. This helps developers to write code faster and more easily.

Build/Publish: It would be helpful if you could build or publish the application using a single click or a single command. Visual Studio provides several options out of the box to build a separate project or to build the complete solution in a single click. This makes the build and deployment of your application easier.

Templates: Depending on the type of the application, you might have to create different folders and files along with the boilerplate code. So, it'll be very helpful if your IDE supports the creation of different kinds of templates. Visual Studio generates different kinds of templates with the code for ASP.NET Web Forms, MVC, and Web API to get you up and running.

Ease of adding items: Your IDE should allow you to add different kinds of items with ease. For example, you should be able to add an XML file without any issues. And if there is any problem with the structure of your XML file, it should be able to highlight the issue and provide information to help you fix it.

Visual Studio offerings

There are different versions of Visual Studio 2015 available to satisfy the various needs of developers/organizations. Primarily, there are four versions of Visual Studio 2015:
Visual Studio Community
Visual Studio Professional
Visual Studio Enterprise
Visual Studio Test Professional

System requirements

Visual Studio can be installed on computers running the Windows 7 Service Pack 1 operating system and above. You can get to know the complete list of requirements from the following URL:
https://www.visualstudio.com/en-us/downloads/visual-studio-2015-system-requirements-vs.aspx


Visual Studio Community 2015

This is a fully featured IDE available for building desktop applications, web applications, and cloud services. It is available free of cost for individual users. You can download Visual Studio Community from the following URL:
https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

Throughout this book, we will be using the Visual Studio Community version for development as it is available free of cost to individual developers.

Visual Studio Professional

As the name implies, Visual Studio Professional is targeted at professional developers and contains features such as CodeLens for improving your team's productivity. It also has features for greater collaboration within the team.

Visual Studio Enterprise

Visual Studio Enterprise is the full-blown version of Visual Studio, with a complete set of features for collaboration, including Team Foundation Server, modeling, and testing.

Visual Studio Test Professional

Visual Studio Test Professional is primarily aimed at the testing team or people who are involved in testing, which might include developers. In any software development methodology, whether the waterfall model or agile, developers need to execute the test cases for the code they are developing.


Installing Visual Studio Community

Follow the given steps to install Visual Studio Community 2015:

1. Visit the following link to download Visual Studio Community 2015:
https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx

2. Click on the Download Community 2015 button. Save the file in a folder where you can retrieve it easily later:


3. Run the downloaded executable file:

4. Click on Run and the following screen will appear:


There are two types of installation: default and custom. The default installation installs the most commonly used features, and this will cover most developer use cases. Custom installation helps you to customize the components that you want to install:

1. Click on the Install button after selecting the installation type.
2. Depending on your memory and processor speed, the installation will take one to two hours.


3. Once all the components are installed, you will see the following Setup Completed screen:


Installing ASP.NET 5

When we install the Visual Studio Community 2015 edition, ASP.NET 5 will not be installed by default. As the ASP.NET Core application runs on top of ASP.NET 5, we need to install ASP.NET 5. There are a couple of ways to install ASP.NET 5:

Get ASP.NET 5 from https://get.asp.net/

Another option is to install from the New Project template in Visual Studio This option is a bit easier as you don't need to search and install. The following are the detailed steps: 1. Create a new project by selecting File | New | Project or using the shortcut Ctrl + Shift + N:


2. Select ASP.NET Web Application and enter the project name and click on OK:


3. The following window will appear to select a template. Select the Get ASP.NET 5 RC option as shown in the following screenshot:


4. When you click on OK in the preceding screen, the following window will appear:

5. When you click on the Run or Save button in the preceding dialog, you will get the following screen asking for ASP.NET 5 Setup. Select the checkbox, I agree to the license terms and conditions and click on the Install button:


6. Installing ASP.NET 5 might take a couple of hours. Once it is completed, you'll get the following screen:

During the process of installing ASP.NET 5 RC1 Update 1, it might ask you to close Visual Studio. If asked, please do so.

Project structure in ASP.NET 5 application

Once ASP.NET 5 RC1 is successfully installed, open Visual Studio, create a new project, and select the ASP.NET 5 Web Application template, as shown in the following screenshot:


A new project will be created and its structure will be like the following:

File-based project

Whenever you add a file or folder in your file system (inside the ASP.NET 5 project folder), the changes will be automatically reflected in your application.

Support for full .NET and .NET Core

You might have noticed a couple of references in the preceding project: DNX 4.5.1 and DNX Core 5.0. DNX 4.5.1 provides the functionality of the full-blown .NET framework, whereas DNX Core 5.0 supports only the core functionality, which would be used if you are deploying your application across platforms such as Apple OS X and Linux. The development and deployment of an ASP.NET Core application on a Linux machine will be explained in a later chapter.


The Project.json package

Usually, in an ASP.NET web application, we would have the assemblies as references, with the list of references in a C# project file. But in an ASP.NET 5 application, we have a JSON file by the name of project.json, which contains all the necessary configuration along with all of its .NET dependencies in the form of NuGet packages. This makes dependency management easier. NuGet is a package manager, provided by Microsoft, which makes package installation and uninstallation easier. Prior to NuGet, all dependencies had to be installed manually.

The dependencies section identifies the list of dependent packages available for the application. The frameworks section informs us about the frameworks supported by the application. The scripts section identifies the scripts to be executed during the build process of the application. Include and exclude properties can be used in any section to include or exclude any item.
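As an illustration, a minimal project.json for such an application might look roughly like the following; the exact package names and versions depend on the template and release you are using, so treat this as a sketch rather than the file generated on your machine:

{
  "dependencies": {
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },
  "scripts": {
    "prepublish": [ "npm install", "bower install", "gulp clean", "gulp min" ]
  },
  "exclude": [ "wwwroot", "node_modules" ]
}

Here the frameworks section lists both DNX 4.5.1 (dnx451) and DNX Core 5.0 (dnxcore50), matching the two references discussed in the previous section.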

Controllers

This folder contains all of your Controller files. Controllers are responsible for handling requests, communicating with Models, and generating Views.

Models

All of your classes representing domain data will be present in this folder.

Views

Views are files that contain your frontend components and are presented to the end users of the application. This folder contains all of your Razor View files.

Migrations

Any database-related migrations will be available in this folder. Database migrations are C# files which contain the history of any database changes made through Entity Framework (an ORM framework). This will be explained in detail in a later chapter.


The wwwroot folder

This folder acts as the root folder of the site, and it is the ideal container in which to place all of your static files, such as CSS and JavaScript files. All the files placed in the wwwroot folder can be accessed directly from their path without going through the Controller.
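As a side note, serving files from wwwroot typically requires the static files middleware to be enabled. The following is a minimal sketch, assuming the Microsoft.AspNet.StaticFiles package has been added to project.json:

public void Configure(IApplicationBuilder app)
{
    // Serves files such as /css/site.css or /js/site.js directly from wwwroot
    app.UseStaticFiles();

    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}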

Other files

The appsettings.json file is the configuration file where you can configure application-level settings. Bower, npm (Node Package Manager), and gulpfile.js are client-side technologies that are supported by ASP.NET 5 applications.
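For example, an appsettings.json holding a connection string might look something like the following sketch; the key names used here (Data, DefaultConnection) are illustrative and not mandated by the framework:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=MyAppDb;Trusted_Connection=True;"
    }
  }
}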

Summary

In this chapter, you learned about the different offerings of Visual Studio. Step-by-step instructions were provided for installing the Visual Studio Community version, which is freely available for individual developers. We also discussed the new project structure of an ASP.NET 5 application and the changes when compared to previous versions. In the next chapter, we are going to discuss controllers and their roles and functionalities. We'll also build a controller and associated action methods and see how they work.


3

Controllers

As discussed in the first chapter, all web applications receive requests from users and produce a response that is delivered back to the end user. In ASP.NET MVC, a Controller does the job of receiving the request and producing the output based on the input data.

In this chapter, you'll be learning about the following topics:
Role of the Controller in ASP.NET MVC applications
Routing introduction and concepts
Creating your first ASP.NET 5 application
Installation of the ASP.NET Core NuGet packages in your application
Creation of your first Controller and action method, which returns a simple Hello World
Adding a View and making the changes that allow your Controller to use that View
Adding a Model and passing that Model data to your View


Role of the Controller in ASP.NET MVC applications

At a high level, the Controller orchestrates between the Model and the View, and sends the output back to the user. This is also the place where authentication is usually done, through action filters. Action filters will be discussed in detail in the Filters section of this chapter. The following figure illustrates the high-level flow of a request (with the steps) in ASP.NET MVC and shows us how the Controller fits into the big picture:

The following is the sequence of events that happen at a high level when the user accesses an ASP.NET Core application:

1. The user types the URL in the browser.
2. Based on the pattern of the URL, the routing engine selects the appropriate Controller.
3. The Controller talks to the Model to get any relevant data through its action methods. Action methods are methods within a Controller class.
4. The Controller then passes the data to the View to present it in a viewable format, typically as HTML elements.
5. The View is finally delivered to the user, who views it in their browser.

Before discussing the Controller, let us discuss the fundamentals of routing concepts, as it is the routing engine that chooses the appropriate Controller and action method at runtime.


Introduction to routing

The routing engine is responsible for getting the incoming request and routing that request to the appropriate Controller based on the URL pattern. We can configure the routing engine so that it can choose the appropriate Controller based on the relevant information.

By convention, ASP.NET MVC follows this pattern: Controller/Action/Id. If the user types the URL http://yourwebsite.com/Hello/Greeting/1, the routing engine selects the Hello Controller class and the Greeting action method within the HelloController, and passes the Id value as 1. You can give default values to some of the parameters and make some of the parameters optional. The following is the sample configuration template:

"{controller=Hello}/{action=Greeting}/{id?}"
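For reference, this kind of template is registered inside the Configure method of the Startup class, along the lines of the following sketch (the same pattern is used later in this chapter when we configure MVC, only with different default values):

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Hello}/{action=Greeting}/{id?}");
});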

In the preceding configuration, we are giving three instructions to the routing engine:

Use the routing pattern controller/action/id.
Use the default values Hello and Greeting for the controller and action respectively, if values for the controller or action are not supplied in the URL.
Make the Id parameter optional so that the URL does not need to have this information. If the URL contains this Id information, it will be used; otherwise, the Id information will not be passed to the action method.

Let us discuss how the routing engine selects the Controller classes, action methods, and Id values for different URLs:

URL1: http://localhost/
Controller: Hello
Action method: Greeting
Id: no value is passed for the id parameter

Reasoning: The Hello controller is passed as the default value as per the routing configuration, as no value is passed as the Controller in the URL.


The following action method will be picked up by the routing handler when the preceding URL is passed:

public class HelloController : Controller
{
    public ActionResult Greeting(int id)
    {
        return View();
    }
}

URL2: http://localhost/Hello/Greeting2
Controller: Hello
Action method: Greeting2
Id: no value is passed for the id parameter

Reasoning: The Hello Controller will be chosen as the URL contains Hello as the first parameter, and the Greeting2 action method will be chosen as the URL contains Greeting2 as the second parameter. Please note that the default value mentioned in the configuration is picked only when no value is present in the URL. As the id parameter is optional and the URL does not contain a value for id, no value is passed to the id parameter.

The following action method, Greeting2, will be picked up by the routing handler when the preceding URL is passed:

public class HelloController : Controller
{
    public ActionResult Greeting(int id)
    {
        return View();
    }

    public ActionResult Greeting2(int id)
    {
        return View();
    }
}

URL3: http://localhost/Hello2/Greeting2
Controller: Hello2
Action method: Greeting2
Id: no value is passed for the id parameter


Reasoning: As Hello2 is passed as the first parameter, the Hello2 Controller will be selected, and Greeting2 is the action method selected since Greeting2 is passed as the second parameter. As the id parameter is optional and no value is passed for the parameter id, no value will be passed for the id.

The following action method will be picked up by the routing handler when the preceding URL is passed:

public class Hello2Controller : Controller
{
    public ActionResult Greeting2(int id)
    {
        return View();
    }
}

URL4: http://localhost/Hello3/Greeting2/1
Controller: Hello3
Action method: Greeting2
Id: 1

Reasoning: Hello3 is the Controller selected as it is mentioned as the first parameter, Greeting2 is the action method selected as the second parameter, and 1 is the value passed as the id.

The following action method will be picked up by the routing handler when the preceding URL is passed:

public class Hello3Controller : Controller
{
    public ActionResult Greeting2(int id)
    {
        return View();
    }
}

We will discuss routing in detail in a later chapter. Once the request reaches the controller, the controller will create a response by talking to the Model and may pass the data to View and the View will then be rendered to the end user.


Creating ASP.NET 5 application

It's time to get our hands dirty. Let us create a simple ASP.NET 5 application. Fire up Visual Studio and follow these steps:

1. Create a project by selecting File | New Project in Visual Studio. The first option is for creating an earlier version of the ASP.NET web application. The second option is for creating an ASP.NET Core application using the .NET Core framework. The third option is for creating an ASP.NET Core application using the full .NET framework. The difference between the second and third options is that the full .NET framework supports all the functionality of the existing .NET framework, whereas .NET Core supports only the core functionality. The advantage of using the .NET Core library is that the application can be deployed on any platform.

2. Select the Empty template from the list of ASP.NET 5 templates. The second option is for creating a Web API application (for building HTTP-based services), and the third option is for creating a web application containing some basic functionality that you can run out of the box without ever needing to write anything.


3. Once you click OK in the window as shown in the preceding screenshot, (after selecting the Empty template option) a solution will be created as shown in the following screenshot:


4. When you run the application (by pressing F5) without any changes, you'll get the simple Hello World! text on your screen as shown in the following screenshot:

We have not done any coding in this newly created application. So, have you thought about how it displays the text Hello World!? The answer lies in the Startup.cs file, which contains a class by the name of Startup. This class contains the Main method, which acts as the entry point for the web application. If you have used any of the previous versions of ASP.NET MVC, or even ASP.NET Web Forms, this would not be the case. The ASP.NET 5 runtime calls the ConfigureServices and Configure methods. If you want to configure any service, you can add it in the ConfigureServices method. Any custom configuration for your application can be added to the Configure method:

public void ConfigureServices(IServiceCollection services)
{
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.Run(async (context) =>
    {
        await context.Response.WriteAsync("Hello World!");
    });
}

There are only a couple of statements in the Configure method. The first statement tells the runtime to use the IISPlatformHandler for handling all the incoming HTTP requests. Let us leave aside async, await, and context for the moment in the second statement; we will discuss them later. In essence, the second statement tells the runtime to return Hello World! for all the incoming requests, irrespective of the incoming URL.

When you type the URL http://localhost:50140/Hello in your browser, it will still return the same Hello World!. This is the reason we got the Hello World! when we ran the application.

As we have chosen the Empty template while creating the ASP.NET 5 application, no component will have been installed. Even MVC won't be installed by default when you select the Empty template as we did. You can confirm this by opening the project.json file, where you can see that ASP.NET MVC is not mentioned in the list of dependencies:

"dependencies": {
  "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
  "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
},

So first, let us install the ASP.NET Core package for our application.


Installing the ASP.NET Core NuGet package in your application

Follow these steps to install the NuGet package of ASP.NET MVC:

1. Right-click on the project, and select the Manage NuGet Packages option:


2. Select the Include Prerelease checkbox so that the NuGet Package Manager will list all the prerelease packages. Search for MVC and you'll get the Microsoft.AspNet.Mvc package, as shown in the following result, and click on the Install button on the right-hand side:

3. Review the changes:


4. Once you click on Review Changes, the following dialog box will appear where you need to accept the license terms:

The NuGet Package Manager will download and install ASP.NET Core and will update the project.json file and the associated references. Now, your project.json file will have the updated dependencies. The second line, Microsoft.AspNet.Mvc, has been added:

"dependencies": {
  "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
  "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
  "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final"
},

Alternatively, you can also update the project.json with the NuGet package along with the version information. The NuGet Package Manager will automatically download and install them. ASP.NET Core is installed in our application. Now, we need to tell our application to use ASP.NET MVC.


This needs a couple of changes to the Startup.cs file:

1. Configure the application to add the MVC service. This can be done by adding the following line to the ConfigureServices method of the Startup class:

services.AddMvc();

2. Configure the routing so that the correct Controllers will be picked for the incoming requests based on the URL entered. The following code snippet needs to be updated in the Configure method of the Startup.cs file:

app.UseMvc(routes =>
{
    routes.MapRoute(
        name: "default",
        template: "{controller=Home}/{action=Index}/{id?}");
});

In the preceding statement, we are configuring the routes for our application. In this chapter, and in most of the chapters in this book, we will write code manually or choose an Empty template instead of relying on scaffolding templates. For those who are new to the term, scaffolding is a feature that generates all the necessary boilerplate code for the selected item (for example, the Controller) instead of you needing to write everything. Though I agree that scaffolding templates are useful and save time in generating the boilerplate code, they hide many of the details that beginners have to understand. Once you write the code manually, you'll know all the intricacies of how each of the components contributes to the big picture. Once you are strong in the fundamentals, you can use scaffolding templates to save time in writing the boilerplate code.

Our first Controller

Before creating the Controller, we need to remove the app.Run statement, as it returns Hello World! for all the incoming requests. Since we want incoming requests to be handled by the Controllers, we need to remove the following code from the Configure method of the Startup class:

app.Run(async (context) =>
{
    await context.Response.WriteAsync("Hello World!");
});


We have installed ASP.NET Core in our application, so we are all geared up to create our first ASP.NET Core Controller. Create a folder with the name Controllers and add a new Controller by selecting it from the context menu, as shown in the following screenshot:

Once you select Add | New Item, you will be shown the following list of options. We are going to add an MVC controller class to our project:


A class will be created with the following content:

public class HomeController : Controller
{
    // GET: /<controller>/
    public IActionResult Index()
    {
        return View();
    }
}

All Controllers, both MVC and Web API Controllers, inherit from the Controller base class. In earlier versions of ASP.NET MVC, MVC Controllers would inherit from the Controller class and Web API Controllers would inherit from the ApiController class.

In the preceding HomeController class, we have a single action method, Index, that returns the corresponding View. When you run the application as it is, you'll get a 500 Internal Server Error. The reason is that no View has been created for the Index action of the HomeController, and ASP.NET Core tries to search for that View. As the View is not available, it returns a 500 Internal Server Error.

Instead of creating and returning that View, let us make a simple change to this action method. Let us return a string, Hello World! I am learning MVC 6!, and change the return type from IActionResult to string:

public string Index()
{
    return "Hello World! I am learning MVC 6!";
}

Run the application. You'll see the Hello World! I am learning MVC 6! in your browser as shown in the following screenshot. Please make sure that you remove the app.Run statement in the Configure method as mentioned earlier:


Voila! We have changed the ASP.NET Core application to render the custom content instead of the boring Hello World. What we have done may seem like a marginal improvement, but we have used controllers and action methods in our ASP.NET Core application, which has brought a lot of structure and flexibility to the web application development.

The following is the sequence of steps that occur when we run the application: 1. The application runs on the URL http://localhost:50140, where 50140 is the port number selected by IIS Express to run the application on my local system. This number may vary. 2. As we have not passed any parameter, default values for the Controller and action method will be selected. In our case, HomeController will be chosen as the Controller and Index will be chosen as the action method in the HomeController. Since ID is the optional value and it is not passed, this ID parameter is ignored. 3. After the Controller and action methods are selected by the routing engine, control is passed to the action method of the selected controller. In our case, it will be the Index action method of the HomeController. 4. In the Index action method, we are returning a string, Hello World! I am learning ASP.Net MVC 6!. This text is returned from the controller, which would then return back to the user.


IActionResult

If you noticed, the default return type in the action method of the Controller was IActionResult, and then we changed the return type to string in order to return the text Hello World.... IActionResult is the interface that we can use to return different types of ActionResult, ranging from a simple string to complex JSON data, so we don't actually need to change the return type of the action method in order to return a string.

In the earlier example, I changed the return type to string to keep things simple. Now, let us make a simple change to return the string while keeping the return type (IActionResult) as it is:

// GET: /<controller>/
public IActionResult Index()
{
    return Content("Hello World! I am learning MVC 6!");
}

While returning the string, we are using the virtual Content method from the Controller class (the base class from which HomeController inherits) in the preceding action method. The purpose of the Content() method is to convert the string to the type IActionResult. Now, run the application; we should get the same result.

IActionResult is capable of returning different data types:

ContentResult: Can return a text result.
EmptyResult: Returns a null result.
FileResult: Returns a binary output to write to the response.
HttpStatusCodeResult: Provides a way to return a response with a specific HTTP status code.
JavaScriptResult: Returns a script that can be executed from the client side.
JsonResult: Returns a serialized JSON object.
RedirectResult: Redirects to another action method.
RedirectToRouteResult: Represents a result that performs a redirection by using a specified route values dictionary.
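As a small illustration, the following hypothetical action methods (the names EmployeeJson and GoHome are made up for this example) return two of these result types while keeping IActionResult as the declared return type:

public IActionResult EmployeeJson()
{
    // Returns a JsonResult containing a serialized object
    var employee = new { Name = "Jon Skeet", Designation = "Software Architect" };
    return Json(employee);
}

public IActionResult GoHome()
{
    // Returns a redirect result pointing at the Index action
    return RedirectToAction("Index");
}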


Adding Views

We were returning a simple string from the Controller. Although that explains the concept of how the Controller and action method work, it is not of much practical use. Let us create a new action method by the name of Index2:

public IActionResult Index2()
{
    return View(); // View for this 'Index2' action method
}

Now, we have created the action method that returns a View. But we still have not added the View itself. By convention, ASP.NET MVC will try to search for the View in the folder Views\{ControllerName}\{ActionMethodName}.cshtml. With respect to the preceding example, it will try to search for Views\Home\Index2.cshtml. Please note that the name of the Controller folder is Home, not HomeController; only the prefix is needed, as per convention. As this folder structure and file are not available, you'll get a 500 Internal Server Error when you try to access this action method through the URL http://localhost:50140/Home/Index2.

So, let us create the folder structure. Right-click on the solution, select Add | New Folder from the context menu, create a folder called Views, and then create a subfolder by the name Home within the Views folder:


Right-click on the Home folder, and select Add | New Item from the context menu. A dialog will appear as shown in the following screenshot. Give the file the name Index2.cshtml, as our action method name is Index2. The .cshtml extension is the Razor View engine extension (this will be discussed in detail in the View engines section of the Views chapter) used when you are writing your Views in C#.

A file by the name Index2.cshtml will be created when you click the Add button in the preceding screen with the following content:

@* ... *@ is the comment syntax in the Razor View engine. You can write any C# code within the @{ } block.
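A tiny sketch combining both constructs might look like the following; the message variable is just an illustrative name:

@* This is a Razor comment and will not be rendered *@
@{
    var message = "Hello from a Razor code block";
}
<p>@message</p>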

Let us add a simple HTML block after the generated code:

Hello! This is my first View


Now, when you run the application, you will get the following output:

The following diagram explains the request flow and how we generate the response through the View:

Adding Models

Models represent your business domain classes. Now, we are going to learn how to use Models in our Controller. Create a Models folder and add a simple Employee class to it. This is just a plain old C# class:

public class Employee
{
    public int EmployeeId { get; set; }
    public string Name { get; set; }
    public string Designation { get; set; }
}


Create a new action method, Employee, in our HomeController, create an object of the Employee Model with some values, and pass the Model to the View. Our idea is to use the Model's employee values in the View to present them to the user:

using Chapter3.Models;

public IActionResult Employee()
{
    // Sample Model - usually this comes from the database
    Employee emp1 = new Employee
    {
        EmployeeId = 1,
        Name = "Jon Skeet",
        Designation = "Software Architect"
    };

    return View(emp1);
}

Now, we need to add the respective View for this action method. Add a new Razor View file in the Views\Home folder, and add the following code snippet. Whatever comes after the @ symbol is considered Razor code. In the following code, we are trying to access the properties of the Model object that is passed to our View. In our case, Model represents the Employee object that we have constructed in our action method. You can access the object from the View using the Model keyword:

Employee Name : @Model.Name
Employee Designation: @Model.Designation


When you run the application and type the URL http://localhost:50140/Home/Employee, you'll see the following output:


Passing data from Controller to View

We have just discussed how to pass data from the Controller to the View using the Model object. While calling the View, we pass the Model data as a parameter. But there are times when you want to pass some temporary data to the View from the Controller, and this temporary data may not deserve a Model class. In such scenarios, we can use either ViewBag or ViewData. ViewData is a dictionary, and ViewBag is a dynamic representation of the same values.

Let us add the company name and company location properties using ViewBag and ViewData, as shown in the following code snippet:

public IActionResult Employee()
{
    // Sample Model - usually this comes from the database
    Employee emp1 = new Employee
    {
        EmployeeId = 1,
        Name = "Jon Skeet",
        Designation = "Software Architect"
    };

    ViewBag.Company = "Google Inc";
    ViewData["CompanyLocation"] = "United States";

    return View(emp1);
}

Make the respective changes in the View file as well so that we can display the Company name and Company location values:

Employee Name : @Model.Name
Employee Designation: @Model.Designation
Company : @ViewBag.Company
Company Location: @ViewData["CompanyLocation"]



Run the application after making the preceding changes:

ViewBag and ViewData represent the same collection, even though the entries in the collection are accessed through different methods. ViewBag values are dynamic and are resolved at runtime, whereas ViewData is accessed through a dictionary.

To test this, let us make a simple change to our View file:

Employee Name : @Model.Name
Employee Designation: @Model.Designation
Company : @ViewData["Company"]
Company Location : @ViewBag.CompanyLocation


Even though I have stored the Company value using ViewBag in the Controller, I am accessing it using ViewData. The same is the case for the Company Location value: we stored the value using ViewData in the Controller, but we are accessing it using ViewBag. When you run the application after making the preceding changes, you'll see the same result as you have seen before.

Filters

Filters in ASP.NET MVC enable you to run code before or after a particular stage in the execution pipeline. They can be configured globally, per-controller, or per-action. There are different kinds of filters, and each filter is executed at a different stage in the pipeline. For example, action filters are executed when the action method is executed. Let us use a simple example to see how an action filter (one type of filter) works.


I have created a simple Controller, DateController, where I just display the time. In its action method, I am using a predefined action filter by the name of ResponseCache, which caches the response for the duration specified in seconds. In the following code snippet, we have specified the duration as 600 seconds, so the response will be cached for 10 minutes:

public class DateController : Controller
{
    [ResponseCache(Duration = 600)]
    public IActionResult Index()
    {
        return Content(DateTime.Now.ToShortTimeString());
    }
}

When I run it for the first time, it displays the time as expected. But when you refresh the browser (which indirectly fires the request again), the time is not updated, as the response has already been cached by the application. In the following screenshot, even though the time is 7:43, the application still shows 7:40:

The following are the predefined types of filters available in ASP.NET Core.


Authorization filters

These are used for authorization and are mainly intended to determine whether the current user is authorized for the request being made.

Resource filters

These filters handle the request after authorization and are the last ones to handle the request before it leaves the filter pipeline. They are used to implement caching or for bypassing the filter pipeline.

Action filters

These wrap individual action method calls, and they can manipulate the arguments passed to the action as well as the action result returned from it.

Exception filters

These are used to manage unhandled exceptions in ASP.NET MVC.

Result filters

These wrap the individual action results, and they only run when the action method has executed successfully.
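To illustrate the action filter category, the following is a minimal sketch of a custom filter derived from ActionFilterAttribute; the class name LogActionFilter and the logging calls are only illustrative, and the exact namespaces vary between pre-release versions:

using System.Diagnostics;
using Microsoft.AspNet.Mvc.Filters;

public class LogActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext context)
    {
        // Runs just before the action method executes
        Debug.WriteLine("Executing: " + context.ActionDescriptor.DisplayName);
    }

    public override void OnActionExecuted(ActionExecutedContext context)
    {
        // Runs just after the action method has executed
        Debug.WriteLine("Executed: " + context.ActionDescriptor.DisplayName);
    }
}

Such a filter could then be applied to a single action or Controller as an attribute, for example [LogActionFilter], or registered globally via the filter collection of MvcOptions when MVC is added to the services.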

Summary

In this chapter, we built our first ASP.NET 5 application from scratch and installed ASP.NET Core in our ASP.NET 5 application. We learned how the Controller fits into the overall ASP.NET MVC application and how to build your first Controller with its action methods. We also learned how to use Models and Views from our Controller, and discussed different ways to pass data from the Controller to the View using ViewBag and ViewData. Finally, we learned about filters in ASP.NET MVC and how to make use of the predefined filters available in ASP.NET Core.


4

Views

Views are the actual output of the application that is delivered to the user. It is what the user actually sees on the screen when they access your application. All the components, be they menus, input elements, or dialog boxes, and everything else the user sees, come from your Views. If you do not provide a good user experience when users access your application, they will not care how great your application is. So, Views play a critical role when building an ASP.NET MVC application.

In this chapter, we'll cover the following topics:
The purpose of the View engine and the Razor View engine
Programming in the Razor View engine and its different programming constructs
Layout in ASP.NET Core and its features
HTML Helpers
Partial Views
Tag Helpers

The View engine and the Razor View engine

As discussed in Chapter 1, Introduction to ASP.NET Core, a browser can only understand HTML, CSS, and JavaScript. The purpose of the View engine is to generate HTML code from your View and send it to the browser so that it can understand the content. Primarily, there are two different types of View engines: the Razor View engine and the Webform View engine. Although these two View engines come out of the box with ASP.NET MVC, you can use any custom View engine.


Razor View engine

The Razor View engine is the default and recommended View engine in ASP.NET Core, and going forward, it may be the only View engine that comes out of the box when you install ASP.NET MVC. You can mix C# code and HTML code in your Razor View, and the Razor View engine is intelligent enough to distinguish between the two and generate the expected output. In some scenarios, we may have to give additional information to the Razor View to produce the appropriate results. Razor code blocks start with the @ symbol and do not require a closing @.

Programming in Razor View engine

Programming in the Razor View engine is just like programming in C#. The difference is that, in the Razor View engine, your C# code gets mixed with HTML to produce the desired HTML output.

Variables in Razor View

You can declare a variable inside a Razor block and use that variable using the @ symbol. For all the examples in this chapter, we will only present the code samples of the View.

Let's discuss this with an example:

1. Create a Controllers folder and a Controller called HomeController.
2. Create a folder called Views, a subfolder called Home, and a View file called Index.cshtml by right-clicking for the context menu and selecting Add | New Item and then MVC Razor View from the list.

The HomeController.cs file will have the following code:

public class HomeController : Controller
{
    // GET: /<controller>/
    public IActionResult Index()
    {
        return View();
    }
}

Next is the updated Razor View, where we will declare a variable and use it. The first five lines and the last two lines are simple HTML elements; we will concentrate on the emboldened lines. We create a Razor block using @{ ... } and declare a variable inside it. The Razor block ends with a closing curly bracket. The snippet Value: is treated as simple HTML text. As we would like to use the value of the Razor variable, we use @i to instruct the Razor View engine that i is not normal HTML text; it is a Razor construct and is to be treated accordingly. The complete HTML code is as follows:

<html>
<head>
    <title>Views demo</title>
</head>
<body>
    @{
        int i = 5;
    }
    Value: @i
</body>
</html>

When you run the application, you'll see the following output:

Please note that when you access the Razor variable, you need to use the @ symbol. Without it, the Razor View engine treats i as text and not as an expression.


The following screenshot is the result you will get when you access the variable without @ symbol:

The for loop

You can use most of the programming constructs available in C# in Razor Views. The following piece of code is the for loop construct, where we loop five times and print the value of the variable:

@{
    for (int i = 0; i < 5; i++)
    {
        <li>@(i + 1)</li>
    }
}

The following are a few points to note. As the for loop is Razor code, we enclose it within a block marked by the @ symbol to indicate that what follows is Razor code and not normal HTML. Whenever we use an HTML element or tag, the Razor View engine falls back to HTML mode; if you want to use a Razor expression within an HTML tag, you have to include the @ symbol again to tell the Razor View engine that what follows is Razor code and not an HTML element. This is why we use the @ symbol again in the preceding expression, even though it already sits inside a parent Razor block. The complete code for the View is as follows:

<html>
<head>
    <title>Views demo</title>
</head>
<body>
    @{
        for (int i = 0; i < 5; i++)
        {
            <li>@(i + 1)</li>
        }
    }
</body>
</html>


The while loop

The following code is the while loop implementation of the same loop. Note that the expression that increments the variable i is not prefixed with the @ symbol, as it is not within an HTML element:

@{
    int i = 0;
    while (i < 5)
    {
        <li>@(i + 1)</li>
        i++;
    }
}


if (age >= 18)
{
    ViewBag.Message = "You are eligible to Vote!";
}
else
{
    ViewBag.Message = "Sorry. You are not old enough to vote!";
}
return View();
}
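Only a fragment of the example this code belongs to appears above; the full pair of action methods might look like the following sketch (the ValidateAge name follows the View mentioned below, and the age parameter is an assumption):

public IActionResult ValidateAge()
{
    return View();
}

[HttpPost]
public IActionResult ValidateAge(int age)
{
    if (age >= 18)
    {
        ViewBag.Message = "You are eligible to Vote!";
    }
    else
    {
        ViewBag.Message = "Sorry. You are not old enough to vote!";
    }
    return View();
}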

It is to be noted that both the GET and POST action methods refer to the same View, ValidateAge.cshtml. Add the following content to the View, just above the form element:

@if (ViewBag.Message != null)
{
    @ViewBag.Message
}

Once the user submits the form, the POST action method sets the dynamic Message property in ViewBag. However, the value of this property will be null when the View is rendered as part of the GET request. If the value is not null, the message is inserted at the top of the page. When you run the application, you'll get the following output:

    Partial View Partial Views are just Views that can be reused across your application. Partial Views can be thought of as pluggable reusable blocks that you can call from anywhere and have the content of the partial view displayed.


    Consider the following structure of a web page—it's the same layout page that we used earlier, but with a couple of changes. The Latest News block is added to the Side Section and the Login block is added to the Top Section. These blocks are not restricted to the Top Section or Side Section and can be used anywhere in your application, including your Content Section as shown in the following figure:

These Partial Views are not restricted to static content and can contain form elements. In the preceding screenshot, the Latest News Partial View contains text content, and the login Partial View contains form elements to get the e-mail ID and password.

Location of Partial Views: the framework does not restrict the location of a Partial View. However, by convention, if your Partial View will be used only by one Controller, you can create that Partial View in that Controller-specific Views folder. For example, if your Partial View will only be used in the HomeController file, you can create it in the Views\Home folder.

Let's take a look at how to create a Partial View and use it. As discussed earlier, a Partial View is just like a normal View, so we will create it in the same way we create a normal View.


    Right-click on the Shared folder and select Add | New Item. By convention, like all shared content, the name of the Partial View will also start with "_"(underscore), as shown in the following screenshot:

    We are creating this Partial View based on the assumption that it can be used from anywhere in the application.

In the generated Partial View, I have added the following simple static content, a line of text and a simple table:

This content and below table is coming from partial view
<table>
    <tr>
        <th>Employee No</th>
        <th>Employee Name</th>
    </tr>
    <tr>
        <td>10012</td>
        <td>Jon Skeet</td>
    </tr>
    <tr>
        <td>10013</td>
        <td>Scott Guthrie</td>
    </tr>
</table>



Calling the Partial View

A Partial View can be called using the @Html.Partial HTML helper. In our case, we will call the Partial View from the Index2.cshtml file. The parameter that you pass is the name of the partial file; the helper searches for a Partial View by that name and renders its complete content as part of the Index2.cshtml file. The content of the Index2.cshtml file will now be as follows:

Hello. This text will be rendered in body of the layout page


    @Html.Partial("_PartialHelloWorld")

    Now, run the application and access the URL http://localhost:50132/Home/Index2. You'll see the following output:
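Html.Partial also has an overload that accepts a model object, which is useful when the Partial View is strongly typed. A minimal sketch, in which the partial name and the employee object are assumptions:

@Html.Partial("_EmployeeSummary", employee)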

View components

View components are a new feature introduced in ASP.NET Core. They are similar to Partial Views, but are more powerful. When you use Partial Views, you have a dependency on the Controller. However, when you use a View component, you do not depend on the Controller, so we establish separation of concerns and get better testability. Even though the existing Partial View HTML helper is still supported, it is preferable to use a View component whenever you want to show a reusable piece of information when you are using .NET Core.


Creating a View component

You can create a ViewComponent using any of the following approaches:

Create a class deriving from the ViewComponent class
Decorate a class with the [ViewComponent] attribute, or derive it from a class that has the [ViewComponent] attribute
Use the convention of creating a class whose name ends with the suffix ViewComponent

Whichever option you choose, the ViewComponent class should be public, non-nested, and non-abstract. Like Controllers, you can use Dependency Injection (via the constructor) in a ViewComponent as well. As a ViewComponent is separate from the Controller lifecycle, you may not be able to use action filters in ViewComponents.

There is a method called Invoke (or InvokeAsync, the asynchronous equivalent of Invoke) that returns an IViewComponentResult. This method is similar to a Controller's action method that returns a View. Let's get our hands dirty by creating a ViewComponent. Create a new folder called ViewComponents in your project and a new class called SimpleViewComponent, as shown in the following screenshot:

The SimpleViewComponent class that we created will look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Mvc;

namespace Chapter4.ViewComponents
{
    public class SimpleViewComponent : ViewComponent
    {
        public IViewComponentResult Invoke()
        {
            var data = GetSampleData();
            return View(data);
        }

        /// <summary>
        /// This is a simple private method to return some dummy data
        /// </summary>
        private List<string> GetSampleData()
        {
            List<string> data = new List<string>();
            data.Add("One");
            data.Add("Two");
            data.Add("Three");
            return data;
        }
    }
}

We just have a couple of methods: one to populate the data, and the Invoke method, where we render the View. Once you have created the ViewComponent, you will need to include the ViewComponents namespace in the Views\_ViewImports.cshtml file so that the ViewComponents are available to all the Views. The @using Chapter4.ViewComponents directive is the line added for this:

@using Chapter4
@using Chapter4.Models
@using Chapter4.ViewComponents
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

We have created the ViewComponent and made it available to all the Views. A simple action method in the HomeController file just returns the View:

public ActionResult Sample()
{
    return View();
}

    In the associated View, we can just invoke the component as shown in the following code snippet:

    This is a sample web page


    @Component.Invoke("Simple")



When you invoke the component, it will search in the following two folders:

The Views\<controller name>\Components\<view component name> folder
The Views\Shared\Components\<view component name> folder

The default View name of a View component is Default, which makes Default.cshtml the file name for the View. So, we will need to create the Default.cshtml file in the Views\Shared\Components\Simple folder, as shown in the following screenshot:

In the View (the Default.cshtml file) of the ViewComponent, we just iterate over the items in the model and display them as unordered list items, as shown in the following code:

@model IEnumerable<string>

Sample list
@foreach (var item in Model)
{
    <li>@item</li>
}


    When you run the application and access the URL (http://localhost:50132/Home/Sample), you should see the following output:

The first line, This is a sample web page, comes from the parent View file (sample.cshtml), whereas the subsequent list comes from the ViewComponent. ViewComponents are usually referenced from Views. However, if you want to call a ViewComponent directly from your Controller, you can do so. Here, I have updated the Sample action method to return the Simple ViewComponent directly instead of rendering it through a View, as follows:

public ActionResult Sample()
{
    return ViewComponent("Simple");
    //return View();
}


Thus, ViewComponents give us far more flexibility and features, such as Dependency Injection, when compared to the old HTML Partial Views. This also makes ViewComponents separately testable.
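To illustrate the Dependency Injection support, a ViewComponent can receive its dependencies through its constructor and accept arguments in Invoke. The following is a sketch only; the IEmployeeService interface and the count parameter are assumptions, not part of the sample project:

using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNet.Mvc;

public interface IEmployeeService
{
    IEnumerable<string> GetEmployeeNames();
}

public class TopEmployeesViewComponent : ViewComponent
{
    private readonly IEmployeeService _employeeService;

    // The service instance is supplied by the dependency injection container.
    public TopEmployeesViewComponent(IEmployeeService employeeService)
    {
        _employeeService = employeeService;
    }

    public IViewComponentResult Invoke(int count)
    {
        // Pass the first 'count' names to the component's Default view.
        var names = _employeeService.GetEmployeeNames().Take(count).ToList();
        return View(names);
    }
}

Such a component could then be invoked from a View with @Component.Invoke("TopEmployees", 3), with the argument passed on to Invoke.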

Tag Helpers

Tag Helpers are a new feature in ASP.NET Core; they help generate HTML elements. With HTML helpers, we write C#/Razor code to generate HTML. The disadvantage of that approach is that many frontend engineers do not know C#/Razor; they work with plain HTML, CSS, and JavaScript. Tag Helpers look just like HTML but have all the features of server-side rendering. You can even build custom Tag Helpers for your needs. Let's take a look at how to use a Tag Helper. In order to use Tag Helpers, you will need to install the Microsoft.AspNet.Mvc.TagHelpers NuGet package. Open the Package Manager Console window by selecting View | Other Windows | Package Manager Console, as shown in the following screenshot:


You can install the TagHelpers package by entering the following command in the Package Manager Console window:

Install-Package Microsoft.AspNet.Mvc.TagHelpers -Pre

    The following response will appear when you've entered the command:

Once the TagHelpers package is installed, we need to create a _ViewImports file, where we will add the TagHelpers directive so that Tag Helpers are available to our Views. Right-click on the Views folder and select the Add New Item option from the context menu; you'll see the following screen:


Add the following content to the _ViewImports.cshtml file. The first couple of lines tell ASP.NET MVC to include the necessary namespaces. The last line tells ASP.NET MVC to include all the TagHelpers available in Microsoft.AspNet.Mvc.TagHelpers. The first parameter indicates the name of the TagHelper; we have used *, which means we want to use all the Tag Helpers. The second parameter indicates the assembly where the TagHelpers are available:

@using Chapter4
@using Chapter4.Models
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"

    As we are creating the _ViewImports.cshtml file directly under the Views folder, as shown in the following screenshot, the Tag Helpers will be available for all the Views:

Had we included the _ViewImports.cshtml file under the Home folder, the Tag Helpers would be available only for the Views under the Home folder. Let's add a simple action method called Index3 in the HomeController file, and in the associated View, we will use Tag Helpers as shown in the following code:

public IActionResult Index3()
{
    ViewBag.Title = "This is Index3";
    Person person = new Person();
    return View(person);
}

Add the corresponding View (the Index3.cshtml file) for the Index3 action method with the following code:

@model Chapter4.Models.Person
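A minimal form for this View, consistent with the notes that follow, might look like the sketch below; the asp-action target and the Name and EmailID properties are assumptions:

<form asp-controller="Home" asp-action="Index3">
    <label asp-for="Name"></label>
    <input asp-for="Name" />
    <label asp-for="EmailID"></label>
    <input asp-for="EmailID" />
    <input type="submit" value="Submit" />
</form>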





The following are a few things to note in the preceding code, regarding the use of Tag Helpers:

All the form elements look just like standard HTML elements, with just a few changes in the attributes. This lets frontend developers work independently, without learning C#/Razor code, and thus more easily achieves the separation of concerns.
The first line of the preceding View indicates the type of Model data passed to the View from the Controller.
The form element has a couple of attributes named asp-controller and asp-action, which represent the Controller name and the action method name respectively.
The label and input Tag Helpers are just like HTML elements, with just an additional asp-for attribute. The values of these attributes represent the Model properties. You can take advantage of IntelliSense when entering the values for these attributes.

Creating custom Tag Helpers

ASP.NET Core provides many built-in Tag Helpers to help you create the necessary HTML elements for many scenarios. However, the built-in set is not exhaustive. Sometimes you may want to make changes in the generated HTML element, or you may want to create an HTML element with new properties, or a new HTML element altogether. You are not restricted to using only the existing Tag Helpers in an ASP.NET Core application; you can create your own Tag Helper if the existing Tag Helpers do not suit your needs. Let's create a simple Tag Helper to create an e-mail link:


There are a couple of ways to create Tag Helpers: implement the ITagHelper interface, or inherit from the TagHelper class. The TagHelper class has a Process method that you can override to write your custom Tag Helpers, and it receives a TagHelperOutput parameter, which you can use to write and generate the desired output HTML. So, it is preferable to create Tag Helpers by inheriting from the TagHelper class. Our objective is to write a custom e-mail Tag Helper so that, when someone uses an email tag with a mailTo attribute, it is converted to an anchor tag of the following form:

<a href="mailto:...">Drop us a mail</a>

The following are the steps to create the custom Tag Helper in an ASP.NET Core application. Create a folder called TagHelpers and add a new item named EmailTagHelper.cs. By convention, all Tag Helper class names should end with TagHelper, even though we can override this convention.

Once you have created the file, you will need to override the Process method to generate the desired HTML output:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.Razor.TagHelpers;

namespace Chapter4.TagHelpers
{
    public class EmailTagHelper : TagHelper
    {
        public override void Process(TagHelperContext context, TagHelperOutput output)
        {
            string emailTo = context.AllAttributes["mailTo"].Value.ToString();
            output.TagName = "a";
            output.Attributes["href"] = "mailto:" + emailTo;
            output.Content.SetContent("Drop us a mail");
        }
    }
}

The parameters used in the preceding code are explained as follows:

The context parameter gives you all the information that you supply in the Tag Helper. For example, you can get the mailTo attribute and its associated value from the context parameter. In the first line of the preceding Process method, we get the mailTo attribute value and use it to create the href attribute in the generated HTML (the anchor tag).
The output parameter is of type TagHelperOutput, which is used to generate the desired HTML output.
The output.Content.SetContent call sets the text that is to be displayed for the anchor tag.

We have created the e-mail Tag Helper. Now, we have to make it available to our Views so that we can make use of it. Edit Views\_ViewImports.cshtml to include the namespace of the TagHelpers and add the associated TagHelpers. In the following _ViewImports.cshtml file, the @using Chapter4.TagHelpers and @addTagHelper "*, Chapter4" lines are the additions:

@using Chapter4
@using Chapter4.Models
@using Chapter4.TagHelpers
@addTagHelper "*, Microsoft.AspNet.Mvc.TagHelpers"
@addTagHelper "*, Chapter4"


    The "*" symbol in the following line tells the view engine to include all the TagHelpers in the Chapter4 namespace: @addTagHelper "*, Chapter4"

You can also include only specific TagHelpers. For example, the following line will include only the EmailTagHelper, so that it is available to our Views:

@addTagHelper "Chapter4.TagHelpers.EmailTagHelper, Chapter4"

Let's create a simple action method in our HomeController. In the View of the associated action method, we will use the e-mail Tag Helper:

public IActionResult AboutUs()
{
    return View();
}

The following is the View of the preceding AboutUs action method:

About Us
We are one of the biggest electronics retail stores, serving millions of people across the nation. blah.blah.blah
If you want to hear great offers from us
<email mailTo="..."></email>

    When you run the application and access the http://localhost:50132/Home/AboutUs URL, you will see the following output:


    Here, we created an anchor tag with the mailto attribute and the email value as the href attribute value. I have opened the Developer Tools window (Press F12 to do this and select the DOM Explorer tab) to see the generated HTML.
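For instance, markup of the following form in a View (the address here is purely illustrative):

<email mailTo="contact@example.com"></email>

would be rendered by this Tag Helper as:

<a href="mailto:contact@example.com">Drop us a mail</a>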

    Summary In this chapter, you learned what a View engine is and how to build a View using the Razor view engine. We also discussed different programming constructs that you can make use of in Razor to produce the desired HTML output. Then, you learned about Layout and how to provide a consistent site structure across all of the pages in your ASP.NET MVC application. Later, we discussed how to promote re-usability using Partial Views with an example. Finally, you learned how to use Tag Helpers to produce clean HTML.


    5

Models

Data is the heart of every application. A user enters data into the application, edits the entered data, and searches the data. We could even say that an application we build is just an interface for the operations that we perform on its data. So, it is absolutely necessary for any framework to provide a mechanism that makes data operations easier and more manageable. Models in ASP.NET MVC are used to represent the business domain data. In this chapter, you'll be learning about the following topics:

Models and their purpose
Creating a simple Model and using it in the Controller and Views of an ASP.NET MVC application
Creating a Model specific to a View
Data flow in an ASP.NET MVC application in the context of Models and ViewModels
The purpose of the Entity Framework, along with its features and benefits
Adding, updating, and deleting data using the Entity Framework
Using the Entity Framework in ASP.NET MVC applications

    Models Models are simple POCO (Plain Old C# Objects) classes representing your business domain data. For an e-commerce business, model classes would be Product, Order, and Inventory. If you are building an application for a university, model classes would be Student, Teacher, and Subject. Models represent the business domain data in your application and they are not aware of the underlying database that is being used in your application. In fact, you don't even need a database to work with models.


    They can represent the data stored in an XML file or CSV file or any other data in your application. Having said that, these models could be used to interact with your database (in most cases) but they don't have any dependency to the database.

The following steps describe how to create an ASP.NET Core application that uses Models:

1. Make sure to create an ASP.NET 5 application with an empty template. Install the ASP.NET Core NuGet package and configure it, as discussed in an earlier chapter.
2. Create a Controllers folder and create a HomeController with a single Index action method.
3. Create the following folders/files for the Views:

Views: This folder is inside your project.
Views\_ViewStart.cshtml: This identifies the name of the Layout file.
Views\Shared folder: This folder holds all the shared View components for your application.
Shared\_Layout.cshtml: This file identifies what your web application structure should look like.
Views\Home folder: This folder contains all of the Views of your HomeController.
Views\Home\Index.cshtml: This is the View corresponding to the Index action method of HomeController.

Now, we have created an ASP.NET Core application with Controllers and Views.

    Let us create a Models folder in our application; this folder will contain all of your model files. In a real world application, this folder and the respective model files would reside in separate projects. For the sake of simplicity, we are keeping the Models folder and its files in the same project.


Let us create a simple Product model class in the Models folder:

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

This Product model class is no different from any other C# class and contains a few properties of the product. Update the Index action method in HomeController to use the Product model, as shown in the following code snippet. We are building the model data and passing it to the View so that it can be shown to the users. However, it is NOT recommended to build the Model data in the controller's action methods, as that violates the separation of concerns. For the sake of simplicity only, we are building the Model data in an action method:

public IActionResult Index()
{
    /* Build the products model. It is NOT RECOMMENDED to build
       models in Controller action methods like this.
       In a real world application, these models and the respective
       Data Access Layer (DAL) would be in separate projects.
       We are creating it here to make things simpler to explain */
    List<Product> Products = new List<Product>
    {
        new Product
        {
            Name = "Mobile Phone",
            Price = 300
        },
        new Product
        {
            Name = "Laptop",
            Price = 1000
        },
        new Product
        {
            Name = "Tablet",
            Price = 600
        }
    };
    return View(Products);
}

Update the corresponding Index View to use the Model data: loop through each product and show it as an unordered list item. The @model directive in the first line represents the Model metadata, that is, the type of data being passed to the View. The Model in the foreach loop represents the actual data itself, a list of products in our case:

@model List<Chapter5.Models.Product>

@foreach (var Product in Model)
{
    <li>@Product.Name</li>
}


    When you run the application, you'll get the following output:

We have successfully created a Model and have used it in our Controller and View. Let us create a comparatively complex Model class, Order (Order.cs in the Models folder), which contains a list of products and their total amount:

public class Order
{
    public int OrderId { get; set; }
    public List<Product> Products { get; set; }
    public decimal Total { get; set; }
}

Now, we have to update the Index action method to use the Order model. Once we build the list of products, we assign the products list to the Order's Products property and calculate the total cost of the order. These calculations would usually be done as part of the business layer. Again, for the sake of simplicity, we are building the data Model and doing the calculations here in the action method; this should never be the case in a real world application. The Order-related lines at the end are the changes made to the action method:

public IActionResult Index()
{
    /* Build the products model. It is NOT RECOMMENDED to build
       models in Controller action methods like this.
       In a real world application, these models and the respective
       Data Access Layer (DAL) would be in separate projects.
       We are creating it here to make things simpler to explain */
    List<Product> Products = new List<Product>
    {
        new Product
        {
            Name = "Mobile Phone",
            Price = 300
        },
        new Product
        {
            Name = "Laptop",
            Price = 1000
        },
        new Product
        {
            Name = "Tablet",
            Price = 600
        }
    };
    Order order = new Order();
    order.Products = Products;
    order.Total = Products.Sum(product => product.Price);
    return View(order);
}

The View is updated to accommodate the Model changes. The Model metadata (@model) is changed to indicate that the Order information is passed to the View instead of the list of products. Then, we show the list of products in a table format. Please note that all of the Model data (the Order object and its properties, in this case) can be accessed through the Model. For example, the Products list can be accessed through Model.Products, and the value of the Total can be obtained through Model.Total:

@model Chapter5.Models.Order
<table>
    <tr>
        <th>Product Name</th>
        <th>Price</th>
    </tr>
    @foreach (var Product in Model.Products)
    {
        <tr>
            <td>@Product.Name</td>
            <td>@Product.Price</td>
        </tr>
    }
    <tr>
        <td>Total</td>
        <td>@Model.Total</td>
    </tr>
</table>



    When you run the application, you'll see the following output:

Models specific to a View component

There are scenarios where you might want to update only a few properties in a large Model, or you might want to create a new Model based on a few Models. In such scenarios, it is better to create a new Model specific to the View. For example, let us assume that we are building a screen where you update the price of a product. This simple screen may contain only three properties: product ID, product name, and price of the product. But the product's Model may contain more than 30 properties to hold all the details of the product, such as manufacturer, color, size, and other properties. Instead of sending the complete Model with all the properties, we can create a new Model specific to this View with only a few properties: ID, Name, and Price.
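A Model specific to that price-update View might look like the following sketch (the class name is an assumption):

public class ProductPriceViewModel
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}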

Note on ViewModels

ViewModels are entities where, when you update the Model, your View gets updated automatically, and vice versa. Many online articles, and even some books, refer to ViewModels when they actually mean Models specific to a View. In ViewModels, the binding is two-way: when you update either the Model or the View, the other gets updated automatically. Let us consider a simple example: you have a form with various fields on the left-hand side and a print preview on the right-hand side. In this case, whatever you type in the form is reflected immediately on the right-hand side. In such cases, you can use pure ViewModels: as you type, your ViewModel is updated, and that ViewModel is consumed by the print preview on the right-hand side. These pure ViewModels are used in advanced JavaScript frameworks such as Knockout or AngularJS.


In Models specific to a View, we bind in only one way, from the Model to the View. Here, we send a Model specific to the View instead of the generic Model (which represents a business domain class). However, in this book, we will be referring to Models specific to a View as ViewModels for brevity. Unless otherwise specified, you should read all ViewModels as Models specific to a View. So, I am making the same mistake made by other authors.

    Data flow with respect to a Model The following block diagram shows the data flow in an ASP.NET MVC application:

    Data Source represents your application data. The application data could reside anywhere—from full-fledged RDBMS such as SQL servers to simple Excel spreadsheets, or anything in between. Models represent the business domain data for your application and are independent of the data source being used. The same model could be used with different data sources. We can use the Model as-is in our views to get data or to present it. But in some views, you might not need all the properties of the model. So, instead of sending the entire Model to the View, we create models specific to the View and use them in our View. This makes things simpler.


    The following is the high-level sequence of events that happens when you store or retrieve a record in ASP.NET Core using the Model: 1. Users enter the data in a form (created using Views) in the application. The fields in the form do not need to represent the complete model as we need only a few properties in the Model. 2. The entered data is passed to the controller where Model binding happens. Model binding is the process where the data entered in the View gets mapped to the Model or ViewModel. 3. If the data is received in the ViewModel, then we would be converting the ViewModel to the Model. 4. Finally, the Model data is stored in the data source. Till now, we have been handling only in-memory data in our application. In almost all real world applications, some form of the database will be used for data storage, access, and retrieval. In the next section, we will discuss the Entity Framework (ORM framework), which makes data access simpler from a .NET application.

Model binding

Model binding is the process of mapping the Model data coming from the View to the ViewModel parameter of the action method in the Controller. Let us consider a simple form with a couple of form fields: Name and EmailID. On submission of the form, these values are mapped to the ViewModel object of the Controller's action method. Model binding takes care of this mapping. The Model binder looks for a match in the form fields, query strings, and request parameters. In the preceding example, any class with these properties would be picked up by the ModelBinder without any issues. As the following Person class contains the Name and EmailID properties, the Model binder would not complain about using this Model to map the values entered in the form:

public class Person
{
    public string Name { get; set; }
    public string EmailID { get; set; }
}


The following code snippet shows how to use the Person class in the action method:

public ActionResult Add(Person p)
{
    return View();
}
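For instance, a form whose field names match the Person properties would be bound to the p parameter automatically; a sketch, with the action URL assumed:

<form action="/Home/Add" method="post">
    <input type="text" name="Name" />
    <input type="text" name="EmailID" />
    <input type="submit" value="Add" />
</form>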

The Entity Framework

The Entity Framework is the Object Relational Mapping (ORM) framework that enables developers to work directly on domain-specific objects for data access, instead of working on database queries. This reduces a lot of the code complexity in the data access layer of the application. Before discussing the Entity Framework and its features, let us pause for a moment and think about the steps that we follow when we try to save some information to the database using ADO.NET:

1. Construct the business domain object.
2. Create a connection to your database.
3. Open the connection.
4. Create a command object along with the command type.
5. Add the properties of your business domain object to the parameters of the command object.
6. Execute the command that saves the data into the database.

We have to follow the previously mentioned six steps for common operations such as saving a piece of data into the database. If you are using an ORM framework such as the Entity Framework, you just need three steps:

1. Construct the business domain object.
2. Create the DbContext class for your business domain object. The instance of the DbContext class represents the session with the database.
3. Save it to the database using the instance of the DbContext class.


    You might wonder how that is possible. As a matter of fact, in the background, the Entity Framework creates a connection to the database and executes the query to save the business domain object to the database. To make it simple, the Entity Framework writes all the data access code for you so that you can concentrate on achieving the business functionality of the application rather than writing the database layer code.

    The Entity Framework is independent of ASP.NET MVC As discussed earlier, the Entity Framework is an ORM framework for accessing data and is independent of ASP.NET MVC. The Entity Framework could be used in Windows Communication Foundation (WCF) services, Web API services, and even in console applications. You could use the Entity Framework in any type of application and make use of it to access data using objects. The concepts and the functionalities of the Entity Framework remain the same, irrespective of the type of application that you use it with. Now, we are going to use the Entity Framework with the console application. This allows us to concentrate on the task at hand and demonstrate the functionalities of the Entity Framework instead of working on the boilerplate code of the ASP.NET Core application. In a later part of this chapter, we will integrate the Entity Framework with the ASP.NET Core application. The latest version of the Entity Framework for the SQL server is 7.0.0 and it is still in beta at the time of writing this book. EF7 (Entity Framework 7) brings significant changes when compared to its previous version (Entity Framework 6). However, EF7 is the recommended version when building ASP.NET 5 applications and hence we will be using this version in this book. We need a database to explain many of the features of the Entity Framework. Please install SQL Server 2014 Express on your PC before continuing further. Step by step instructions for installing SQL Server 2014 Express and SQL Server Management Studio is given in Appendix A.


    Creating console applications with the Entity Framework Follow these steps to create a simple console application: 1. Select File | New Project and select Console Application. 2. Name the project and click on OK.

    Installing the Entity Framework 7 NuGet package There are two ways to install any NuGet package in your application: Using the NuGet Package Manager Using Package Manager Console


    Using the NuGet Package Manager People who prefer graphical interfaces can use this option: 1. Right-click on the console project and select Manage NuGet Packages from the context menu:


    2. Search for EntityFramework.MicrosoftSqlServer in the NuGet package and make sure the Include prerelease checkbox is checked. Click on Install once you select EntityFramework.MicrosoftSqlServer and select Latest pre-release 7.0.0rc1-final (at the time of writing this book). You can select any latest version of Entity Framework 7:

    3. Once you click on Install, the NuGet package manager will ask you to review the changes. Click on OK:


    4. Click on I Accept in the License Acceptance window:


    5. Once you click on I Accept, it will install the Entity Framework with all its dependencies. In the Output window, you will get the Finished message once the installation is complete:

    Using the Package Manager Console To install the NuGet package using the Package Manager Console, follow these steps: 1. Open the Package Manager Console window by selecting the menu option View | Other Windows | Package Manager Console.


2. Type Install-Package EntityFramework.MicrosoftSqlServer -Pre in the Package Manager Console window, as shown in the following screenshot:


    3. Once the installation is complete, a message, Successfully installed 'EntityFramework.MicrosoftSqlServer 7.0.0-rc1-final', will be shown:

    Installing Entity Framework commands We need to install the Entity Framework commands package in order to perform migration activities. Migration includes the creation of a database and its associated tables. Any changes in the schema will also be taken care of by migration:

As discussed earlier, we need to follow three steps in order to interact with the database when using the Entity Framework:

1. Create the Model classes.
2. Create the DbContext class for your business domain object. The instance of the DbContext class represents the session with the database.
3. Construct the business domain object and save it to the database using the instance of the DbContext class.

Let us discuss each of the preceding steps in detail and try to save an object to the database.


Creating Model classes

The Model classes are simple POCO objects that can be used with the Entity Framework. Let us create a POCO class for our business domain object, the Employee class in our case. I have created a new file named Employee.cs in our console application with the following content. This Employee class contains a few properties of an employee and needs no special properties or fields to work with the Entity Framework:

public class Employee
{
    public int EmployeeId { get; set; }
    public string Name { get; set; }
    public decimal Salary { get; set; }
    public string Designation { get; set; }
}

By convention, if the property name is Id or ClassName+Id, it will be considered the primary key by the Entity Framework when creating the database table. Properties with the string data type will be created as columns of type nvarchar(max). However, we can override this behavior using annotations, which will be discussed later.
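As an illustration of overriding these conventions with annotations, the following sketch uses two common attributes; the specific choices here are assumptions and not part of the sample project:

using System.ComponentModel.DataAnnotations;

public class Employee
{
    [Key]               // explicitly marks this property as the primary key
    public int EmployeeId { get; set; }

    [MaxLength(50)]     // creates an nvarchar(50) column instead of nvarchar(max)
    public string Name { get; set; }

    public decimal Salary { get; set; }
    public string Designation { get; set; }
}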

Creating the DbContext class

The instance of the DbContext class represents a session with the database, and this DbContext class does most of the heavy lifting of data access for your application. Create a new class named EmployeeDbContext with the following content:

using Microsoft.Data.Entity;
using System.Configuration;

namespace ConsoleEF7
{
    public class EmployeeDbContext : DbContext
    {
        public DbSet<Employee> Employees { get; set; }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            string connectionString = ConfigurationManager.ConnectionStrings["SqlServerExpress"].ConnectionString;
            optionsBuilder.UseSqlServer(connectionString);
            base.OnConfiguring(optionsBuilder);
        }
    }
}

    Configure it using App.Config:







There are a few things to be noted in the preceding code snippet:

Include the Microsoft.Data.Entity namespace, as the DbContext class is available in this namespace.
Our connection string is available in the App.config file. In order to read the contents of the App.config file, we include the ConfigurationManager class from System.Configuration.
In order to use the DbContext API, a class has to be created that inherits from the DbContext class, so that we can access the methods of the DbContext API. We have created the EmployeeDbContext class, which inherits from the DbContext class.
DbSet is a class that allows Entity Framework operations to be performed for a given entity type. We need to create a DbSet object for each of the entity types that we use in our application. In this example, we are using only one DbSet object, as we are working with the Employee class.

Create a migration

Migration is the process of recording all the changes to your database. Add-Migration is the Entity Framework command for adding a migration:


    1. Once you add the migration, you can revoke the changes by executing the Remove-Migration Entity Framework command.

    This is what the migrations directory looks like:


2. Update the database by issuing the Update-Database Entity Framework command, which updates the database tables as per the information available in the migration. As we installed the EntityFramework.Commands package earlier, these commands will be available for the application:

    3. Once you update the database, you can see the changes in the database by connecting to SQL Server Management Studio:

    4. Perform the database operation to save the business domain object in the database. You can create the database manually or, if the database is not available, it will create one for you.


The Main method is updated with the following code:

class Program
{
    static void Main(string[] args)
    {
        AddEmployee();
    }

    static void AddEmployee()
    {
        using (var db = new EmployeeDbContext())
        {
            Employee employee = new Employee
            {
                Designation = "Software Engineer",
                Name = "Scott",
                Salary = 5600
            };
            db.Employees.Add(employee);
            int recordsInserted = db.SaveChanges();
            Console.WriteLine("Number of records inserted:" + recordsInserted);
            Console.ReadLine();
        }
    }
}

Firstly, we construct the business domain object. Then, we add the constructed Employee object to the Employees DbSet of the DbContext class. Finally, we call the SaveChanges method of the DbContext API, which saves all the pending changes to the database. You might wonder how it can save to the database when we have not even given the connection string. Let us discuss what happens behind the scenes when we run the program. When you make changes to any of the DbSet collections, the Entity Framework checks whether the database exists. If it does not exist, it creates a new one using the pattern <namespace of the DbContext class>.<DbContext class name>. In our case, a database called EF6.EmployeeDbContext would be created. Then, it creates database tables for the entities declared in the DbSet. By convention, the Entity Framework uses the pluralized form of the entity for table names. As we have declared a DbSet for the Employee entity, the Entity Framework creates a pluralized form of Employee and creates a table named Employees.


    The creation of the database and tables happens when the following code is executed: db.Employees.Add(employee);

    When SaveChanges method is executed, the data in the Employee object will get saved to the database and returns the number of records affected. In the preceding case, it returns 1. When you run the application again, the first two steps mentioned previously will be skipped as the database and table will have already been created. When you query the database, you can see the newly inserted record:


    How the SaveChanges method works When we are making changes, the Entity Framework tracks the state of each of the objects and executes the appropriate query when SaveChanges method is called. For example, when we add an Employee object to the employees' collection (DbSet), this object is being tracked as Entity in the Added state. When SaveChanges is called, the Entity Framework creates an insert query for the same and executes it. The same is the case with updating and deleting the object. The Entity Framework sets the Entity state of the respective objects to Modified and Deleted. When SaveChanges is called, it creates and executes the Update and Delete queries.

    The preceding figure explains how the SaveChanges method works at a high-level for different types of change. We have a couple of POCO objects (Object 1 and Object 2), which have been added to the employees' DbSet object. Let us assume Object 3 and Object 4 have been modified and objects Object 5 and Object 6 are in Deleted state. When you call SaveChanges method, it creates three sets of queries. The first set of queries is for the addition of objects, resulting in insert queries getting executed against the database. In the second set of queries, Update queries are created and executed for the objects whose state is modified. Finally, Delete queries are executed for the objects for all the Deleted state objects.
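You can observe this tracking yourself through the DbContext.Entry method; a small sketch, assuming the EmployeeDbContext built earlier:

using (var db = new EmployeeDbContext())
{
    var employee = new Employee { Name = "Scott", Designation = "Software Engineer", Salary = 5600 };
    db.Employees.Add(employee);

    // The entity is tracked in the Added state until SaveChanges turns it into an INSERT.
    Console.WriteLine(db.Entry(employee).State);

    db.SaveChanges();

    // After a successful save, the entity is tracked as Unchanged.
    Console.WriteLine(db.Entry(employee).State);
}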

Updating the record

Let us try to update the salary of an inserted employee record using the Entity Framework:

static void UpdateSalary()
{
    using (var db = new EmployeeDbContext())
    {
        Employee employee = db.Employees.Where(emp => emp.EmployeeId == 1).FirstOrDefault();
        if (employee != null)
        {
            employee.Salary = 6500;
            int recordsUpdated = db.SaveChanges();
            Console.WriteLine("Records updated:" + recordsUpdated);
            Console.ReadLine();
        }
    }
}

In the preceding method, we find the employee with EmployeeId = 1. Then, we update the salary of the employee to 6500 and save the employee object to the database. Please note that, in the preceding method, we interact with the database a couple of times: once to find the correct employee record (a read operation) and again to update the record (an update operation).

static void Main(string[] args)
{
    UpdateSalary();
}

    The Main method is updated to call the UpdateSalary method. When you query the database, you should see the record with the updated information:

Deleting the record

Deleting a record is a bit tricky, as it involves setting the entity state directly. In the following method, we first get the object and set its state to Deleted. Calling the SaveChanges method then generates the delete query for the object and executes it, which in turn deletes the record from the database:

static void DeleteEmployee()
{
    using (var db = new EmployeeDbContext())
    {
        Employee employeeToBeDeleted = db.Employees.Where(emp => emp.EmployeeId == 1).FirstOrDefault();
        if (employeeToBeDeleted != null)
        {
            db.Entry(employeeToBeDeleted).State = Microsoft.Data.Entity.EntityState.Deleted;
            int recordsDeleted = db.SaveChanges();
            Console.WriteLine("Number of records deleted:" + recordsDeleted);
            Console.ReadLine();
        }
    }
}

static void Main(string[] args)
{
    DeleteEmployee();
}

    Using the Entity Framework in ASP.NET MVC applications There is not much difference between using the Entity Framework in a console application and ASP.NET MVC application. Now, we are going to build a simple application with a single screen as shown in the following image. In this screen, we will have a form where the user will enter the information about the employee; once the user submits the form, the information will be saved to the database and reflected in the following screenshots:


We can create a simple Model for the employee. We need to build a ViewModel for this View, as we need to get the employee information from the user and show a list of employees on the same screen. Let us create an ASP.NET Core application that adds an employee and shows the list of employees. The following are the step-by-step instructions to create the application for the previously mentioned objective:

1. Create an ASP.NET Core project in Visual Studio by selecting an empty ASP.NET 5 application.
2. Install the ASP.NET Core NuGet package.
3. Install the Entity Framework 7 NuGet package and the ef EntityFramework commands (for database migration), as explained earlier in this chapter.
4. Add config.json to declare the connection string of the database:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Data Source=MUGIL-PC\\SQLEXPRESS;Initial Catalog=Validation;Integrated Security=True"
    }
  }
}

5. Update project.json to include EntityFramework 7 and the EntityFramework commands. The EntityFramework.MicrosoftSqlServer and EntityFramework.Commands entries are the changes:

{
  "version": "1.0.0-*",
  "compilationOptions": {
    "emitEntryPoint": true
  },
  "dependencies": {
    "Microsoft.AspNet.IISPlatformHandler": "1.0.0-rc1-final",
    "Microsoft.AspNet.Mvc": "6.0.0-rc1-final",
    "Microsoft.AspNet.Server.Kestrel": "1.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "EntityFramework.Commands": "7.0.0-rc1-final"
  },
  "commands": {
    "web": "Microsoft.AspNet.Server.Kestrel",
    "ef": "EntityFramework.Commands"
  },
  "frameworks": {
    "dnx451": { },
    "dnxcore50": { }
  },
  "exclude": [
    "wwwroot",
    "node_modules"
  ],
  "publishExclude": [
    "**.user",
    "**.vspscc"
  ]
}

6. Configure MVC in the Startup class (Startup.cs):

In the constructor, we build the configuration by reading the config.json file.
In the ConfigureServices method, we add the MVC service and the Entity Framework service to the services collection.
In the Configure method, we configure the MVC routing.

using Microsoft.AspNet.Builder;
using Microsoft.AspNet.Hosting;
using Microsoft.AspNet.Http;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Validation.Models;
using Microsoft.Data.Entity;
using Microsoft.Extensions.PlatformAbstractions;

namespace Validation
{
    public class Startup
    {
        public IConfigurationRoot Configuration { get; set; }

        public Startup(IHostingEnvironment env, IApplicationEnvironment appEnv)
        {
            var builder = new ConfigurationBuilder()
                .AddJsonFile("config.json")
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit http://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddEntityFramework()
                .AddSqlServer()
                .AddDbContext<EmployeeDbContext>(options =>
                {
                    options.UseSqlServer(Configuration.Get<string>("Data:DefaultConnection:ConnectionString"));
                });
            services.AddMvc();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app)
        {
            app.UseIISPlatformHandler();
            app.UseMvc(routes =>
            {
                routes.MapRoute(
                    name: "default",
                    template: "{controller=Employee}/{action=Index}/{id?}");
            });
        }

        // Entry point for the application.
        public static void Main(string[] args) => WebApplication.Run<Startup>(args);
    }
}

7. Create Models and DbContext classes.
8. Create the Models folder and add the Employee model class and EmployeeDbContext class.
9. Create the Employee Model class (Employee.cs in the Models folder):

public class Employee
{
    public int EmployeeId { get; set; }
    public string Name { get; set; }
    public string Designation { get; set; }
    public decimal Salary { get; set; }
}

10. Create EmployeeDbContext (EmployeeDbContext.cs in the Models folder):

using Microsoft.Data.Entity;
using Microsoft.Extensions.Configuration;

namespace Validation.Models
{
    public class EmployeeDbContext : DbContext
    {
        public IConfigurationRoot Configuration { get; set; }
        public DbSet<Employee> Employees { get; set; }

        public EmployeeDbContext()
        {
            var builder = new ConfigurationBuilder()
                .AddJsonFile("config.json")
                .AddEnvironmentVariables();
            Configuration = builder.Build();
        }

        protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        {
            optionsBuilder.UseSqlServer(Configuration.Get<string>("Data:DefaultConnection:ConnectionString"));
            base.OnConfiguring(optionsBuilder);
        }
    }
}

11. Create ViewModels. As we are going to show a list of employees and the form to add an employee on the same screen, we are going to build a Model specific to this View. This Model will contain information about the list of employees and the employee to be added.
12. Create the ViewModels folder and add the EmployeeAddViewModel class:

using System.Collections.Generic;
using MVCEF7.Models;

namespace MVCEF7.ViewModels
{
    public class EmployeeAddViewModel
    {
        public List<Employee> EmployeesList { get; set; }
        public Employee NewEmployee { get; set; }
    }
}

This ViewModel has a couple of properties: EmployeesList and NewEmployee. EmployeesList will contain the list of employees; this list is fetched from the database. NewEmployee will hold the employee information entered by the user.


13. Create Controllers to handle the incoming requests. Create a Controllers folder and add the EmployeeController class with a couple of action methods: one for GET and another for POST. The Index action method corresponding to the GET request will be called when you access the URL (http://localhost/Employee/Index) or when you run the application. The POST Index action method will be called when you submit the form. The GET Index action method is as follows:

public IActionResult Index()
{
    EmployeeAddViewModel employeeAddViewModel = new EmployeeAddViewModel();
    using (var db = new EmployeeDbContext())
    {
        employeeAddViewModel.EmployeesList = db.Employees.ToList();
        employeeAddViewModel.NewEmployee = new Employee();
    }
    return View(employeeAddViewModel);
}

In the preceding GET Index action method, we create the ViewModel object and pass it to the View. The following is the POST Index action method:

[HttpPost]
public IActionResult Index(EmployeeAddViewModel employeeAddViewModel)
{
    using (var db = new EmployeeDbContext())
    {
        db.Employees.Add(employeeAddViewModel.NewEmployee);
        db.SaveChanges();
        //Redirect to the Index GET method
        return RedirectToAction("Index");
    }
}

We get the NewEmployee property of the ViewModel, which contains the information entered by the user, and save it to the database. Once we save the employee information to the database, we redirect control to the GET Index action method, which again shows the form to enter the employee information and the list of employees in table format.


14. Add the Views folder:

1. Create Views\_ViewStart.cshtml with the following content:

@{
    Layout = "_Layout";
}

2. Create Views\Shared\_Layout.cshtml with the following content:

<html>
<head>
    <title>@ViewBag.Title</title>
</head>
<body>
    @RenderBody()
</body>
</html>
3. Create Views\Employee\Index.cshtml with the following content:

@model MVCEF.ViewModels.EmployeeAddViewModel
@*
    For more information on enabling MVC for empty projects,
    visit http://go.microsoft.com/fwlink/?LinkID=397860
*@
@{
}

<div>
    @using (Html.BeginForm("Index", "Employee", FormMethod.Post))
    {
        <div>
            @Html.LabelFor(Model => Model.NewEmployee.Name)
            @Html.TextBoxFor(Model => Model.NewEmployee.Name)
        </div>
        <div>
            @Html.LabelFor(Model => Model.NewEmployee.Designation)
            @Html.TextBoxFor(Model => Model.NewEmployee.Designation)
        </div>
        <div>
            @Html.LabelFor(Model => Model.NewEmployee.Salary)
            @Html.TextBoxFor(Model => Model.NewEmployee.Salary)
        </div>
        <input type="submit" value="Submit" />
    }
</div>

<div>
    List of employees:
    <table>
        <tr>
            <th>ID</th>
            <th>Name</th>
            <th>Designation</th>
            <th>Salary</th>
        </tr>
        @foreach (var employee in Model.EmployeesList)
        {
            <tr>
                <td>@employee.EmployeeId</td>
                <td>@employee.Name</td>
                <td>@employee.Designation</td>
                <td>@employee.Salary</td>
            </tr>
        }
    </table>
</div>



    In the preceding Index view, we create a form where we get the employee information from the user in the topmost div. In the next div, we show the list of employees in a tabular format. Once we create all the folders and the files, the project structure should look like the following:


Database migration

We have created the business entity, the Employee class. Now, we can proceed with the migration. Migration is a two-step process: in the first step, we create the migration files. This can be done by executing the following command from the command prompt, in the context of the project folder:

dnx ef migrations add InitialMigration

    This command will create the migration files in your project, as shown in the following screenshot:


    Then execute the following command to create the database:
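With the rc1 tooling used in this book, that command would typically be the following, run from the project folder like the earlier migrations command:

dnx ef database update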


    This command will read the migration files created in the previous step and create the database along with the associated tables:

Run the application. You will get the following screen, where the user can enter the employee information in the form. As we are using a strongly typed Model in our View, the form takes the default values for all the properties. Name and Designation are properties of type string, so their default values are empty strings; Salary is of type decimal, and the default value of a decimal is 0, hence 0 is shown for the Salary field when the form is loaded. As there are no records yet, the List of employees table shows 0 records:


    When you enter the information in the form and submit it, the information gets saved in the database and all the database records in the Employees table will be presented as follows:

Summary

In this chapter, we learned what a Model is and how it fits into an ASP.NET MVC application. We created a simple Model, built model data in a Controller, passed the Model to the View, and showed the data using the View. We learned about Models specific to a View and discussed the flow of data with respect to Models. We also learned about the Entity Framework, an ORM framework from Microsoft, and how it simplifies database access from your .NET application. We created a simple console application in which we inserted, updated, and deleted records. Finally, we built an ASP.NET Core application that uses a Model, a ViewModel, and the Entity Framework.


    6

Validation

We can never rely on the data entered by users. Sometimes they might be unfamiliar with the application and may enter incorrect data unknowingly. At other times, malicious users may want to corrupt the application by entering inappropriate data. In either case, we need to validate the input data before storing it for further processing. In this chapter, you'll be learning about the following topics:

Different types of validation
Server-side validation with an example
Client-side validation with an example
Unobtrusive JavaScript validation using the jQuery unobtrusive libraries, where we don't have to write separate code for validation

In an ideal world, users would enter valid data in a proper format in your application. But, as you might realize, the real world is not so ideal. Users will enter incorrect data in your application. As developers, it is our responsibility to validate the user input in our application. If the entered input is not valid, you need to inform the user what has gone wrong, so that the user can correct the input data and submit the form again.


Validation can be done on the client-side, on the server-side, or at both ends. If the validation is done before sending the data to the server, it is called client-side validation. For example, if the user does not enter any data in a mandatory field, we can catch that at the client-side itself, without sending the form data to the server. JavaScript is the language most commonly used for client-side validation.

    If the validation is done at the server-side (sending the form data to the server), it is called server-side validation. For instance, you might want to validate data entered by the user against the data in the database. In this case, it is preferable to do server-side validation as we cannot have all the data in the database at the client-side.

Client-side and server-side validation

In the real world, it's not a case of either server-side or client-side validation. We can have both types of validation at the same time. In fact, it is recommended to validate the data at both ends to avoid unnecessary processing.


    The preceding figure shows the validation is being performed at both the client-side and the server-side. If the data is not entered into the required field, we can catch that issue at the client-side itself. There is no need to send the data to the server to finally find out that there is no data entered. Once all the required data is entered, the data is sent back to the server to validate the entered data based on some business logic. If the validation fails, the form data is sent again to the browser with the error message so that the user can send the data again. We have covered enough theory about the need for validation and the types of validations typically used in the application. Let us get our hands dirty by adding validation to the application that we built in the previous chapter. The following screenshot is the form that we have built in the previous chapter. There is nothing fancy in this form—just three fields. When a user enters the data in the form, the data is stored in the database and the entire employee information is fetched back and shown in a tabular format.


    In the existing application that we have built, we do not show any message to the user even when the user does not enter any information in any of the fields and submits it. Instead, we silently store the default values for the fields (empty values for string types and 0.00 for decimal types) as shown in the following screenshot:

But this should not be the case. We should inform the user that the entered data is not valid and ask the user to correct the input data.

Server-side validation

Let us continue with the application that we built in the previous chapter. To do server-side validation, we need to do the following:

1. Add Data Annotation attributes to the ViewModel class. The input data is validated against this metadata and the model state is updated automatically.
2. Update the View to display a validation message for each of the fields. The span tag helper with the asp-validation-for attribute will be used to display the validation error message.
3. Update the controller action method to verify the model state. If the model state is valid, we insert the data into the database. Otherwise, the ViewModel is updated and the View is rendered again with the validation error messages, so that the user can correct the input data and submit the form again.


Updating View models with the Data Annotation attribute

Data Annotation attributes define the validation rules for the properties of a Model/ViewModel. If the input data does not match the attribute definition in the model, the validation will fail, which in turn makes the associated model state invalid. There are several Data Annotation attributes available to validate the data. The following are the most commonly used ones:

Required: This attribute indicates that the property is required.
Range: This attribute defines the minimum and maximum constraints.
MinLength: This defines the minimum length a property must have in order for the validation to succeed.
MaxLength: As the name implies, this attribute defines the maximum length of the property. If the length of the property value exceeds the maximum length, the validation will fail.
RegularExpression: We can use a regular expression for data validation if we use this attribute.

As Data Annotation attributes are available in the System.ComponentModel.DataAnnotations namespace, we need to include this namespace. The following is the updated View model code:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;
using System.Threading.Tasks;
using Validation.Models;

namespace Validation.ViewModels
{
    public class EmployeeAddViewModel
    {
        public List<Employee> EmployeesList { get; set; }

        [Required(ErrorMessage = "Employee Name is required")]
        public string Name { get; set; }

        [Required(ErrorMessage = "Employee Designation is required")]
        [MinLength(5, ErrorMessage = "Minimum length of designation should be 5 characters")]
        public string Designation { get; set; }

        [Required]
        [Range(1000, 9999.99)]
        public decimal Salary { get; set; }
    }
}

We have added Data Annotation attributes for all three properties: Name, Designation, and Salary. The ErrorMessage parameter specifies the message to be displayed when the validation fails. If validation fails and no ErrorMessage is mentioned, the default error message will be displayed.

Updating the View model to display the validation error message

For each of the fields, we have added a span tag in which the error message is displayed in red when the validation fails. When the validation succeeds, no error message is displayed. The value of the asp-validation-for attribute represents the field name for which the validation error message has to be displayed. For example, we have used the span tag with the asp-validation-for attribute set to Name, which tells ASP.NET MVC to display the validation error message for the Name field:
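As a rough sketch, one such field and its validation message placeholder might look like the following; the red styling and surrounding div are assumptions, and the property name comes from the ViewModel above:

<div>
    @Html.LabelFor(m => m.Name)
    @Html.TextBoxFor(m => m.Name)
    <span asp-validation-for="Name" style="color:red"></span>
</div>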





Updating the controller action method to verify the model state

The model state is automatically updated based on the Data Annotation attributes specified on our View model and the input data. We verify whether the model state is valid in the following Index method, which is a POST action method. If the model state is valid (the validation succeeded), we save the entered data to the database. If the validation fails, the ModelState is set to invalid automatically; we then populate the ViewModel with the entered data and render the View again, so that the user can correct the input data and re-submit it:

[HttpPost]
public IActionResult Index(EmployeeAddViewModel employeeAddViewModel)
{
    if (ModelState.IsValid)
    {
        using (var db = new EmployeeDbContext())
        {
            Employee newEmployee = new Employee
            {
                Name = employeeAddViewModel.Name,
                Designation = employeeAddViewModel.Designation,
                Salary = employeeAddViewModel.Salary
            };
            db.Employees.Add(newEmployee);
            db.SaveChanges();
            //Redirect to the Index GET method
            return RedirectToAction("Index");
        }
    }

    using (var db = new EmployeeDbContext())
    {
        employeeAddViewModel.EmployeesList = db.Employees.ToList();
    }
    return View(employeeAddViewModel);
}


When you run the application after making the aforementioned changes and submit the form without entering any values, error messages are displayed beside the fields, as shown in the following screenshot. Note that, even in the case of a validation error, we still display the employees' data in the table below the form; this is achieved by the code after the if block in the previous snippet.

There are a few things to note about the preceding validation and its error messages:

If the validation fails, error messages are displayed as expected.
If there is more than one validation for a field, only one error message is displayed at a time. For example, we have a couple of validations for the Designation field, the Required and MinLength attributes. If no data is entered for the field, only the required-field error message is displayed. Only when the required-field error is resolved (by entering some characters in the field) will the second validation error message be displayed.
If no error message is specified and the validation fails, the default error message is displayed. We have not given any error message for the Salary field, so when validation fails for that field, ASP.NET MVC displays a default error message based on the field name and the type of validation failure.


The preceding figure depicts the high-level sequence of events in server-side validation, described as follows:

1. The user enters invalid data.
2. Based on the Data Annotation attributes in the View model, the model state is updated automatically. This happens during the model binding process, where the data in the view is mapped to the data in the Model or View model.
3. In the controller's action method, we verify the model state.
4. If the model state is valid, we save the entered data to the database.
5. If the model state is not valid, we render the View again with the validation error messages, so that the user can correct the input data and submit the form again with valid input.

Client-side validation

There are scenarios where we don't need to go to the server to validate the input data. In the preceding example of server-side validation, we do not need to go to the server to verify whether the user has entered data for the Name field. We can validate this at the client-side itself. This prevents round trips to the server and reduces the server load.


We are going to use JavaScript to validate the data at the client-side. JavaScript is a high-level, interpreted language that is primarily used in client-side programming. These days, JavaScript is also used on the server-side as part of Node.js.

We are going to make a couple of changes in our View (the Index.cshtml file) to validate the form at the client-side:

1. Changes in the form: add an id attribute to all the span tags so that we can access these HTML elements to display the error messages. On submission of the form, call a JavaScript function to validate the input data.
2. Add a script HTML element and create a JavaScript function to validate the input data.

We call the validateForm JavaScript function on submission of the form. If the validateForm function returns true, the data is sent to the server; otherwise, the data is not sent. We have added the id attribute to all the span tags so that we can identify them and display the validation error messages there; a sketch of the changed form follows.
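The following is a rough sketch of these changes; the span ids and the onsubmit wiring are illustrative assumptions, and the field names come from the ViewModel used earlier:

@using (Html.BeginForm("Index", "Employee", FormMethod.Post,
    new { onsubmit = "return validateForm();" }))
{
    <div>
        @Html.LabelFor(m => m.Name)
        @Html.TextBoxFor(m => m.Name)
        <span id="validationName" style="color:red"></span>
    </div>
    <div>
        @Html.LabelFor(m => m.Designation)
        @Html.TextBoxFor(m => m.Designation)
        <span id="validationDesignation" style="color:red"></span>
    </div>
    <div>
        @Html.LabelFor(m => m.Salary)
        @Html.TextBoxFor(m => m.Salary)
        <span id="validationSalary" style="color:red"></span>
    </div>
    <input type="submit" value="Submit" />
}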



We have added a JavaScript function to validate all three fields. We get the values of the three fields and store them in separate variables. Then we verify whether the value of each variable is null or empty. If a value is empty, we get the span element for the respective field and set its text content to the validation error message; a sketch of such a function follows.
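A minimal sketch of such a validateForm function, assuming the input ids generated by the helpers above and the illustrative span ids, might look like this:

<script type="text/javascript">
    function validateForm() {
        // Read the current values of the three input fields.
        var name = document.getElementById("Name").value;
        var designation = document.getElementById("Designation").value;
        var salary = document.getElementById("Salary").value;
        var isValid = true;

        // Show a message next to each empty field and block the submit.
        if (name === null || name === "") {
            document.getElementById("validationName").textContent = "Employee Name is required";
            isValid = false;
        }
        if (designation === null || designation === "") {
            document.getElementById("validationDesignation").textContent = "Employee Designation is required";
            isValid = false;
        }
        if (salary === null || salary === "") {
            document.getElementById("validationSalary").textContent = "Salary is required";
            isValid = false;
        }
        return isValid;
    }
</script>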

    When you run the application and submit the form without entering the data, you'll get the error message generated from the client-side itself without ever going to the server.

In real-world applications, we would not hand-code the validation logic in JavaScript. Instead, most applications use unobtrusive validation, where we do not write JavaScript code to validate each of the fields. Simply adding the respective JavaScript libraries will do. You might wonder how the fields get validated without ever writing the validation code. The magic lies in the data-* attributes added to the input HTML elements based on the Data Annotation attributes. The jQuery unobtrusive library gets the list of fields for which data-* attributes are added and validates them. Run the application and press Ctrl + U to see the source code. The source code will look something like the following:
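As an illustration of what the generated markup typically looks like (attribute order and additional attributes may differ), an input rendered for the Name property with the Required annotation carries attributes along these lines:

<input type="text" id="Name" name="Name" value=""
       data-val="true"
       data-val-required="Employee Name is required" />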


Different data-* attributes are added for different kinds of Data Annotation attributes. For the fields to be validated, the data-val attribute is set to true. For the properties marked as required in the View model, the data-val-required attribute carries the error message of the associated property as its value.

Implementation

There will be a layout file (_Layout.cshtml) that defines the layout structure of your web application. As the JavaScript libraries are going to be used in all the pages, this is the right place to add common functionality such as unobtrusive validation. Just add the JavaScript libraries to the layout file (_Layout.cshtml) so that they are available to all the View files; a sketch of such a layout follows.



<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
</head>
<body>
    <div>
        @RenderBody()
    </div>
    <!-- jQuery, jquery.validate, and jquery.validate.unobtrusive script references go here -->
</body>
</html>
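One way to reference these libraries is from a CDN; the versions and URLs below are assumptions for illustration, and the exact references used by the application may differ:

<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.2.0.min.js"></script>
<script src="https://ajax.aspnetcdn.com/ajax/jquery.validate/1.14.0/jquery.validate.min.js"></script>
<script src="https://ajax.aspnetcdn.com/ajax/mvc/5.2.3/jquery.validate.unobtrusive.min.js"></script>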





There is no change to the View except for the removal of the JavaScript function we wrote earlier for validating the fields. The view starts as follows:

@model Validation.ViewModels.EmployeeAddViewModel






    List of employees:



<table>
    <tr>
        <th>ID</th>
        <th>Name</th>
        <th>Designation</th>
        <th>Salary</th>
    </tr>
    @foreach (var employee in Model.EmployeesList)
    {
        <tr>
            <td>@employee.EmployeeId</td>
            <td>@employee.Name</td>
            <td>@employee.Designation</td>
            <td>@employee.Salary</td>
        </tr>
    }
</table>


The preceding diagram depicts the unobtrusive client validation process:

1. Data Annotations are added to the Model/ViewModel.
2. The View takes the Model/ViewModel and generates the HTML.
3. The generated HTML contains data-* attributes:
   For the fields for which the Required attribute is set, the data-val-required attribute is created with the error message as its value.
   For the fields with the MinLength Data Annotation attribute, the data-val-minlength attribute is set with the error message as its value.


   For the Range Data Annotation, the data-val-range attribute is set with the error message as its value; data-val-range-max represents the maximum value in the range and data-val-range-min represents the minimum value.
4. The jQuery unobtrusive validation library reads the elements with these data-* attributes and performs the client-side validation. This allows the developer to avoid writing separate validation code in JavaScript, as everything is resolved by the configuration itself.

Summary

In this chapter, we learned about the need for validation and the different kinds of validation available. We discussed how client-side and server-side validation work, along with the pros and cons of each type. We made code changes to validate the input data on the server-side, then used JavaScript to validate the input data at the client-side. Finally, we used the jQuery unobtrusive library to do the client-side validation without ever writing JavaScript code to validate the input. In the next chapter, we will discuss the routing principle and how to customize it. In an earlier chapter, we saw the basics of routing in an ASP.NET 5 application; now we are going to explore the topic in depth.


    7

Routing

Routing is one of the important concepts in an ASP.NET MVC application, as it takes care of incoming requests and maps them to the appropriate controller actions. In this chapter, we are going to learn about the following topics:

Using the MapRoute method to configure routing
Different types of routing with examples: convention-based and attribute-based
Using HTTP verbs in attribute-based routing

We briefly discussed routing in Chapter 3, Controllers. In this chapter, we are going to discuss routing along with the several options available to customize it in ASP.NET Core.

Convention-based routing

The routing engine is responsible for mapping incoming requests to the appropriate action method of a controller. In the Configure method of the Startup class, we have mapped the following route:

app.UseMvc(routes =>
{
    routes.MapRoute(name: "default",
        template: "{controller=Employee}/{action=Index}/{id?}");
});


The MapRoute method has two parameters:

name: This represents the name of the route, as we could configure multiple routes for the same application.
template: This signifies the actual configuration for the route. There are three parts to this configuration value. As we are supplying default parameters, if the values are not passed, it will take the default parameter values.

{controller=Employee}: The first value acts as the name of the controller, and we use the Employee controller as the default controller when the controller value is not available in the URL.
{action=Index}: The Index action method will act as the default action method, and the second parameter from the URL will be taken as the action method name.
{id?}: By specifying "?" after the id parameter, we are saying that id is an optional parameter. If a value is passed as the third parameter, the id value will be taken; otherwise, it will not be considered.

There is another method with similar functionality: the app.UseMvcWithDefaultRoute() method configures the route "{controller=Home}/{action=Index}/{id?}". But we have used the earlier method to show that we can customize the route as we want.

Let us see a few examples and observe how the routing engine works. We assume the following route template for the examples that follow:

"{controller=Employee}/{action=Index}/{id?}"

Example 1

URL: http://localhost:49831/

In this URL, we have not passed a value for the controller, action, or id. Since we have not passed anything, the routing engine takes the default values for the controller and the action. So, the URL is converted by the routing engine into the following URL:

http://localhost:49831/Employee/Index


Example 2

URL: http://localhost:49831/Employee/

In this URL, we have passed the value for the controller (the first parameter), which is Employee, whereas we did not pass anything for the action method (the second parameter) or id (the third parameter). So, the URL is converted into the following URL by taking the default value for the action method:

http://localhost:49831/Employee/Index

Example 3

URL: http://localhost:49831/Manager/List

The routing engine takes the first parameter, Manager, as the controller name and the second parameter, List, as the action method name.

Example 4

URL: http://localhost:49831/Manager/Details/2

We have passed all three parameters in this URL. So, the first parameter value, Manager, is considered the controller name, the second parameter value, Details, the action method name, and the third parameter value, 2, the id value.

When defining the map route, we used the MapRoute method with a couple of parameters. The first parameter, name, represents the name of the route, and the second parameter, template, represents the URL pattern to be matched along with the default values:

routes.MapRoute(name: "default",
    template: "{controller=Employee}/{action=Index}/{id?}");


There are other overloaded variations of the MapRoute method. The following is another commonly used overload, where the incoming URL pattern and the default values are passed for different parameters. The name of the route is FirstRoute and this route is applied to URLs starting with Hello. The default values for the controller and the action are Home and Index2 respectively:

routes.MapRoute(name: "FirstRoute",
    template: "Hello",
    defaults: new { controller = "Home", action = "Index2" });

You can define any number of route maps for your ASP.NET MVC application; there is no restriction or limit on them. Let us add another route map to our application. We have added the route map called FirstRoute to our application:

public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "FirstRoute",
            template: "Hello",
            defaults: new { controller = "Home", action = "Index2" });

        routes.MapRoute(name: "default",
            template: "{controller=Employee}/{action=Index}/{id?}");
    });
}

And we have added another controller named HomeController, with a couple of simple action methods returning different strings:

public class HomeController : Controller
{
    // GET: /<controller>/
    public IActionResult Index()
    {
        return Content("Index action method");
    }

    public IActionResult Index2()
    {
        return Content("Index2 action method");
    }
}


When you try to access the application through the URL http://localhost:49831/Hello, both route maps, FirstRoute and default, match the URL pattern. Which route map, do you think, will get applied in this scenario? The routing engine maps the incoming URL based on the following factors:

1. The matching pattern.
2. The order defined in the routing configuration.

The first factor is an obvious one: for a route map to be picked up by the routing engine, the pattern of the incoming URL should match the template defined in the route map. The second factor is subtle but important: if more than one route map matches the incoming URL, the routing engine picks the first one as defined in the configuration. For example, if the incoming URL matches both the FirstRoute and default maps, the routing engine picks the FirstRoute map, as it was defined first in the configuration.


If the routing engine cannot map the incoming URL to any of the configured routes, we get an HTTP 404 error, meaning that no resource could be found. You can see the status (200 means OK, 404 means no resource found) by looking at the Network tab in the developer tools, as shown in the following screenshot:

Attribute-based routing

Until now, we have used convention-based routing. In convention-based routing, we define the routing templates (which are just parameterized strings) in a centralized place, and these are applicable to all the available controllers. The problem with convention-based routing is that, if we want to define different URL patterns for different controllers, we need to define a custom URL pattern that is common to all the controllers. This makes things difficult. There is another option for configuring the routing engine: attribute-based routing. In attribute-based routing, instead of configuring all the routing in a centralized location, the configuration happens at the controller level. Let us see an example of attribute-based routing.


First, let us remove the convention-based routing that we created earlier in the Configure method in the startup.cs class file:

public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();
    app.UseMvc();
    //app.UseMvc(routes =>
    //{
    //    routes.MapRoute(name: "FirstRoute",
    //        template: "Hello",
    //        defaults: new { controller = "Home",
    //            action = "Index2" });

    //    routes.MapRoute(name: "default",
    //        template:
    //            "{controller=Employee}/{action=Index}/{id?}");
    //});
}

Then, we can configure the routing in the controller itself. In the following code, we have added the routing configuration for the home controller that we created earlier:

namespace Validation.Controllers
{
    public class HomeController : Controller
    {
        // GET: /<controller>/
        [Route("Home")]
        public IActionResult Index()
        {
            return Content("Index action method");
        }

        [Route("Home/Index3")]
        public IActionResult Index2()
        {
            return Content("Index2 action method");
        }
    }
}


We have used the Route attribute on the action methods of the controller. The value passed to the Route attribute acts as the URL pattern. For example, when we access the URL http://localhost:49831/Home/, the Index method of HomeController is called. When we access the URL http://localhost:49831/Home/Index3, the Index2 method of HomeController is called. Please note that the URL pattern and the action method name do not need to match; in the preceding example, we are calling the Index2 action method but the URL pattern uses Index3. When you use attribute-based routing and convention-based routing together, attribute-based routing takes precedence.

Route attribute at the controller level

You will notice that, with the URL patterns for the action methods Index and Index2, we repeat the controller name, Home, in both URL patterns, Home and Home/Index3. Instead of repeating the controller name (or any common part of the URL) at the action method level, we can define it at the controller level. In the following code, the common part of the URL (Home) is defined at the controller level and the unique part is defined at the action method level. When a URL is mapped to an action method of the controller, both route parts (at the controller level and at the action method level) are merged and matched, so there is no difference between the routes defined earlier and those that follow. If you want to pass a parameter in attribute-based routing, you can include it within curly braces; in the following example, we did this for the SayHello action method. The URL pattern http://localhost:49831/Home/Index3 will still get mapped to the Index2 method of HomeController:

namespace Validation.Controllers
{
    [Route("Home")]
    public class HomeController : Controller
    {
        // GET: /<controller>/
        [Route("")]
        public IActionResult Index()
        {
            return Content("Index action method");
        }

        [Route("Index3")]
        public IActionResult Index2()
        {
            return Content("Index2 action method");
        }

        [Route("SayHello/{id}")]
        public IActionResult SayHello(int id)
        {
            return Content("Say Hello action method" + id);
        }
    }
}

Passing routing values in HTTP action verbs in the Controller

Instead of passing the routing values as Route attributes, we can even pass the routing values in HTTP action verb attributes such as HttpGet and HttpPost. In the following code, we have used the HttpGet attribute to pass the route values. For the Index method, we did not pass any value, and hence no route value gets appended to the route value defined at the controller level. For the Index2 method, we are passing the value Index3, and Index3 gets appended to the route value defined at the controller level. Please note that only URLs with the GET method will be mapped to these action methods. If you access the same URL pattern with the POST method, these routes will not match and hence these action methods will not be called.

namespace Validation.Controllers
{
    [Route("Home")]
    public class HomeController : Controller
    {
        // GET: /<controller>/
        [HttpGet()]
        public IActionResult Index()
        {
            return Content("Index action method");
        }

        [HttpGet("Index3")]
        public IActionResult Index2()
        {
            return Content("Index2 action method");
        }
    }
}

Route Constraints

Route Constraints enable you to constrain the type of values that you pass to the controller action. For example, if you want to restrict a value to the int type, you can do so. The following is one such instance:

[HttpGet("details/{id:int?}")]
public IActionResult Details(int id)
{
    return View();
}

ASP.NET 5 (ASP.NET Core) even supports default parameter values in the route template, so that you can pass default parameters:

[HttpGet("details/{id:int = 123}")]
public IActionResult Details(int id)
{
    return View();
}
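There are other built-in constraints besides int. The alpha and guid constraints used below are standard ASP.NET Core route constraints; the action names and templates are illustrative and not part of the sample application:

// Matches only when the name segment contains alphabetical characters.
[HttpGet("byname/{name:alpha}")]
public IActionResult DetailsByName(string name)
{
    return View();
}

// Matches only when the key segment is a valid GUID.
[HttpGet("bykey/{key:guid}")]
public IActionResult DetailsByKey(Guid key)
{
    return View();
}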

Summary

In this chapter, we learned about routing and how it works. We learned about the different kinds of routing available, and discussed convention-based routing and attribute-based routing with different examples. We also discussed route constraints and the default parameter values that can be passed. In the next chapter, we are going to see how we can make the application look good.


    8

Beautifying ASP.NET MVC Applications with Bootstrap

You might have created an application with all the required functionalities. It may even work perfectly, without any issues, in all scenarios. But the success of your application depends on how well your users can access it. Your application should look good (if not great) and be user-friendly in order for it to be a success. In this chapter, you are going to learn about the following topics:

The role of HTML and CSS in an ASP.NET Core application
Characteristics of front-end frameworks and the different frameworks available
Bootstrap and its grid system, along with its features
CSS classes available in Bootstrap for form elements such as input and select elements
CSS classes for different types of HTML elements such as table
Using Bootstrap in your ASP.NET Core application

Before discussing how to make our application look good, let us take a step back and discuss the roles that HTML and CSS play in your application.

Knowing HTML and CSS

As mentioned earlier, all browsers can only understand HTML, CSS, and JavaScript, so the applications that you build should produce output as HTML, CSS, and JavaScript. This holds true for web applications built using other technologies, such as Java or Ruby on Rails. Having said that, we will only discuss HTML and CSS.


HTML (Hyper Text Markup Language) is used to structure the content of your web pages. For example, you can add content in a title tag so that it will be available in a browser's tab or window. Let us see an example. Open any text editor (you can even use Notepad), type the following HTML content, and save the file as Bootstrap.html. Please note the extension, .html:

<!DOCTYPE html>
<html>
<head>
    <title>Beautify your ASP.NET MVC applications using Bootstrap</title>
</head>
<body>
    <h1>Bootstrap</h1>
    <p>Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web.</p>
</body>
</html>



The first line says that the HTML content follows the HTML5 standard (the latest and current version of HTML) and should be interpreted as such. The html tag tells the browser that it is the start of an HTML document. The information in the head tag represents metadata, describing the web page rather than being part of the page content itself; examples include the page title, a description of the page, and keywords for search engine optimization. All the content of the body tag is shown in the browser's main window. In the preceding HTML code, we have made Bootstrap the heading and the rest of the content a paragraph. Open the file in the browser and you should see something like the following screenshot:


You will notice that the content placed in the title tag is shown as the browser's tab title, the heading content is made bold and bigger, and the paragraph starts on a new line. CSS is all about styling. You can use CSS to customize how each of the elements in your web page looks. You can change the color of a button, the font of the heading text, the border size of a table, and so on. You can include CSS styles either inline or using a separate file. If you are using inline CSS, it should be within a style tag. If you are using external CSS, you can make use of a link tag and refer to the external CSS file. CSS is nothing but a set of rules used for presentation. Each rule consists of two parts: a selector to which a declaration has to be applied, and a declaration containing the styling information. The styling information has a property and a value for that property. Let us take the following simple CSS rule:

h1 {
    color: #0094ff;
}

    This CSS rule states that all the heading text should be in a blue color. h1 is the selector, which tells the browser the following declaration has to be applied for all h1 tags. In the declaration, we are setting the blue color (#0094ff is blue in hexadecimal format).


The following is the updated HTML file, with the CSS style added in the head:

<!DOCTYPE html>
<html>
<head>
    <title>Beautify your ASP.NET MVC applications using Bootstrap</title>
    <style type="text/css">
        h1 {
            color: #0094ff;
        }
    </style>
</head>
<body>
    <h1>Bootstrap</h1>
    <p>Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile first projects on the web.</p>
</body>
</html>



    When you open the file in a browser after making the style changes, you will get the following output:


Having said that, you need to create CSS rules to make the elements in your web application look good. But creating different styles for each element is a time-consuming and tedious task, so you can choose any of the front-end frameworks available. Any such framework should have the following characteristics:

Consistency: The elements and controls used by your application should be familiar to the user. For example, if you use a drop-down list, the user should be able to select only one value from it.
Responsive: The application that you build should look good across devices of all sizes. The code that you write should adapt to the screen size of your users' devices.
Mobile-friendly: This is related to the preceding point. These days, many applications are accessed from mobile devices rather than desktops or laptops. We have to make sure that the application we build looks great on mobile devices.
Customizable: If you are going to use a front-end framework, it should be customizable according to your needs. For example, if you want to update the heading text color, you should be able to update or override the CSS file to make the necessary changes.
Easy to get started: The learning curve of your front-end framework should be minimal, as you should be spending your time on delivering value to the customer: building and delivering the solution. We are not getting paid to learn a new fancy front-end framework.

There are a few front-end frameworks available, such as Bootstrap, Foundation, and PureCSS. In this chapter, we are going to use the Bootstrap framework, as it is the most widely used front-end framework.


Bootstrap

Bootstrap is the most popular HTML, CSS, and JS framework for developing responsive, mobile-first projects on the web, and you can access it at http://getbootstrap.com/.

    Bootstrap has many features that meet the various needs of a web application. It has different CSS classes for each of the HTML elements so that you can use it without touching any CSS code. However, if you wish to override it, you can do so. Let us look at each of the features of Bootstrap in brief.

Bootstrap Grid system

The grid system in Bootstrap helps you create a responsive layout for your application. This feature makes your application look great on devices of all shapes and sizes, including mobile devices. Bootstrap provides a fluid grid system that scales up to twelve columns as the device or viewport size increases. You can think of the grid system as being like columns in an Excel sheet (like the one in the following screenshot):


    We can combine multiple columns to create a wider column. In the second row in the preceding screenshot, we have used a single column (using class .col-md-1) twelve times. In the third row, we are using a CSS class (.col-md-2) to create six wider columns instead of twelve shorter columns. Likewise, we are creating a lesser number of wider columns.
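As a hedged illustration of this (the cell text is arbitrary, and the classes assume Bootstrap 3 naming), a row of twelve one-unit columns and a row of six two-unit columns could be written like this:

<div class="container">
    <div class="row">
        <div class="col-md-1">1</div>
        <div class="col-md-1">1</div>
        <!-- ... repeated for a total of twelve .col-md-1 columns ... -->
    </div>
    <div class="row">
        <div class="col-md-2">2</div>
        <div class="col-md-2">2</div>
        <!-- ... repeated for a total of six .col-md-2 columns ... -->
    </div>
</div>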

Forms

All form controls receive global styling when you use Bootstrap in your application. The following is one such example (the screenshot is taken from the Bootstrap site):


    The following is the code for the preceding form. Related HTML elements are grouped together in the form. For example, the labels for the Email and Email address input text boxes are grouped together. The same is the case for Password and File input.
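A minimal approximation of that markup is sketched below, assuming the classic Bootstrap 3 form example; the ids and placeholder text are illustrative. Each label and its input are grouped with the form-group class:

<form>
    <div class="form-group">
        <label for="exampleInputEmail1">Email address</label>
        <input type="email" class="form-control" id="exampleInputEmail1" placeholder="Email">
    </div>
    <div class="form-group">
        <label for="exampleInputPassword1">Password</label>
        <input type="password" class="form-control" id="exampleInputPassword1" placeholder="Password">
    </div>
    <div class="form-group">
        <label for="exampleInputFile">File input</label>
        <input type="file" id="exampleInputFile">
    </div>
    <button type="submit" class="btn btn-default">Submit</button>
</form>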

Inline forms

Inline forms are forms where all the form elements are on the same line (as shown in the following screenshot). We need to add the class form-inline to the form element.


Horizontal forms

In horizontal forms, each element group is on a separate line: the Email label and Email input on one line, and the Password label and Password input on another (as shown in the following screenshot):

    To apply the horizontal form styling, we just need to add the class form-horizontal as in the following code. Just like other forms, we need to group the related HTML elements together by applying the CSS class form-group:
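A rough sketch of such a form, assuming Bootstrap 3 grid classes for the label and input widths, is shown below:

<form class="form-horizontal">
    <div class="form-group">
        <label for="inputEmail3" class="col-sm-2 control-label">Email</label>
        <div class="col-sm-10">
            <input type="email" class="form-control" id="inputEmail3" placeholder="Email">
        </div>
    </div>
    <div class="form-group">
        <label for="inputPassword3" class="col-sm-2 control-label">Password</label>
        <div class="col-sm-10">
            <input type="password" class="form-control" id="inputPassword3" placeholder="Password">
        </div>
    </div>
</form>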


Table CSS classes

For basic styling, add the base CSS class table to the table HTML element, as shown in the following screenshot:

Striped tables

In a striped table, the backgrounds of alternate rows are the same color. In the following screenshot, the background colors of the first and third rows are the same. You can apply the table-striped class to the table HTML element to get the striped behavior.


Hover tables

When you move your mouse over any of the rows in the table, the background color of that row changes. This hover behavior can be achieved by applying the CSS class table-hover, along with the class table, to the HTML table element.

Bordered tables

We can have a bordered table (as in the following screenshot) if we apply the CSS class table-bordered to the table element.


Contextual classes in table

There are times when you want to highlight rows based on the data value. For example, if you are showing inventory data in tabular format, you might want to highlight rows with a red background for items whose count is less than a stipulated count. In this case, you can apply the danger class to table rows to highlight them in red. There are different types of contextual classes available for highlighting in different colors, and you can apply these classes to individual cells instead of complete rows; a short sketch follows.
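The following is a hedged sketch combining the table classes discussed so far with a contextual row class; the data values are illustrative:

<table class="table table-striped table-hover table-bordered">
    <tr>
        <th>Item</th>
        <th>Count</th>
    </tr>
    <tr>
        <td>Keyboard</td>
        <td>120</td>
    </tr>
    <tr class="danger">
        <td>Mouse</td>
        <td>3</td>
    </tr>
</table>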


Buttons

There are different styling options available for making buttons appear in different colors. For all the buttons, the base button class btn has to be applied:

The btn-primary button class highlights the button in blue, whereas the btn-success button class highlights the button in green. The preceding screenshot shows the different options for styling a button; a small markup sketch follows.
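As a hedged illustration (the captions are arbitrary), buttons like those above are produced by markup along these lines:

<button type="button" class="btn btn-default">Default</button>
<button type="button" class="btn btn-primary">Primary</button>
<button type="button" class="btn btn-success">Success</button>
<button type="button" class="btn btn-danger">Danger</button>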


Button sizes

You can change the size of a button according to your needs. The btn-lg class can be applied to make a button large, and the btn-sm class can be applied to make a button small. The following are the different options available to control the size of a button.

Contextual colors

Based on the context, you might want to change the color of the text. For example, you might want to make the text appear in green if the previous operation was successful, and show an error message in red for an unsuccessful operation. In such scenarios, you can use these helper CSS classes to show the text in different colors.


    We have seen various features of Bootstrap. Now, let us use Bootstrap to make our application look good. Basically, we have two major components in our view—a form at the top to get the input from the user and a table at the bottom to display the results in a table.

Using Bootstrap in your ASP.NET MVC application

There are different ways to get Bootstrap for your application:

Refer to the Bootstrap files available on a CDN (Content Delivery Network) from your application
Download the source code
Install with Bower
Compile with Grunt

Of these options, the easiest is the first one. Open the layout file (_Layout.cshtml) in the application that we created earlier. Include the CSS files at the top (within the head tag) and the scripts at the bottom (at the end of the body tag):



<!DOCTYPE html>
<html>
<head>
    <title>@ViewBag.Title</title>
    <!-- Bootstrap CSS reference goes here -->
</head>
<body>
    <div>
        @RenderBody()
    </div>
    <!-- jQuery and Bootstrap JavaScript references go here -->
</body>
</html>
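One way to wire this up is to reference the CDN-hosted files directly. The version numbers below are assumptions for illustration (Bootstrap 3 era); the references used by the application may differ:

<!-- In the head tag -->
<link rel="stylesheet"
      href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">

<!-- At the end of the body tag -->
<script src="https://ajax.aspnetcdn.com/ajax/jQuery/jquery-2.2.0.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/js/bootstrap.min.js"></script>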




    Improving Performance of an ASP.NET Core Application




Minification of JavaScript and CSS files and their combination

The time taken to download the related resources of a web page is directly proportional to the size of the files being downloaded. If we reduce the size of the files without changing the actual content, it will greatly increase performance. Minification is the process of changing the content of a file in order to reduce its size. Removing extraneous white space and changing variable names to shorter names are both common techniques used in the minification process. Popular JavaScript libraries such as jQuery and front-end frameworks provide minified files by default, and you can use them as they are. In the following screenshot, I have downloaded the compressed version of jQuery. You can minify the custom JavaScript and CSS files that you have written for your application:

Bundling is the process by which you can combine two or more files into one. Bundling and minification, when used together, reduce the size of the payload, thereby increasing the performance of the application. You can install the Bundler & Minifier Visual Studio extension from the following URL:

https://visualstudiogallery.msdn.microsoft.com/9ec27da7-e24b-4d56-8064-fd7e88ac1c40


Once you have installed this Visual Studio extension, you can select the files that you want to bundle and minify, and then choose the Bundler & Minifier option from the context menu brought up by right-clicking, as shown in the following screenshot:

    Once you select the Bundle and Minify Files option, it will ask you to save the bundled file as shown in the following screenshot:


You can name the file as you wish and save it. Once you save the file, another file is created in your solution; in our case, it is the bundleconfig.json file:

This file will have the information on the input files and the bundled output file. The following is one such example:

[
  {
    "outputFileName": "wwwroot/css/site.min.css",
    "inputFiles": [
      "wwwroot/css/site.css"
    ]
  },
  {
    "outputFileName": "wwwroot/js/site.min.js",
    "inputFiles": [
      "wwwroot/js/site.js"
    ],
    "minify": {
      "enabled": true,
      "renameLocals": true
    }
  },
  {
    "outputFileName": "wwwroot/css/bundle.css",
    "inputFiles": [
      "wwwroot/css/site.css",
      "wwwroot/css/StyleSheet1.css"
    ]
  }
]

    You can use this bundled file in your application, resulting in increased performance.

The caching process

Caching is the process of copying data and holding it in memory instead of getting the data again from an external resource, such as a network, file, or database. The data used in caching is ephemeral and can be removed at any time. As we access the data directly, caching can greatly improve the performance of the application. Caching can be done in any of the layers: client-side at the browser, at a proxy server (or at some middleware), or at the web/application server. For database-layer caching, we might not need to do any custom coding. Based on the type of database server being used, you might need to make some configuration changes; however, most databases these days are powerful enough to cache data as and when it is needed.


Client-side caching

We can cache at the client-side if we add the appropriate HTTP response headers. For example, if we want to cache all the static assets, such as CSS, images, and JavaScript files, we can add the max-age directive to the Cache-Control response header:

In the preceding screenshot of the Developer Tools window's Network tab, when the requests are fired again, we get an HTTP 304 (Not Modified) response. This means the same files are not transferred back across the wire a second time, as they are available in the browser itself. Implementing browser caching for static files is pretty easy and involves just a couple of steps: adding the dependency and configuring the application. Add the following NuGet package to the list of dependencies in the project.json file:

"Microsoft.AspNet.StaticFiles": "1.0.0-rc1-final"


Add the following namespaces to the Startup.cs file and configure the application to use those static files:

using Microsoft.AspNet.StaticFiles;
using Microsoft.Net.Http.Headers;

public void Configure(IApplicationBuilder app)
{
    app.UseIISPlatformHandler();

    app.UseMvc(routes =>
    {
        routes.MapRoute(name: "default",
            template: "{controller=Employee}/{action=Index}/{id?}");
    });

    app.UseStaticFiles(new StaticFileOptions()
    {
        OnPrepareResponse = (context) =>
        {
            var headers = context.Context.Response.GetTypedHeaders();
            headers.CacheControl = new CacheControlHeaderValue()
            {
                MaxAge = TimeSpan.FromSeconds(60),
            };
        }
    });
}

Response caching

In response caching, cache-related HTTP headers are added to HTTP responses when MVC actions are returned. The Cache-Control header is the primary HTTP header that gets added to the response.


The preceding diagram shows response caching in action. In the first request, we are calling a Controller's action method; the request comes from the client and passes through the proxy server, actually hitting the web server. As we have added a response cache, any subsequent requests will not be forwarded to the web server, and the responses will be returned from the proxy server itself. This reduces the number of requests to the web server, which in turn reduces the load on the web server.

Caching the response of the Controller's action method is pretty easy. Just add the ResponseCache attribute with a duration parameter. In the following action method, we have added a response cache with a duration of 60 seconds, so that, for the next 60 seconds, if any requests come in again, the responses will be returned from the proxy server instead of going to the web server:

[ResponseCache(Duration = 60)]
public IActionResult Index()
{
    EmployeeAddViewModel employeeAddViewModel = new EmployeeAddViewModel();
    using (var db = new EmployeeDbContext())
    {
        employeeAddViewModel.EmployeesList = db.Employees.ToList();
    }
    return View(employeeAddViewModel);
}

The web/application layer

The web/application layer is composed of whatever happens between receiving the request from the client and sending back the response (or querying the DB layer to get the required data). Most of the web/application layer will be in a server-side language, such as C#, so when you try to optimize the web/application layer, you need to incorporate the best practices of ASP.NET MVC and C#.

No business logic in Views

A View is what is rendered to the browser, and it can contain presentation logic. Presentation logic represents where and how the data is to be displayed. ViewModels (actually, models specific to the View) are models that hold the data for a particular view. Neither Views nor ViewModels should contain any business logic, as this violates the separation of concerns principle. Look at the following Razor View code. We are just looping through the list in the model and presenting the data in tabular format, nothing else:

List of employees:
<table>
    <tr>
        <th>ID</th>
        <th>Name</th>
        <th>Designation</th>
        <th>Salary</th>
    </tr>
    @foreach (var employee in Model.EmployeesList)
    {
        <tr>
            <td>@employee.EmployeeId</td>
            <td>@employee.Name</td>
            <td>@employee.Designation</td>
            <td>@employee.Salary</td>
        </tr>
    }
</table>


In some code bases, you might find a repository being used from a ViewModel; this should never be the case. Please be extra cautious about what goes into View/ViewModel code.

Using asynchronous logging

Try to use asynchronous logging wherever possible to improve performance. Most logging frameworks, such as Log4Net, provide an option for logging asynchronously. In ASP.NET Core, you can bring in logging through dependency injection. The following is a typical example of using a logging framework in an MVC Controller:

public class EmployeeController : Controller
{
    private readonly IEmployeeRepository _employeeRepo;
    private readonly ILogger _logger;

    public EmployeeController(IEmployeeRepository employeeRepository,
        ILogger<EmployeeController> logger)
    {
        _employeeRepo = employeeRepository;
        _logger = logger;
    }

    [HttpGet]
    public IEnumerable<Employee> GetAll()
    {
        _logger.LogInformation(LoggingEvents.LIST_ITEMS, "Listing all employees");
        return _employeeRepo.GetAll();
    }
}

The DB layer

Though the DB layer is not directly related to ASP.NET Core applications, it is the developer's responsibility to take complete ownership of the application's performance, and that includes taking care of the database's performance as well. We will now look at a few of the areas in the DB layer that we need to consider when improving the performance of an ASP.NET Core application.


Understanding the queries generated by the ORM

In most applications these days, we use an Object-Relational Mapper (ORM), such as Entity Framework or NHibernate. As you might know, the primary objective of an ORM is to enable you to write the data access layer using domain-based classes and objects instead of writing queries directly. However, this does not mean that you never need to understand the basics of the SQL queries generated, or the optimization of those queries. Sometimes the query generated by Entity Framework may not be optimized, so a better practice is to run a profiler, analyze the generated queries, and tune them as per your needs. You can use interceptors in Entity Framework to log the SQL queries.

Using classic ADO.NET if you really want to

ASP.NET Core is just a web development framework, and it is not tied to any data access framework or technology. If the ORM that you use in your application does not deliver the performance that you expect, you can use classic ADO.NET and manually write the queries/stored procedures.

Return only the required data

Always return only the data that you need: nothing more, nothing less. This approach reduces the data that we send across the wire (from the database server to the web/application server). For example, we would not use the following:

Select * from employees

Instead, we would use this:

Select FirstName, LastName from employees

    The latter query would get only the required fields from the table, and, thus, only the required data is passed across to the calling client.
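The same principle applies when querying through the Entity Framework; as a hedged sketch against the Employee entity used earlier, a projection returns only the columns you ask for rather than entire rows:

// Selects only the Name and Designation columns from the Employees table.
var names = db.Employees
    .Select(e => new { e.Name, e.Designation })
    .ToList();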


Fine tuning the indices

Beginners tend to add indices whenever they face a problem with the database. Adding an index to every column in the table is bad practice and will reduce performance. The right approach is to take the list of queries that are executed most frequently. Once you have this list, try to fine-tune them: remove unnecessary joins, avoid correlated subqueries, and so on. Only when you have tried and exhausted all query tuning options at your end should you start adding indices. The important thing to note here is that you should add indices only on the required number of columns.

Using the correct column type and size for your database columns

When an integer is all a column needs to hold, use an int data type; don't use a larger type such as double. This will save a lot of space if you have lots of rows in your table.

Avoiding correlated subqueries

Correlated subqueries use values from their parent query, which in turn makes them run row by row. This can significantly affect query performance. The following is one such example of a correlated subquery:

SELECT e.Name,
       e.City,
       (SELECT DepartmentName
        FROM EmployeeDepartment
        WHERE ID = e.DepartmentId) AS DepartmentName
FROM Employee e
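Assuming the same two tables, the query above can usually be rewritten with a join, which lets the database process the data set-wise instead of row by row; this rewrite is a sketch rather than the book's own example:

SELECT e.Name,
       e.City,
       d.DepartmentName
FROM Employee e
LEFT JOIN EmployeeDepartment d
    ON d.ID = e.DepartmentId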

Generic performance improvement tips

Here are a couple of pointers to improve the overall application performance in an ASP.NET Core web application.


Avoiding the Response.Redirect method

When we want to redirect the user to another URL, we can call the Response.Redirect method with the URL passed as a parameter. But there is a small problem with this approach: the browser has to send the request to the server again, which needs another round trip. So, within an MVC application, it is better to avoid the Response.Redirect method and use the RedirectToAction method instead, where possible.
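For example, a hypothetical save action (not taken from the book's sample project) could redirect back to a controller action like this:

    public IActionResult Save(EmployeeViewModel model)
    {
        if (!ModelState.IsValid)
        {
            return View(model);
        }

        // ... persist the employee here ...

        // Redirect using the controller/action names instead of a hard-coded URL.
        return RedirectToAction("Index", "Employee");
    }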

Using StringBuilder

If your application involves a lot of string manipulation, it is preferable to use StringBuilder instead of the usual string concatenation. String concatenation creates a new string object for each operation, whereas StringBuilder works on a single object. We can achieve significantly better performance when we use StringBuilder for large string manipulation operations.
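The difference is easy to see in a small sketch; the employeeNames collection below is a hypothetical IEnumerable<string> used only for illustration:

    using System.Text;

    // Concatenation allocates a brand new string on every iteration...
    string csv = string.Empty;
    foreach (var name in employeeNames)
    {
        csv += name + ",";
    }

    // ...whereas StringBuilder keeps appending to a single internal buffer.
    var builder = new StringBuilder();
    foreach (var name in employeeNames)
    {
        builder.Append(name).Append(',');
    }
    string csvFromBuilder = builder.ToString();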

Summary

In this chapter, we have learned how to analyze the performance of web applications and which layers to target when improving the performance. Then we discussed how to improve the performance in each of the layers: the UI layer, the web/application layer, and the DB layer.


    12

ASP.NET Core Identity

Security is essential to all types of applications, including web applications. Would you use Facebook if anyone could update your status by impersonating you? If that were possible, then no one would come back to Facebook. From this example, we can see that security is not so much a feature as it is a necessity for all applications. In this chapter, we are going to learn about the following topics:

    Authentication and authorization
    ASP.NET Identity
    How to implement security in an ASP.NET Core application using ASP.NET Identity with Entity Framework

When we talk about the security of an application, we primarily want to prevent any unauthorized access, meaning that only the people who have access to the information should be able to access it: nothing more, nothing less. Before proceeding further, I would like to clarify some of the core concepts regarding security.

Authentication

Authentication is the process of validating whether the user has access to the system. In any application, users will be authenticated first. This can be achieved by asking the user to enter their user ID and password.


Authorization

Authorization is the process where we verify whether the user has access to the requested resource. They might have legitimate access to the system, but they might not have access to the requested resource because they do not have the required rights. For example, only the admin user can access the configuration page of the application, whereas normal users should not be allowed to use this page.

ASP.NET Identity provides several features for securing the application. Let us consider the following simple scenario, where a user tries to access the Secure Page, a page to which only authorized people should have access. As the user is not logged in, they will be redirected to the Login Page so that we can authenticate and authorize them. Upon successful authentication, the user is redirected to the Secure Page. If, for any reason, we cannot authenticate and authorize the user, we redirect them to the "Access denied" Page:

    ASP.NET Core Identity is a membership system that enables you to secure the application easily, and which has features such as adding login functionality to your application. The following are the steps that we need to follow in order to use ASP.NET Identity (with Entity Framework) for our application:


1. Add the relevant dependencies to the project.json file.
2. Create an appsettings.json file and store the database connection string.
3. Create an ApplicationUser class and ApplicationDbContext class.
4. Configure the application to use ASP.NET Identity.
5. Create ViewModels for registration and login.
6. Create the necessary controller and associated action methods and Views.

Adding the relevant dependencies to the project.json file

If you want to use ASP.NET Identity with Entity Framework in your application, you need to add the following dependencies:

    "EntityFramework.Commands": "7.0.0-rc1-final",
    "EntityFramework.MicrosoftSqlServer": "7.0.0-rc1-final",
    "Microsoft.AspNet.Authentication.Cookies": "1.0.0-rc1-final",

Create an appsettings.json file and store the database connection string

Create a file with the name appsettings.json at the root level of the project, as shown in the following screenshot:

Store the following connection string in appsettings.json. This connection string will be used by ASP.NET Identity to store the data in the relevant tables:

    {
      "Data": {
        "DefaultConnection": {
          "ConnectionString": "Server=(localdb)\\mssqllocaldb;Database=aspnet_security;Trusted_Connection=True;MultipleActiveResultSets=true"
        }
      }
    }

Adding the ApplicationUser and ApplicationDbContext classes

Create a Models folder and a couple of files, ApplicationDbContext.cs and ApplicationUser.cs, as shown in the following screenshot:

The ApplicationUser class inherits from the IdentityUser class (available in the AspNet.Identity.EntityFramework6 namespace) as follows:

    public class ApplicationUser : IdentityUser
    {
        ..
    }

You can add properties to the user as per the needs of your application. I have not added any properties here, as I would like to keep things simple while showing the features of ASP.NET Identity.

The ApplicationDbContext class inherits from the IdentityDbContext class of ApplicationUser. In the constructor, we pass the connection string, which is eventually passed to the base class. The OnModelCreating method is also overridden; if you want to change any of the table names (to be generated by Identity), you can do so there:

    public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext(string nameOrConnectionString) :
            base(nameOrConnectionString)
        {
        }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            base.OnModelCreating(modelBuilder);
        }
    }
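Looking back at the ApplicationUser class for a moment: if your application does need extra profile data, that class is the natural place for it. A hypothetical extension (not part of this chapter's sample) could look like this:

    using System;

    public class ApplicationUser : IdentityUser
    {
        // Hypothetical extra profile fields; add whatever your application needs.
        public string FullName { get; set; }
        public DateTime? DateOfBirth { get; set; }
    }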

    Once we create the Models file, we need to configure the application and services. You can configure these in Configure and ConfigureServices, which are found in the Startup class.

Configuring the application to use Identity

In order to use Identity, we just need to add the following line in the Configure method of the Startup class:

    app.UseIdentity();

The complete Configure method is shown in the following code, along with the call to app.UseIdentity():

    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        if (env.IsDevelopment())
        {
            app.UseBrowserLink();
            app.UseDeveloperExceptionPage();
            app.UseDatabaseErrorPage();
        }
        else
        {
            app.UseExceptionHandler("/Home/Error");
        }

        app.UseIISPlatformHandler(options =>
            options.AuthenticationDescriptions.Clear());

        app.UseStaticFiles();

        app.UseIdentity();

        // To configure external authentication please see
        // http://go.microsoft.com/fwlink/?LinkID=532715

        app.UseMvc(routes =>
        {
            routes.MapRoute(
                name: "default",
                template: "{controller=Home}/{action=Index}/{id?}");
        });
    }

In the ConfigureServices method, we will make the following changes:

    We will add the ApplicationDbContext class, with the connection string taken from the appsettings.json file
    We will add Identity with UserStore and RoleStore
    Finally, we will ask ASP.NET Core to return AuthMessageSender whenever we ask for the IEmailSender and ISmsSender services

    public void ConfigureServices(IServiceCollection services)
    {
        // Add framework services.
        services.AddScoped(f =>
        {
            return new ApplicationDbContext(
                Configuration["Data:DefaultConnection:ConnectionString"]);
        });

        services.AddIdentity<ApplicationUser, IdentityRole>()
            .AddUserStore()
            .AddRoleStore()
            .AddDefaultTokenProviders();

        services.AddMvc();

        // Add application services.
        services.AddTransient<IEmailSender, AuthMessageSender>();
        services.AddTransient<ISmsSender, AuthMessageSender>();
    }


Creating ViewModels

Next, we will create several ViewModels that we will be using in our Views. To start with, we will create a RegisterViewModel class that contains three properties: Email, Password, and ConfirmPassword. We decorate the properties with the appropriate attributes so that we can use client-side validation with unobtrusive jQuery validation. We make all the fields required, as follows:

    public class RegisterViewModel
    {
        [Required]
        [EmailAddress]
        [Display(Name = "Email")]
        public string Email { get; set; }

        [Required]
        [StringLength(100, ErrorMessage = "The {0} must be at least {2} characters long.", MinimumLength = 6)]
        [DataType(DataType.Password)]
        [Display(Name = "Password")]
        public string Password { get; set; }

        [DataType(DataType.Password)]
        [Display(Name = "Confirm password")]
        [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
        public string ConfirmPassword { get; set; }
    }

Now, we can create the LoginViewModel, which the user will use to log in to the application. There is an additional property, RememberMe, which, when checked, will enable the user to log in without having to enter the password again:

    public class LoginViewModel
    {
        [Required]
        [EmailAddress]
        public string Email { get; set; }

        [Required]
        [DataType(DataType.Password)]
        public string Password { get; set; }

        [Display(Name = "Remember me?")]
        public bool RememberMe { get; set; }
    }


Creating Controllers and associated action methods

Now we need to create an AccountController class, where we will define the action methods for authentication and authorization:

    public class AccountController : Controller
    {
        private readonly UserManager<ApplicationUser> _userManager;
        private readonly SignInManager<ApplicationUser> _signInManager;
        private readonly IEmailSender _emailSender;
        private readonly ISmsSender _smsSender;
        private readonly ILogger _logger;

        public AccountController(
            UserManager<ApplicationUser> userManager,
            SignInManager<ApplicationUser> signInManager,
            IEmailSender emailSender,
            ISmsSender smsSender,
            ILoggerFactory loggerFactory)
        {
            _userManager = userManager;
            _signInManager = signInManager;
            _emailSender = emailSender;
            _smsSender = smsSender;
            _logger = loggerFactory.CreateLogger<AccountController>();
        }
    }

In the preceding code, we are using services provided by different components. UserManager and SignInManager are provided by ASP.NET Identity. IEmailSender and ISmsSender are custom classes that we have written, which will be used for sending e-mails and SMS messages; we will look more at e-mail and SMS later in this chapter. Logging is provided by the Microsoft Logging extensions.

The following is the simple Login HTTP GET action method. It simply stores the URL from which the Login method was accessed and returns the login page:

    [HttpGet]
    [AllowAnonymous]
    public IActionResult Login(string returnUrl = null)
    {
        ViewData["ReturnUrl"] = returnUrl;
        return View();
    }


Creating Views

Now, we will create the corresponding View page for the login. At the top of the View, we declare the using directives, the model, the injected SignInManager, and the page title (which is then rendered as the page heading):

    @using System.Collections.Generic
    @using Microsoft.AspNet.Http
    @using Microsoft.AspNet.Http.Authentication
    @using AspNet.Identity.EntityFramework6

    @model LoginViewModel
    @inject SignInManager<ApplicationUser> SignInManager

    @{
        ViewData["Title"] = "Log in";
    }

The body of the View renders a "Use a local account to log in." form containing input fields bound to the Email, Password, and RememberMe properties of the LoginViewModel, a Log in submit button, and the "Register as a new user?" and "Forgot your password?" links. At the bottom, the client-side validation scripts are pulled in:

    @section Scripts {
        @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
    }


When a user comes to the application for the first time, they will not have any login credentials, so our application should provide a feature that they can use to create a login for themselves. We will create a simple Register action method that just returns a View with which the user can register:

    [HttpGet]
    [AllowAnonymous]
    public IActionResult Register()
    {
        return View();
    }

We will also create the corresponding View, which contains input controls for the e-mail, password, and password confirmation, along with a Register button:

    @model RegisterViewModel

    @{
        ViewData["Title"] = "Register";
    }

The body of the View shows a "Create a new account." form with the Email, Password, and Confirm password fields bound to the RegisterViewModel, followed by the Register submit button and the same validation scripts section used in the login View:

    @section Scripts {
        @{ await Html.RenderPartialAsync("_ValidationScriptsPartial"); }
    }

The following is the corresponding POST action method for registration. Here, the program checks whether the model is valid and, if it is, creates an ApplicationUser object using the model data and calls the Identity API (the CreateAsync method). If the user can be created, they are signed in with that identity and redirected to the Home page:

    [HttpPost]
    [AllowAnonymous]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Register(RegisterViewModel model)
    {
        if (ModelState.IsValid)
        {
            var user = new ApplicationUser { UserName = model.Email, Email = model.Email };
            var result = await _userManager.CreateAsync(user, model.Password);
            if (result.Succeeded)
            {
                await _signInManager.SignInAsync(user, isPersistent: false);
                return RedirectToAction(nameof(HomeController.Index), "Home");
            }
            AddErrors(result);
        }
        return View(model);
    }
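The AddErrors helper called above is not shown in this excerpt; it typically just copies the Identity errors into the model state so they appear next to the form fields. A minimal sketch could be:

    private void AddErrors(IdentityResult result)
    {
        foreach (var error in result.Errors)
        {
            ModelState.AddModelError(string.Empty, error.Description);
        }
    }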

The log-out functionality is pretty simple: it just needs to call the SignOutAsync method of the Identity API and redirect to the Index page:

    [HttpPost]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> LogOff()
    {
        await _signInManager.SignOutAsync();
        _logger.LogInformation(4, "User logged out.");
        return RedirectToAction(nameof(HomeController.Index), "Home");
    }

Coming back to the log-in functionality, the following is the respective action method. We call the PasswordSignInAsync method of the Identity API and, upon a successful login, redirect to the URL from which the log-in page was reached:

    [HttpPost]
    [AllowAnonymous]
    [ValidateAntiForgeryToken]
    public async Task<IActionResult> Login(LoginViewModel model, string returnUrl = null)
    {
        ViewData["ReturnUrl"] = returnUrl;
        if (ModelState.IsValid)
        {
            var result = await _signInManager.PasswordSignInAsync(model.Email,
                model.Password, model.RememberMe, lockoutOnFailure: false);
            if (result.Succeeded)
            {
                return RedirectToLocal(returnUrl);
            }
        }

        // If there is any error, display the form again
        return View(model);
    }

E-mail and SMS services

If you want to add e-mail and SMS services to your application's authentication capabilities, you can do so by creating the interfaces and classes shown here:

    public interface IEmailSender
    {
        Task SendEmailAsync(string email, string subject, string message);
    }

    public interface ISmsSender
    {
        Task SendSmsAsync(string number, string message);
    }

    public class AuthMessageSender : IEmailSender, ISmsSender
    {
        public Task SendEmailAsync(string email, string subject, string message)
        {
            // We can plug in our email service here to send an email.
            return Task.FromResult(0);
        }

        public Task SendSmsAsync(string number, string message)
        {
            // We can plug in our SMS service here to send a text message.
            return Task.FromResult(0);
        }
    }
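As an illustration of how these services could be consumed (this call is not part of the Register method shown earlier in this chapter), the AccountController might send a welcome e-mail right after a successful registration; the subject and body below are placeholders:

    if (result.Succeeded)
    {
        // Hypothetical usage of the injected IEmailSender.
        await _emailSender.SendEmailAsync(model.Email, "Welcome",
            "Your account has been created successfully.");

        await _signInManager.SignInAsync(user, isPersistent: false);
        return RedirectToAction(nameof(HomeController.Index), "Home");
    }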


Securing an action method in a Controller

For the sake of explanation, let us assume that the About page is a secure page and only authenticated users should be able to access it. We just have to decorate the About action method in the Home controller with the [Authorize] attribute:

    [Authorize]
    public IActionResult About()
    {
        ViewData["Message"] = "This is my about page";
        return View();
    }
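The same attribute also supports role-based checks, which is how an admin-only page such as the configuration page mentioned earlier would typically be protected. The Admin role name and the Configuration action below are assumptions used purely for illustration:

    [Authorize(Roles = "Admin")]
    public IActionResult Configuration()
    {
        ViewData["Message"] = "Only administrators can see this page.";
        return View();
    }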

Decorating the About action with [Authorize], as shown above, will redirect the user to the log-in page when they try to access the About page without logging in to the application:


    In the following screenshot, you will notice an additional query parameter, ReturnURL, in the URL. This ReturnURL parameter will redirect the application to that specific page (the value passed in the ReturnURL parameter—Home/About in our case). Once you log in, you'll be redirected to the page that you requested earlier:

    When you register a new user, the details of the user will be stored in the relevant tables created by ASP.NET Identity. Open the SQL Server Object Explorer window by selecting the option View | SQL Server Object Explorer, as shown in the following screenshot:


Once you select the SQL Server Object Explorer option, you will see a window similar to the following screenshot. ASP.NET Identity creates a database for us by using Entity Framework and the connection string that we provided earlier in the appsettings.json file. ASP.NET Identity creates several tables to maintain identity-related information and the database migration history of Entity Framework. As we are using ASP.NET Identity at the basic level, none of the identity-related tables will get populated, apart from dbo.AspNetUsers:


    You can right-click on the dbo.AspNetUsers table and select View Data to see the data:

As only one user has been registered in our application, only one row has been created. Please note that the password is stored as a hash (handled by ASP.NET Identity for us); no plain-text passwords ever get stored in the table.

Summary

In this chapter, we learned about authentication and authorization. We also learned how to implement ASP.NET Identity in an ASP.NET Core application by following a step-by-step process. We also discussed the tables involved in ASP.NET Identity and learned how to see the data created by ASP.NET Identity.


    Module 2

ASP.NET Core and Angular 2

Create powerful applications for the modern web


    1

Getting Ready

ASP.NET Core MVC is a web framework, built on top of the core .NET framework, specifically made for building efficient web applications and APIs that can be reached by a massive range of clients, including web browsers, mobile devices, smart TVs, web-based home automation tools, and more.

Angular 2 is the second major installment of AngularJS, a world-renowned development framework born with the idea of giving the coder the toolbox needed to build reactive, cross-platform web-based apps, optimized for desktop and mobile: it features a structure-rich templating approach which is based upon a natural, easy-to-write, and readable syntax.

These two frameworks were put together with the same assumption in mind: the HTTP protocol is not limited to serving web pages; it can also be used as a viable platform to build web-based APIs to effectively send and receive data. This theory has slowly made its way through the first 20 years of the World Wide Web and is now an undeniable, widely acknowledged statement, and also a fundamental pillar of almost every modern web development approach.

As for the reasons behind this perspective switch, there are plenty of good ones, the most important of them being related to the intrinsic characteristics of the HTTP protocol. It is simple to use, flexible enough to match most development needs in the always-changing environment which the World Wide Web happens to be, and nowadays it is almost universal: almost any platform that you can think of has an HTTP library, so HTTP services can reach a broad range of clients, including browsers, mobile devices, and traditional desktop applications.


Two players, one goal

From the perspective of a fully-functional web-based application, we could say that the Web API interface provided with the ASP.NET Core framework is a programmatic set of server-side handlers used by the server to expose a number of hooks and/or endpoints to a defined request-response message system, typically expressed in structured formats such as JSON or XML. This "exposition" is provided using the HTTP protocol thanks to a publicly available web server (typically IIS).

Similarly, Angular can be described as a modern, feature-rich client-side library that gives the browser the ability to bind input and/or output parts of an HTML web page to a flexible, reusable, and easily testable JavaScript model.

These assumptions allow us to answer a simple, yet inevitable question: can we put together the server-side strengths of ASP.NET Core's Web API capabilities with the frontend capabilities of the Angular library in order to build a modern, feature-rich, and production-ready web application? The answer, in short, is yes. In the following chapters, we'll see how we can do that by analyzing all the fundamental aspects of a well-written, properly designed web-based product and how ASP.NET Core and/or Angular can be used to handle each one of them.

What's new in Angular 2?

The new major version of Angular is a complete rewrite of the previous one, entirely based upon TypeScript and the ECMAScript 6 specifications. The choice of not making it backward compatible with the previous installment clearly demonstrates the authors' intention to adopt a completely new approach: any developer who already knows AngularJS will undoubtedly face a huge number of breaking changes, not only in the code syntax but also in the way of thinking about and designing the client app. Angular 2 is highly modular, entirely component-based, features a new and improved dependency injection model, and has the main goal of being able to easily integrate with other server-side and client-side frameworks.

However, the most important reason why we're picking Angular 2 over other excellent JS libraries such as ReactJS and EmberJS is the fact that it arrives with a huge stack of features out of the box, making it way simpler to use than the aforementioned competitors. If we combine that with the consistency given by the TypeScript language, we could very well say that, despite being the youngster, Angular 2 has embraced the framework approach more convincingly than the others. That's a great reason to invest in it, hoping it will keep up with these compelling promises.


The ASP.NET Core revolution

Summarizing what has happened in the ASP.NET world within the last year is not an easy task; in short, we could say that we're undoubtedly facing the most important series of changes in the .NET Framework since the year it came to life. ASP.NET Core is a complete re-implementation of ASP.NET, which unites all the previous web application technologies such as MVC, Web API, and Web Pages into a single programming module, formerly known as MVC6. The new framework introduces a fully featured cross-platform component, also known as .NET Core, shipped with a brand new open source .NET Compiler Platform (currently known as Roslyn), a cross-platform runtime (known as CoreCLR), and an improved x64 Just-In-Time compiler (RyuJIT).

You might be wondering what happened to ASP.NET 5 and Web API 2, as these used to be quite popular names until mid-2016. ASP.NET 5 was the original name of ASP.NET Core, before the developers chose to rename it to emphasize the fact that it is a complete rewrite. The reasons for that, together with the Microsoft vision about the new product, are further explained in the following blog post from Scott Hanselman that anticipated the changes on January 16, 2016:
http://www.hanselman.com/blog/ASPNET5IsDeadIntroducingASPNETCore10AndNETCore10.aspx

For those who don't know, Scott Hanselman has been the outreach and community manager for .NET/ASP.NET/IIS/Azure and Visual Studio since 2007. Additional information regarding the perspective switch is also available in the following article by the senior Microsoft developer (and NuGet Program Manager) Jeffrey T. Fritz:
https://blogs.msdn.microsoft.com/webdev/2016/02/01/an-update-on-asp-net-core-and-net-core/

As for Web API 2, it was a dedicated framework for building HTTP services returning pure JSON or XML data instead of web pages. Initially born as an alternative to the MVC platform, it has been merged with the latter into the new, general-purpose web application framework known as MVC6, which is now shipped as a separate module of ASP.NET Core.


The field of choice – single-page application

In order to demonstrate how ASP.NET Core and Angular 2 can work together to their full extent, we couldn't think of anything better than building a single-page application project. The reason for that is quite obvious: although neither ASP.NET Core nor Angular came into being with the specific purpose of building a SPA, there is no better approach for showcasing some of the best features they have to offer nowadays: we're talking about the HTML5 pushState API, webhooks, data transport-based requests, dynamic web components, UI data bindings, and a stateless, AJAX-driven architecture capable of flawlessly encompassing all of these.

Common features of a competitive SPA

If you have never worked on building single-page applications, you need to know what you'll be facing. If you already have, feel free to skip to the following paragraph, unless you don't mind taking another look at the key features provided by any competitive SPA:

    No server-side round-trips: A competitive SPA is able to redraw any part of the client UI without requiring a full server-side round-trip to retrieve a full HTML page. This is mostly achieved by implementing a separation of concerns design principle, meaning that the data will be separated from the presentation of data by using a model layer that will handle the former, and a view layer that reads from the latter.

    Efficient routing: A competitive SPA is able to keep track of the user's current state and location during their whole navigation experience using organized, JavaScript-based routers. This is usually accomplished in one of two ways: the Hashbang technique and HTML5 History API usage. We'll talk about both of them in Chapter 2, ASP.NET Controllers and Server-side Routes.

    Performance and flexibility: A competitive SPA usually transfers all of its UI to the client thanks to its JavaScript SDK of choice (Angular, JQuery, Bootstrap, and so on). This is often good for network performance, as increasing client-side rendering and offline processing reduces the UI impact over the network. But the real deal brought about by this approach is the flexibility granted to the UI, as the developer will be able to completely rewrite the application's frontend with little to no impact on the server, aside from a few static resource files.


    The list could easily grow, as these are only some of the major advantages of a properly designed, competitive SPA. These aspects play a major role nowadays, as many business websites and services are switching from their traditional multi-page application (also known as MPA) mindset to fully committed or hybrid single-page application based approaches. The latter have become increasingly popular since 2015 and are commonly called Native Web Applications (NWAs) because they tend to implement a number of small-scale, single-page modules bound together upon a multi-page skeleton rather than building a single, monolithic SPA. Furthermore, there are also a lot of enterprise level single-page applications and NWAs flawlessly serving thousands of users everyday, examples include WhatsApp Web, Teleport Web, and Flickr, plus a wide amount of Google web services including Gmail, Contacts, Spreadsheet, Maps, and more. Most of these services are not there, thanks to the heat of the moment; they were intentionally built that way years ago, and they are meant to stay.

Product owner expectations

One of the most interesting, yet underrated, concepts brought out by many modern agile software development frameworks such as SCRUM is the importance given to the meanings and definitions of roles. Among these, there's nothing as important as the product owner, also known as the customer in the Extreme Programming methodology, or customer representative elsewhere. In case you don't know, he's the guy that brings the expectations you'll struggle to satisfy to the development table. He will tell you what's most important to deliver and when; he will prioritize your work based on its manifest business value rather than its underlying architectural value; he'll be entitled by the management to take decisions and make tough calls, sometimes great ones, sometimes not so great, that will often have a significant impact on your development schedule. In short, he's the one in charge of the project; that's why, in order to deliver a web application matching his expectations, you'll need to understand his vision and feel it as if it were yours. This is always true, even if your project's product owner is your dad, wife, or best friend; even if he happens to be you.


Now that we've made that clear, let's take a look at some of the most common product owner expectations for a typical web-based single-page application project; we ought to see whether the choice of using ASP.NET Core and Angular is good enough to fulfill each one of them:

    Early release(s): No matter whether you're selling a bunch of salad or web-based services, the customer will always want to see what he's buying. If you're using SCRUM, you'll have to release a potentially-shippable product at the end of each sprint; in a Waterfall-based approach you'll have Milestones, and so on. One thing is for sure: the best thing you can do in order to efficiently organize your development is to adopt an iterative and/or modular-oriented approach. ASP.NET Core and Angular, along with the strong separation of concerns granted by their underlying MVC or MVVM based patterns, will gracefully push you into the mindset needed to do just that.

    GUI over backend: You'll often be asked to work on the GUI and frontend functionalities because they will be the only real viewable and measurable thing for the customer. This basically means that you'll have to mock up the data model and start to work on the frontend as soon as possible, delaying everything that goes on under the hood, even if that means leaving it empty. Please notice that this kind of approach is not necessarily bad; by all means, we're not tying up the donkey where the (product) owner wants. On the other hand, the choice of using ASP.NET Core together with Angular will grant you the chance of easily decoupling the presentation layer and the data layer, implementing the first and mocking the latter, which is a great thing to do. You'll be able to see where you're going before wasting valuable time or being forced to make potentially wrong decisions. ASP.NET Core's Web API interface will provide the proper tools to do that by allowing you to create a sample web application skeleton in a matter of seconds using Visual Studio Controller templates and in-memory data contexts powered by the Entity Framework 6, which you will be able to access using entity models and code-first. As soon as you do that, you'll be able to switch to the GUI design using the Angular 2 presentation layer toolbox as often as you want until you reach the desired results. Once you are satisfied, you'll just need to properly implement the web API controller interfaces and hook up the actual data.

    Fast completion: None of this will work unless you also manage to get everything done in a reasonable time span. This is one of the key reasons for choosing to adopt a server-side framework and a client-side framework working together with ease. ASP.NET Core and Angular 2 are the tools of choice not only because they're both built on solid, consistent ground, but also because they're meant to do precisely that: get the job done on their respective side and provide a usable interface to the other partner.


    Adaptability: As stated by the Agile manifesto, being able to respond to change requests is more important than following a plan. This is especially true in software development, where we could even claim that anything that cannot handle change is a failed project. That's another great reason to embrace the separation of concerns enforced by our two frameworks of choice, as this grants the developer the ability to manage, and even welcome, to some extent, most of the layout or structural changes that will be expected during the development phase.

That's about it. Notice that we didn't cover everything here, as it would be impossible without undertaking an actual assignment. We just tried to give an extensive answer to the following general question: if we were to build a SPA, would ASP.NET Core and Angular 2 be an appropriate choice? The answer is undoubtedly yes, especially when used together.

Does that mean we're done already? Not a chance, as we have no intention of taking this assumption for granted. Conversely, it's time for us to demonstrate it by ceasing to speak in general terms and starting to put things in motion.

A sample single-page application project

What we need now is to conceive a suitable test-case scenario similar to the ones we will eventually have to deal with: a fully featured, production-ready single-page application project complete with all the core aspects you would expect from a potentially shippable product.

In order to do this, the first thing we need to do is to become our own customer for a minute and come up with an idea, a vision to share with our own other self. We'll then be able to put our developer shoes back on and split our abstract plan into a list of items we'll need to implement: these will be the core requirements of our own project. Finally, we'll set up our workstation by getting the required packages, adding the resource files, and configuring both the ASP.NET Core and Angular 2 frameworks into the Visual Studio 2015 IDE.


The vision

If we're going to demonstrate the key features of ASP.NET Core and Angular, we can't really settle for one of the many presentation-oriented websites such as demos, product galleries, corporate or marketing showcases, photo/video/media reels, and so on, as we need something that can show the asynchronous and parallel request processing capabilities of both frameworks to their full extent.

In order to fulfill these expectations, we would instead need something similar to a blog engine, a content-management system, and/or a groupware platform such as a community forum or a wiki. The latter seems to be the most appropriate, as it will mean going through a number of implementation challenges including account management, login/session handling, search-based navigation, full-text search queries, title and content indexing, multi-language support, markup syntax rendering, dynamic media handling, and a not-so-trivial data model on top of an ever-growing, potentially huge amount of data.

The application we are going to build won't be just a shallow demonstration; we won't throw some working code here and there and expect the reader to connect the dots. Our objective is to create a solid, realistic application using our frameworks of choice while following the current development best practices. Each chapter will be dedicated to a single core aspect, and if you feel like you already know your way there, feel free to skip to the next one. Conversely, if you're willing to follow us through the whole loop, you'll have a great journey through the most useful aspects of ASP.NET Core and Angular 2 and how they can work together to deliver the most common and useful web development tasks, from the most trivial ones to the more complex beasts. It's an investment that will pay dividends, as it will leave you with a maintainable, extensible, and well-structured project, plus the knowledge needed to build your own.

To avoid making things too boring, we'll pick an enjoyable, enticing theme from the entertainment industry: a collaborative, wiki-enabled database of open-source video games, where registered users will be able to add/modify entries. We'll call it OpenGameList, also known as opengamelist.com. Luckily enough, the domain was still available at the time of writing this book. If you go there now, you'll be able to see what we're going to build from scratch (don't do that if you don't like spoilers, as it could ruin some of your fun).


Core requirements

Our application will follow a rather classic approach: our users will be able to browse the latest entries and navigate through the wiki using internal wiki links or via simple or complex search queries. If they're authenticated, they'll also be allowed to switch to edit mode to modify an item or add a new one; otherwise, they'll be prompted to enter their credentials via a classic login or registration form. On top of all that, we'll also add an administration area where we can do our management stuff such as deleting entries, editing/disabling/deleting users, running maintenance tasks, and so on.

Let's break down these requirements into a list of development topics:

    Routing: The app will be able to properly respond to client requests, that is to say, route them according to what they're up to.

    Data model: We'll definitely adopt a database engine to store our entries and the proper tools to access it in a modern, fashionable way. In order to do so, we need to define our data architecture by setting up Data Repositories and Domain Entities that will be handled by the server and hooked to Angular 2 through ASP.NET Core's Web API controller interfaces.

    Web API controllers: From an MVC-based architectural perspective, one of the main differences between multi-page and single-page applications is that the former's controllers are designed to return views, while the latter's, also known as API controllers, return serialized data. These are what we will need to implement to put Angular 2 components in charge of the presentation layer.

    Angular components: Switching to the client side, we will need to define a set of components to handle UI elements and state changes. As you probably already know, components are the most fundamental elements in Angular 2, replacing Angular 1's controllers and scopes. We'll get to know more about them soon enough.

    Authentication: Soon enough we'll have the need to empower our application with a membership context; that way we'll be able to restrict CRUD operations to authenticated users only, keep track of each user's actions, prepare the administration layer, and so on.

These will be our main development challenges: if we don't know how to properly handle them, we won't be able to succeed. The following chapters will address each one of them; we'll also cover other important aspects such as SEO, security, and deployment, as they will be very important when you are ready to publish your application.


Preparing the workspace

The first thing we have to do is to set up our workstation: it won't be difficult because we only need a small set of essential tools. These include Visual Studio 2015, the web platform installer, .NET Framework 4.5 or higher, a web server such as IIS or IIS Express, and a decent source code control system such as Git, Mercurial, or Team Foundation, which we will take for granted as you most likely already have them in place. Just make sure you're all set with these before going further.

IMPORTANT
If you haven't already, be sure to download and install Visual Studio 2015 Update 3 and the .NET Core for Visual Studio Tools Preview 2. These are the latest updates at the time of writing, but will be updated in the near future. These updates will address some important issues for web development tools, as well as adding improved support for TypeScript, NPM, and other components that we'll be using later on:

Visual Studio 2015 Update 3: http://go.microsoft.com/fwlink/?LinkId=691129
.NET Core for Visual Studio Tools Preview 2.0.1: https://go.microsoft.com/fwlink/?LinkID=824849

Setting up the project

The first thing we need to do is to create a new ASP.NET Core web application project:

1. Fire up Visual Studio 2015 and, from the File menu, expand New and select Project to open a new project modal window.


2. From the Templates tree, expand the Visual C# node and select the Web subfolder: the right section of the modal window will be populated by a number of available project templates. Among these, there are two choices for creating an ASP.NET Core web application project: the first one, optimized for cross-platform deployment, entirely relies upon the new .NET Core Framework; the other one, ideal for a Windows environment, is based upon the latest .NET Framework version (4.6.1 at the time of writing).

3. The good thing here is that, thanks to the ASP.NET Core versatility, we are free to choose the approach we like the most, as both frameworks are mature enough to support almost everything we will use within this book. The only relevant downside of the .NET Core choice is the lack of compatibility with some NuGet packages that haven't been ported there yet: that's why, at least for our example, we'll be choosing to stick with the full-scale .NET Framework. In order to do that, select the ASP.NET Core Web Application (.NET Framework) template and fill in the relevant Name, Location, and Solution name fields. We'll name the solution OpenGameList, while the project will be called OpenGameListWebApp, as shown in the previous screenshot. Once done, click OK to continue.


    4. In the next modal window, we can further customize our template by choosing the default contents to include in our project (Empty, Web API, or Web Application) and the authentication mechanism, should we want to use one. Choose Web API and No authentication, then click the OK button to create the project.

    5. If you're used to the Visual Studio Web Application Project templates from previous ASP.NET versions you may be tempted to choose Empty instead, thus avoiding the insane amount of sample classes, folders, and components, including a number of potentially outdated versions of various client-side frameworks such as Bootstrap, KnockoutJS, JQuery, and more. Luckily, these new ASP.NET Core project templates are extremely lightweight - the one we did choose comes with nothing more than a /Controller/ folder, a ValuesController.cs sample, and some useful references in the Startup.cs file.
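For reference, the ValuesController.cs sample mentioned above is the standard scaffold that the Web API template drops into the Controllers folder; a trimmed-down sketch of it (reconstructed from the default template, not from the book's sources) looks roughly like this:

    [Route("api/[controller]")]
    public class ValuesController : Controller
    {
        // GET api/values
        [HttpGet]
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET api/values/5
        [HttpGet("{id}")]
        public string Get(int id)
        {
            return "value";
        }
    }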


Package and resource managers

Now that our project has been created, it's time to add the resources we're going to use. As we already mentioned, the layout of a standard ASP.NET Core solution is quite different from what it used to be. The main differences are:

    The solution's projects are now created in a /src/ folder by default. This is just a convention, though, as they can be placed anywhere.

    There is a brand-new wwwroot folder, which will contain the compiled, ready-to-publish contents of our application, while everything else will be the project source code.

Other things worth noting are a Controllers folder, containing the sample ValuesController.cs class, a Startup.cs file containing the application class, and a couple of other files we'll address in a while.


The first thing we need to do is to add a folder called Scripts to the root of our project. We'll use this folder to place all of our JavaScript files, and then we'll combine and minify these using a JavaScript task runner such as Gulp; this way, we'll make sure that the resulting file is added to the wwwroot folder automatically after each build.

Installing the packages

Now it's time to make good use of the three package managers natively supported by ASP.NET, namely NuGet, NPM, and Bower. These tools will allow you to gather all the resources you need to build your application: they will download all the resources and their dependencies automatically, so you needn't do it manually, thus saving a lot of time. In case you're wondering why we need three of them, it can be useful to briefly recap their scope:

    NuGet: This will take care of all the .NET native and third-party packages such as Entity Framework, ASP.NET MVC, and so on. The full package list is stored in the project.json file so they can be retrieved and/or checked for updates at any time.

    NPM: This was the default package manager for the JavaScript runtime environment known as Node.js. Over the last few years, though, it has also been used to host a number of projects, libraries, and frameworks of any kind, including Angular 2. The package list is stored in the package.json file.

    Bower: Another package management system for client-side programming, created by Twitter and maintained on GitHub, specifically designed for frontend development such as jQuery, Bootstrap, and AngularJS. It depends on Node.js and NPM and works under git. Its configuration file is called bower.json.

Notice that, since the Angular 2 team is pushing their code using NPM rather than Bower, we won't be using it in our project.

NuGet and ASP.NET

ASP.NET Core gives us at least four different ways to add NuGet packages to our project:

    Using the Visual Studio powered GUI, accessible by right-clicking the project and choosing Manage NuGet Packages.

    Using the Package Manager Console, with the well-renowned Install-Package command followed by the package name and build version.


    Using the on-screen helper tools provided by IntelliSense, limited to the native .NET modules/libraries.

    Directly adding the package reference to the project's package configuration file, also known as project.json.

The first three methods, although absolutely viable, are basically shortcuts for populating the fourth one; the latter has the advantage of being the least opaque one, so we'll just use it.

Project.json

Open the project.json file, find the dependencies section, and add the following packages to the list (new lines are highlighted):

    "dependencies": {
      "Microsoft.AspNetCore.Mvc": "1.0.0",
      "Microsoft.AspNetCore.Server.IISIntegration": "1.0.0",
      "Microsoft.AspNetCore.Server.Kestrel": "1.0.0",
      "Microsoft.Extensions.Configuration.EnvironmentVariables": "1.0.0",
      "Microsoft.Extensions.Configuration.FileExtensions": "1.0.0",
      "Microsoft.Extensions.Configuration.Json": "1.0.0",
      "Microsoft.Extensions.Logging": "1.0.0",
      "Microsoft.Extensions.Logging.Console": "1.0.0",
      "Microsoft.Extensions.Logging.Debug": "1.0.0",
      "Microsoft.Extensions.Options.ConfigurationExtensions": "1.0.0",
      "Microsoft.AspNetCore.Diagnostics": "1.0.0",
      "Microsoft.AspNetCore.Routing": "1.0.0",
      "Microsoft.AspNetCore.Authentication.JwtBearer": "1.0.0",
      "Microsoft.AspNetCore.StaticFiles": "1.0.0",
      "Microsoft.VisualStudio.Web.BrowserLink.Loader": "14.0.0"
    }

We have added a space to visually separate the default dependencies required by all ASP.NET Core projects from our newly added ones.


The listed builds are the latest at the time of writing, but they won't last forever: as soon as ASP.NET Core passes its initial release (1.0.0 at the time of writing), these numbers will gradually increase over time whenever a new version comes out. To check the latest version of each package, just place the cursor between the quotes and delete the version number: a dynamic drop-down list will be shown containing all the latest versions of that given module.

While we're here, it can be useful to check which version of the .NET Framework we are targeting by looking at the frameworks key. Since we chose the .NET Framework template, we should find something like this:

    "frameworks": {
      "net461": { }
    },

    This will most likely change in the future, so be sure to target a version compatible with the packages you want to use. For the purpose of this book, the .NET Framework 4.6.1 will do the job. As soon as we save the project.json file, Visual Studio will start to retrieve the missing packages from the web. Wait for it to finish, then proceed with opening the Startup.cs file, which is also in the project root.

Startup.cs

If you're a seasoned .NET developer, you might already be familiar with the Startup.cs file, introduced in OWIN-based applications to replace most of the tasks previously handled by the good old Global.asax file. In ASP.NET Core, the Startup.cs file serves the same purpose as the OWIN startup class. Being nothing less than the application's main entry point, it is the place where we can add services, choose which application modules and middleware functions to load, handle dependency injection tasks, and configure the pipeline. However, the similarities end here: the class has been completely rewritten to be as pluggable and lightweight as possible, meaning that it will include and load only what's strictly necessary to fulfill our application's tasks.


To better understand this, let's take a look at the following lines taken from the Startup.cs source code shipped with the ASP.NET Core Web API project template we chose:

    // This method gets called by the runtime.
    // Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        app.UseMvc();
    }

Notice how empty our application's HTTP request pipeline is: it won't ever serve static files, since there is nothing telling it to do so. To better understand this, let's perform a quick test before proceeding.

Testing the HTTP request pipeline

In order to check that the ASP.NET pipeline is properly working, click on the Start Debugging option in the Debug menu, or just hit the F5 hotkey. If everything is OK, your default web browser will open, pointing to the following URL: http://localhost:14600/api/values. The page content will show the following:


If we're seeing the preceding screenshot, it means that the request pipeline is working fine: the MVC module has been added within the Startup.cs file and it's working as expected. That's because there's a sample ValuesController.cs class in the /Controllers/ folder, conveniently added by the Web API project template we chose a while ago, that behaves exactly like this.

Now let's try to request the static Project_Readme.html file, also added by our chosen template in the project root. In order to reach it, we need to move it inside the /wwwroot/ folder. Once done, it should be reachable by pointing at the following URL: http://localhost:14600/Project_Readme.html. However, if we try to do that, issuing that request with the same browser we used before, we would get the following response:


This HTTP 404 error clearly demonstrates what we've just said: the HTTP request pipeline won't serve static files, simply because we didn't tell it to. However, we can easily fix that behavior by adding them to the pipeline within the Startup.cs file (new lines highlighted):

    // This method gets called by the runtime.
    // Use this method to configure the HTTP request pipeline.
    public void Configure(IApplicationBuilder app, IHostingEnvironment env,
        ILoggerFactory loggerFactory)
    {
        loggerFactory.AddConsole(Configuration.GetSection("Logging"));
        loggerFactory.AddDebug();

        // Configure a rewrite rule to auto-lookup for standard default files such as index.html.
        app.UseDefaultFiles();

        // Serve static files (html, css, js, images & more). See also the following URL:
        // https://docs.asp.net/en/latest/fundamentals/static-files.html for further reference.
        app.UseStaticFiles();

        // Add MVC to the pipeline
        app.UseMvc();
    }

While we are here, we also added the following:

    A rewrite rule to enable support for the most common default files (such as index.htm, index.html, and so on), which will be automatically served without the user having to fully qualify the URI.

    A series of comments to better clarify the meaning of each module, including a reference link to the official ASP.NET Core documentation.


    If we run our application again, we should now be welcomed with the following:

    That's it. We have enabled static file support, so that we'll be able to serve not only HTML but also CSS, JS files, and so on. Delete the Project_Readme.html and get ready to install an important third-party NuGet package that we'll be using a lot in the following chapters.

Newtonsoft.Json

If you're working with ASP.NET and you've never heard about Newtonsoft.Json (formerly Json.NET), you've almost certainly missed something that could've eased your job, big time. We're talking about one of the finest libraries, and most useful tools, ever developed for .NET, at least in the writer's opinion. It's a very efficient (and thus very popular), high-performance JSON serializer, deserializer, and all-around framework for .NET, which also happens to be completely open source.


    We won't dig into it anymore here, as we'll be using it soon enough. For now, let's just install it by right-clicking on our solution's node in the Solution Explorer, then choosing Manage NuGet Packages for this Solution... to open up the following panel:

    Search for Newtonsoft.Json to make it appear if it isn't there already, then left-click to select it. Ensure to select the latest stable version (9.0.1 at the time of writing), click on Install, and then wait for the installer to complete its job. That's it for now: if we open the project.json file now, we can see that the Newtonsoft.Json package reference has been added at the end of the dependencies section, together with the other project-specific dependencies we've manually added before. We'll be installing other packages using the GUI in the following chapters, as soon as we need them, now that we know how easy it is to do that.

JavaScript, TypeScript, or Dart?

Now it's time to choose the client programming language to adopt. Given the fact that we're planning to use Angular 2, our choices are basically the following three: good old JavaScript, its Microsoft superset known as TypeScript, or Google's growing beast known as Dart.


In this project, we're going to use TypeScript for a number of good reasons, the most important of which are as follows:

    TypeScript has a number of features that JavaScript doesn't, such as static typing, classes, and interfaces. Using it in Visual Studio also gives us the chance to benefit from the built-in IntelliSense, which, together with its distinctive features, will allow us to spot most programming errors as we type the code, potentially saving a great amount of time.

    For a large client-side project, TypeScript will allow us to produce more robust code, which will also be fully deployable anywhere a plain JavaScript file would run. As a matter of fact, since TypeScript is a superset of JavaScript, it can be used alongside any JavaScript code without problems.

    Dart is a wonderful newcomer, and it will probably surpass its ECMAScript rivals soon. Currently, though, it is still quite immature in terms of available third-party libraries, documentation, development support, and overall community knowledge.

We're not the only ones praising TypeScript: it's something acknowledged by the Angular team itself, considering the fact that the Angular 2 source code has been written using TypeScript, as proudly announced by Microsoft in the following MSDN blog post: https://blogs.msdn.microsoft.com/typescript/2015/03/05/angular-2-built-on-typescript/.

Adding the tsconfig.json file

The first thing we have to do to set up TypeScript is to add a tsconfig.json file to our project root. In the Solution Explorer, right-click on the root project node and select Add, then New Item; switch to Client-side in the top-left tree view, then select TypeScript Configuration File, and add the tsconfig.json file to the project root.


The tsconfig.json file will be generated with its default set of options, which are good for most scenarios but not ideal for what we're going to do. That's why we need to add/change some settings until it looks like the following:

{
  "compileOnSave": false,
  "compilerOptions": {
    "emitDecoratorMetadata": true,
    "experimentalDecorators": true,
    "module": "system",
    "moduleResolution": "node",
    "noImplicitAny": false,
    "noEmitOnError": false,
    "removeComments": false,
    "target": "es5"
  },
  "exclude": [
    "node_modules",
    "wwwroot"
  ]
}
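To get a feel for what two of these compiler options actually change, here is a quick, purely illustrative TypeScript fragment; it is not part of the project code, and the function names are made up for the example:

// With "noImplicitAny": false, the untyped parameter below is silently
// treated as 'any'; setting the option to true would turn this into a compile error.
function greet(name) {
    return "Hello, " + name;
}

// With "target": "es5", ES6-only syntax such as this arrow function is
// transpiled down to a plain function expression in the emitted JavaScript.
const double = (x: number): number => x * 2;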


These instructions will influence how IntelliSense and our external TypeScript compiler will work; two things that will help us, big time. Wait a minute, did we just say external TypeScript compiler? What about the built-in, integrated TypeScript compile feature provided by the Visual Studio IDE? Well, we just said that one of TypeScript's biggest advantages is that we can use it anywhere a plain JavaScript file would run; that's because any .ts file can be compiled into a .js file without any problem. The compile task is handled by the TypeScript compiler itself, also known as tsc, which can be run directly from the Visual Studio IDE with the help of the tsconfig.json file we added in the preceding paragraph. That file, as we've just seen, contains a wide set of compiling options such as creating source maps, creating definition files, concatenating everything into a single output file, and so on. Looking great, isn't it? Unfortunately, not so much. Although Visual Studio 2015 does a pretty decent job of acting as a bridge between our TypeScript sources and tsc, it still lacks an important key feature: it doesn't give us the option to minify/uglify the resulting JS files, thus leaving them uncompressed after the compile task. Such a feature can seem trivial during development, yet it happens to be quite important in production, because we'll definitely want to cloak our source code before publishing it. That's why the best thing we can do is to stop the Visual Studio IDE from compiling TypeScript files and implement a custom alternative that does support minify/uglify. In case you're wondering what the terms minify and/or uglify actually mean, we strongly suggest that you read the following Wikipedia page: https://en.wikipedia.org/wiki/Minification_(programming). Luckily, we're going to use something we would need anyway: a dedicated, streamlined, and modern task runner that goes by the name of Gulp. In order to install it, though, we need to set up the appropriate package manager.
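To make the minify point more concrete, here is a tiny, purely illustrative example; the function is made up, and the minified output shown in the comment is only an approximation of what a tool such as gulp-uglify emits:

// Readable TypeScript source, compiled by tsc into equally readable JavaScript:
function addNumbers(first: number, second: number): number {
    return first + second;
}

// After minification (with name mangling disabled, as we'll configure later),
// whitespace and comments are stripped while the names are preserved, roughly:
// function addNumbers(first,second){return first+second}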


Introducing NPM

NPM is the tool we will use to add some important packages to our project, the most relevant ones being Gulp and Angular 2. To install it, do the following:

1. Go to the Solution Explorer.
2. Right-click on the root project node.
3. Select Add, then New Item.
4. Switch to Client-side from the top left tree view, then select NPM Configuration File, and add the package.json file to the project root.


Change the contents of the automatically-generated package.json file to match the following:

{
  "version": "1.0.0",
  "name": "opengamelistwebapp",
  "private": true,
  "dependencies": {
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "gulp-clean": "^0.3.2",
    "gulp-concat": "^2.6.0",
    "gulp-sourcemaps": "^1.6.0",
    "gulp-typescript": "^2.13.6",
    "gulp-uglify": "^2.0.0",
    "typescript": "^1.8.10"
  }
}

Please notice that, at the time of writing, Gulp 3.9.1 is the latest version. You can check the most recent build through the Visual Studio IntelliSense, which is able to perform real-time update checks on each package's latest build, just like it does within the project.json file.


As an alternative, you can always visit the Gulp project NPM page at https://www.npmjs.com/package/gulp and change the build version numbers accordingly. Notice that you aren't forced to input precise build numbers, as you can also use the standard npmjs syntax to specify auto-update rules bound to custom version ranges using the supported prefixes, such as the following:

• The tilde (~): ~1.1.4 will match all 1.1.x patch releases from 1.1.4 upwards, excluding 1.2.0, 1.0.x, and so on.
• The caret (^): ^1.1.4 will match everything from 1.1.4 upwards, excluding 2.0.0 and above.

This is another scenario where IntelliSense will come in handy, as it will also suggest how to do it. For an extensive list of available npmjs commands and prefixes, you can also check out the official npmjs documentation at https://docs.npmjs.com/files/package.json.
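If we ever want to double-check how a given range behaves, we can verify it programmatically with the semver NPM package; this is only an illustration of the matching rules, and semver is not part of our project:

// Illustrative only - requires `npm install semver` in a scratch project.
import * as semver from "semver";

console.log(semver.satisfies("1.1.9", "~1.1.4"));   // true  (patch update)
console.log(semver.satisfies("1.2.0", "~1.1.4"));   // false (minor update)
console.log(semver.satisfies("1.9.3", "^1.1.4"));   // true  (still below 2.0.0)
console.log(semver.satisfies("2.0.0", "^1.1.4"));   // false (major update)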

Working with Gulp

As you most likely already know, Gulp is a powerful task runner toolkit that we will use to automate some time-consuming tasks in our development workflow. If you have never worked with a task runner, think of it as a batch file or, even better, a PowerShell script powered with some useful web development tools such as file concatenation, js/css shrinking, and more. You will notice that, together with Gulp, we're also installing some Gulp-related plugins:

• gulp-clean: This deletes the contents of the destination folders
• gulp-concat: This merges multiple files into one
• gulp-sourcemaps: This generates the Source Maps
• gulp-typescript: This compiles TypeScript files into JS
• gulp-uglify: This minifies JavaScript files

Configuring Gulp is as simple as adding the Gulp Configuration file to your project. You can do that just as you did with the NPM Configuration file: right-click on the root project node in the Solution Explorer and select Add, then New Item. From the Client-side tree panel, locate and select the Gulp Configuration file and add a gulpfile.js file to the project root.


The Gulp configuration file, generally known as the Gulpfile, is basically a list of tasks and commands that Gulp will execute, written using a mostly readable JavaScript syntax. Working with it can be tricky at first, especially if you want to use it to handle complex tasks. Digging too much into it would take us out of the scope of this book, so we'll just see how we can tell it to do a rather simple set of jobs: take the TypeScript files in the /Scripts/app/ folder, compile and minify them into the wwwroot folder, and copy the third-party scripts/libraries kept in a dedicated directory there as well. This can be obtained with the following code:

var gulp = require('gulp'),
    gp_clean = require('gulp-clean'),
    gp_concat = require('gulp-concat'),
    gp_sourcemaps = require('gulp-sourcemaps'),
    gp_typescript = require('gulp-typescript'),
    gp_uglify = require('gulp-uglify');

/// Define paths
var srcPaths = {
    app: ['Scripts/app/main.ts', 'Scripts/app/**/*.ts'],
    js: ['Scripts/js/**/*.js']
};

var destPaths = {
    app: 'wwwroot/app/',
    js: 'wwwroot/js/'
};

// Compile, minify, and create sourcemaps for all TypeScript files and place
// them in wwwroot/app, together with their js.map files.
gulp.task('app', function () {
    return gulp.src(srcPaths.app)
        .pipe(gp_sourcemaps.init())
        .pipe(gp_typescript(require('./tsconfig.json').compilerOptions))
        .pipe(gp_uglify({ mangle: false }))
        .pipe(gp_sourcemaps.write('/'))
        .pipe(gulp.dest(destPaths.app));
});

// Delete wwwroot/app contents
gulp.task('app_clean', function () {
    return gulp.src(destPaths.app + "*", { read: false })
        .pipe(gp_clean({ force: true }));
});

// Copy all JS files from external libraries to wwwroot/js
gulp.task('js', function () {
    return gulp.src(srcPaths.js)
        // .pipe(gp_uglify({ mangle: false })) // disable uglify
        // .pipe(gp_concat('all-js.min.js')) // disable concat
        .pipe(gulp.dest(destPaths.js));
});

// Delete wwwroot/js contents
gulp.task('js_clean', function () {
    return gulp.src(destPaths.js + "*", { read: false })
        .pipe(gp_clean({ force: true }));
});

// Watch specified files and define what to do upon file changes
gulp.task('watch', function () {
    gulp.watch([srcPaths.app, srcPaths.js], ['app', 'js']);
});

// Global cleanup task
gulp.task('cleanup', ['app_clean', 'js_clean']);

// Define the default task so it will launch all other tasks
gulp.task('default', ['app', 'js', 'watch']);

Here's a brief explanation of what these commands actually do:

• The opening require block sets up Gulp and the required plugins. Notice that every one of them is assigned to a variable, as that's the way Gulp works.
• The srcPaths and destPaths variables define the file paths we will use. Declaring them once here makes the Gulpfile easier to maintain; splitting the source and the destination paths into two distinct objects also helps us keep the development environment mentally separated from the production one.
• The app task is the most important and complex one. It makes use of three different plugins: sourcemaps, typescript, and uglify. Notice how each piped call is bound to a specific plugin action that takes care of a single job, following the "it should do one thing" paradigm Gulp is all about: the first line defines the source files, the next one initializes the sourcemaps plugin, the following one instructs the typescript plugin to fetch the compiler options from the tsconfig.json file, and so on.
• The app_clean task erases every file and subfolder within the wwwroot/app target path.
• The js task publishes the external JavaScript libraries manually added to the project. This is a rather simple task, as it performs a mere copy of the .js files from a source folder into its production counterpart. Notice how we could easily choose to also minify and/or concatenate the source files into a single one by uncommenting a few lines of code. However, since this task is meant to handle third-party JavaScript libraries, it would be wise not to manipulate their code.
• The js_clean task erases everything within the wwwroot/js target path.
• The watch task is basically a file watcher/monitoring task that will execute the app and js tasks whenever one or more of their source files change.
• The cleanup task erases the content of all destination paths by launching the app_clean and js_clean tasks. It can be useful to reset these folders to their initial state.
• Last but not least, the default task executes all the other tasks, including watch, within a single call. This is the one we will launch.

It's worth noting that the uglify plugin requires the mangle option to be set to false: this is a workaround for a bug in Angular 2 RC5 that will hopefully be fixed in future releases. For more info regarding this topic, check out the following URL from the official Angular 2 GitHub repository: https://github.com/angular/angular/issues/10618.

To execute the Gulp file, right-click on it from the Solution Explorer and choose Task Runner Explorer, or open it manually by selecting it from File, Other Windows in the Visual Studio 2015 main menu bar. Once opened, click the refresh button, then right-click the default task and select Run to activate it.


Once you do that, the watch task we just created will silently run in the background, keeping track of our changes and acting accordingly. Notice the default (running) label within the tab, which will remind us that there is still an ongoing task: closing that console window will immediately shut it down. That's everything we'll ever need from Gulp for this project; there's no need for us to go further. If you want to learn more about Gulp and its configuration file syntax, you will find a lot of resources and documentation, together with a lot of useful samples, on the official website at the following address: http://gulpjs.com/.

Dependent tasks

Since we're going to make a lot of modifications to the files contained in the /Scripts/app/ folder within the following chapters, it would be great if the app_clean task could run automatically before the app task: such behavior would ensure that we'll always get rid of outdated and orphaned files without having to perform the cleanup manually. Luckily enough, Gulp allows us to easily do that by passing an optional array of dependent tasks that will be launched before the main one.


Let's go back to the line where we defined the app task and update it as follows (the added dependency is the ['app_clean'] array):

gulp.task('app', ['app_clean'], function () {

That's it. From now on, the app task will launch the app_clean dependent task and wait for its completion before executing itself, meaning that the /wwwroot/app/ folder contents will be erased before the new files arrive; this is precisely what we wanted. It's worth noting that the synchronous behavior of the app_clean task is guaranteed by its return value: whenever a dependent task returns its stream, the main task will wait for its completion before running.
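Here's a minimal sketch of why that return value matters; this only illustrates the Gulp 3.x dependency mechanism and is not additional project code:

var gulp = require('gulp'),
    gp_clean = require('gulp-clean');

// If 'app_clean' did NOT return its stream, Gulp would consider it complete
// immediately and could start 'app' before the cleanup actually finished.
gulp.task('app_clean', function () {
    return gulp.src('wwwroot/app/*', { read: false })   // returning the stream...
        .pipe(gp_clean({ force: true }));                // ...lets Gulp await it
});

// 'app' declares 'app_clean' as a dependency, so it only starts afterwards.
gulp.task('app', ['app_clean'], function () {
    // compile, minify, and publish the TypeScript files here
});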

Using Grunt instead of Gulp

Before Gulp was widely welcomed by the web application development community, Grunt used to be the king of the hill. Does that mean the former killed the latter? Well, most certainly not. Grunt is still an excellent tool and can be a great alternative to Gulp, especially if you already know how to use it. Unless you do, though, we suggest starting with Gulp, because we think that it has a fair edge on code flexibility and also a more streamlined approach. However, if you don't feel like using it, sticking to Grunt and its renowned plugins is just as good; you won't ever get fired for such a choice. Using Grunt instead of Gulp is as easy as doing the following:

1. In NPM's package.json file, replace the Gulp references - together with the relevant plugins - with the Grunt packages.
2. In the Solution Explorer, add a gruntfile.js file instead of a gulpfile.js.
3. In the gruntfile.js, rewrite the same tasks defined previously using the Grunt syntax.

This is a good example of a Grunt-powered package.json:

{
  "version": "1.0.0",
  "name": "opengamelistwebapp",
  "private": true,
  "dependencies": {
  },
  "devDependencies": {
    "grunt": "^0.4.5",
    "grunt-contrib-clean": "^1.0.0",
    "grunt-contrib-copy": "^1.0.0",
    "grunt-contrib-uglify": "^1.0.0",
    "grunt-contrib-watch": "^0.6.1",
    "grunt-ts": "^5.3.2"
  }
}

And this is how the gruntfile.js would look after porting all the Gulp tasks defined previously to the Grunt syntax:

module.exports = function (grunt) {
    grunt.loadNpmTasks('grunt-contrib-clean');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');
    grunt.loadNpmTasks('grunt-ts');

    grunt.initConfig({
        // Delete the destination folders' contents
        clean: [
            'wwwroot/app/*',
            'wwwroot/js/*'
        ],
        // Compile the TypeScript files using the options within tsconfig.json
        ts: {
            base: {
                src: ['Scripts/app/main.ts', 'Scripts/app/**/*.ts'],
                outDir: 'wwwroot/app',
                tsconfig: './tsconfig.json'
            }
        },
        // Minify the compiled JS files and generate their source maps
        uglify: {
            my_target: {
                files: [{
                    expand: true,
                    cwd: 'wwwroot/app',
                    src: ['**/*.js'],
                    dest: 'wwwroot/app'
                }]
            },
            options: {
                sourceMap: true
            }
        },
        // Copy all JS files from external libraries and required NPM packages to wwwroot/js
        copy: {
            main: {
                files: [{
                    expand: true,
                    flatten: true,
                    src: ['Scripts/js/**/*.js'],
                    dest: 'wwwroot/js/',
                    filter: 'isFile'
                }]
            }
        },
        // Watch specified files and define what to do upon file changes
        watch: {
            scripts: {
                files: ['Scripts/**/*.ts', 'Scripts/**/*.js'],
                tasks: ['clean', 'ts', 'uglify', 'copy']
            }
        }
    });

    // Global cleanup task
    grunt.registerTask('cleanup', ['clean']);

    // Define the default task so it will launch all other tasks
    grunt.registerTask('default', ['clean', 'ts', 'uglify', 'copy', 'watch']);
};

As we can see, the syntax is very different, but the underlying logic and features are quite similar: we still have a bunch of dedicated tasks to handle TypeScript files, minify actions, copy operations, and content-change monitoring, and then a default one to wrap everything up. Notice how, just like we did with Gulp, the TypeScript compiler options are fetched from the tsconfig.json file, so we don't have to write them a second time. To learn more about Grunt and its configuration file syntax, you can visit the official website at the following address: http://gruntjs.com/.


    That's basically all we need to know about task runners. Too bad we don't yet have any TS and/or JS files to properly test what we just did! Don't worry, though, we'll get there soon enough. Before that, let's finish our NPM journey by bringing the last (but not least) of our players into the loop.

Adding Angular 2

There are basically two ways to get Angular 2, both with their pros and cons: using NPM, which implies fetching the latest build and hosting the code within our project, or pointing to a suitable CDN with a dedicated set of links.

Using NPM

To install Angular 2 using NPM, we need to open the package.json file and add a bunch of packages under the dependencies node. This is how the file should look at the end (the Angular packages are the entries starting with @angular):

{
  "version": "1.0.0",
  "name": "OpenGameListWebApp",
  "private": true,
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/upgrade": "2.0.0-rc.5",
    "core-js": "^2.4.1",
    "reflect-metadata": "^0.1.3",
    "rxjs": "5.0.0-beta.6",
    "systemjs": "^0.19.37",
    "zone.js": "^0.6.12"
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "gulp-clean": "^0.3.2",
    "gulp-concat": "^2.6.0",
    "gulp-sourcemaps": "^1.6.0",
    "gulp-typescript": "^2.13.6",
    "gulp-uglify": "^2.0.0",
    "typescript": "^1.8.10"
  }
}


The packages starting with the @ symbol are part of the Angular 2 bundle, which, at the time of writing, has reached its Release Candidate 5 (RC5) development stage. The other ones are a set of ES6 shims (core-js), polyfills (reflect-metadata), loading libraries (systemjs), dependencies (rxjs), and helper tools (zone.js). All of these packages are required for a number of good reasons:

• They ensure backward compatibility between the new ECMAScript 6 (ES6) language features - used by most Angular 2 packages - and ECMAScript 5 (ES5), which will be our compilation target.
• They make our code compatible with the most common/used web browsers, including older ones.
• They adopt a modern, module-based loading API to handle the required JS dependencies in an efficient way.
• They let us use Angular 2 features to their full extent.

All this information can be quite confusing, especially for old-school JavaScript developers, but don't worry, we're going to come back to these topics later on. As usual, all of these packages will be downloaded in the background by the IDE as soon as you Save the file. Eventually, you will have a local instance of Angular 2 available under the following folder: /node_modules/@angular/
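To see why a shim such as core-js matters when targeting ES5, consider this small, purely illustrative TypeScript fragment (not part of the project code): the syntax is handled by the compiler, but the runtime APIs it relies on must exist in the browser, which is exactly what the shim provides on older ones.

// ES6 *syntax* (const, arrow functions) is transpiled away by TypeScript,
// but ES6 *runtime APIs* such as Set, Array.from, and Promise are not:
// without a shim like core-js, older browsers would throw at runtime.
const uniqueNumbers = Array.from(new Set([1, 2, 2, 3]));       // [1, 2, 3]
const result: Promise<number[]> = Promise.resolve(uniqueNumbers);
result.then(values => console.log(values.length));             // 3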

It's worth noting that we will also add a new script node later on, executing a postinstall command against the typings package (more on that in a short while). Now we need to move these files to the wwwroot folder. We can achieve this by updating our Gulp configuration file as follows (the new source/destination paths and the js task changes are the relevant additions):

/*
This file is the main entry point for defining Gulp tasks and using Gulp plugins.
Click here to learn more. http://go.microsoft.com/fwlink/?LinkId=518007
*/

var gulp = require('gulp'),
    gp_clean = require('gulp-clean'),
    gp_concat = require('gulp-concat'),
    gp_sourcemaps = require('gulp-sourcemaps'),
    gp_typescript = require('gulp-typescript'),
    gp_uglify = require('gulp-uglify');

/// Define paths
var srcPaths = {
    app: ['Scripts/app/main.ts', 'Scripts/app/**/*.ts'],
    js: [
        'Scripts/js/**/*.js',
        'node_modules/core-js/client/shim.min.js',
        'node_modules/zone.js/dist/zone.js',
        'node_modules/reflect-metadata/Reflect.js',
        'node_modules/systemjs/dist/system.src.js',
        'node_modules/typescript/lib/typescript.js'
    ],
    js_angular: [
        'node_modules/@angular/**'
    ],
    js_rxjs: [
        'node_modules/rxjs/**'
    ]
};

var destPaths = {
    app: 'wwwroot/app/',
    js: 'wwwroot/js/',
    js_angular: 'wwwroot/js/@angular/',
    js_rxjs: 'wwwroot/js/rxjs/'
};

// Compile, minify, and create sourcemaps for all TypeScript files and place
// them in wwwroot/app, together with their js.map files.
gulp.task('app', ['app_clean'], function () {
    return gulp.src(srcPaths.app)
        .pipe(gp_sourcemaps.init())
        .pipe(gp_typescript(require('./tsconfig.json').compilerOptions))
        .pipe(gp_uglify({ mangle: false }))
        .pipe(gp_sourcemaps.write('/'))
        .pipe(gulp.dest(destPaths.app));
});

// Delete wwwroot/app contents
gulp.task('app_clean', function () {
    return gulp.src(destPaths.app + "*", { read: false })
        .pipe(gp_clean({ force: true }));
});

// Copy all JS files from external libraries to wwwroot/js
gulp.task('js', function () {
    gulp.src(srcPaths.js_angular)
        .pipe(gulp.dest(destPaths.js_angular));
    gulp.src(srcPaths.js_rxjs)
        .pipe(gulp.dest(destPaths.js_rxjs));
    return gulp.src(srcPaths.js)
        // .pipe(gp_uglify({ mangle: false })) // disable uglify
        // .pipe(gp_concat('all-js.min.js')) // disable concat
        .pipe(gulp.dest(destPaths.js));
});

// Delete wwwroot/js contents
gulp.task('js_clean', function () {
    return gulp.src(destPaths.js + "*", { read: false })
        .pipe(gp_clean({ force: true }));
});

// Watch specified files and define what to do upon file changes
gulp.task('watch', function () {
    gulp.watch([srcPaths.app, srcPaths.js], ['app', 'js']);
});

// Global cleanup task
gulp.task('cleanup', ['app_clean', 'js_clean']);

// Define the default task so it will launch all other tasks
gulp.task('default', ['app', 'js', 'watch']);

As you can see, it hasn't changed much: we just added a bunch of JS files that we need to copy from the /node_modules/ folder into the wwwroot/js/ folder, the same one we were already using to host third-party libraries. There's nothing odd there; NPM package files are external libraries, after all. For this very reason, it is also preferable to refrain from uglifying and/or concatenating them. It's also worth noting that we defined a separate, dedicated folder for the Angular 2 and Rxjs packages, since both of them span multiple files.

Adding Typings

Before moving forward, there is another thing we need to take care of. Since we plan to transpile our TypeScript code into ECMAScript 5, we have added the core-js NPM module into our package.json file. In case you've never heard of it, let's just say that it happens to be a standard JavaScript library providing a great set of ES6 polyfills, which is precisely what we need. If you would like to know more about core-js, here's the URL to the project's official GitHub repository: https://github.com/zloirock/core-js.


The only problem is that it doesn't come with a proper TypeScript definition file, meaning that both of our TypeScript compilers - either Visual Studio's or Gulp's - won't be aware of its existence, thus throwing a fair amount of TS2304 (cannot find name) errors upon each build attempt. The best thing we can do in order to fix that is to add the proper type definitions to our project. To do that, open the package.json file again and add the typings package and the scripts node, as follows:

{
  "version": "1.0.0",
  "name": "opengamelistwebapp",
  "private": true,
  "dependencies": {
    "@angular/common": "2.0.0-rc.5",
    "@angular/compiler": "2.0.0-rc.5",
    "@angular/core": "2.0.0-rc.5",
    "@angular/http": "2.0.0-rc.5",
    "@angular/platform-browser": "2.0.0-rc.5",
    "@angular/platform-browser-dynamic": "2.0.0-rc.5",
    "@angular/upgrade": "2.0.0-rc.5",
    "core-js": "^2.4.1",
    "reflect-metadata": "^0.1.8",
    "rxjs": "5.0.0-beta.6",
    "systemjs": "^0.19.37",
    "typings": "^1.3.2",
    "zone.js": "^0.6.12"
  },
  "devDependencies": {
    "gulp": "^3.9.1",
    "gulp-concat": "^2.6.0",
    "gulp-sourcemaps": "^1.6.0",
    "gulp-typescript": "^2.13.6",
    "gulp-uglify": "^2.0.0",
    "typescript": "^1.8.10"
  },
  "scripts": {
    "postinstall": "typings install dt~core-js --global"
  }
}


We can see that there are two new things here:

• A new NPM package called typings, which is a TypeScript type definition manager; in other words, a tool we can use to retrieve type definitions from the web.
• A whole new scripts key containing a small command that will be executed during the post-install phase. This basically means that the script will trigger every time we change something within the package.json file, right after all the NPM modules are retrieved and installed (or removed).

As soon as we Save our package.json file, a new typings folder will be added to our OpenGameListWebApp project's root, containing the type definition file we need. If typings didn't install successfully during the Save, try to use the Restore Package option by right-clicking on the project's Dependencies node. Another way is to install the typings explicitly from the command line. To do this, navigate to the root folder of your app, press CTRL + SHIFT and right-click, then select Open command window here. In the command line, type the following command:

> npm run typings install

That should do the trick.

Using a CDN

At the time of writing, the only valid CDN hosting updated Angular 2 builds is npmcdn.com. If we want to use it, we can avoid updating our gulpfile.js file and wait until we are working on the index.html and systemjs.config.js files. Keep reading, as we'll get there shortly.

Upgrading the typescriptServices.js file

At the time of writing, adding Angular 2 via NPM would produce the following TypeScript compilation error: Invalid module name in augmentation, module '../../Observable' cannot be found. This is due to a known bug in the TypeScript version currently shipped with Visual Studio 2015. The most effective way to fix that is to replace this local VS2015 file:

C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\CommonExtensions\Microsoft\TypeScript\typescriptServices.js


With the following remote file:

https://raw.githubusercontent.com/Microsoft/TypeScript/Fix8518/lib/typescriptServices.js

Doing that will also fix a couple more errors we will most likely get in Chapter 3, Angular 2 Components and Client-Side Routing. Needless to say, it's highly advisable to make a backup of the original file before replacing it. Additional information regarding the bug is available through the following URLs: https://github.com/Microsoft/TypeScript/issues/8518 and https://github.com/Microsoft/TypeScript/pull/7507.

Setting up the client code

It's time to lay down the first bricks of our app: a working skeleton, used as a starting point for what we're going to do in the next chapters. It will consist of:

• An Angular 2 component file: Scripts/app/app.component.ts
• An Angular 2 module file: Scripts/app/app.module.ts
• An Angular 2 bootstrap file: Scripts/app/main.ts
• A module loader configuration file: wwwroot/systemjs.config.js
• An HTML file to wrap everything up: wwwroot/index.html

The component file

The component is the most basic and fundamental concept in Angular 2. Think of it as a class that controls a specific piece of a web page, where we can either display some data to each user and/or respond to their feedback. We can say that our Angular 2 app will be almost entirely built upon multiple Components serving specific purposes: most of them will be reusable, others will be used only once. They can also be as small as a few lines, or result in a ridiculously long piece of code.


Luckily, our first component is quite simple. In the Solution Explorer, right-click on the /Scripts/app/ folder and add a new app.component.ts file.

Once created, feed it with the following code:

import {Component} from "@angular/core";

@Component({
    selector: "opengamelist",
    template: `<h1>OpenGameList</h1><div>Work in progress...</div>`
})
export class AppComponent { }

That's it. Let's see what we just did here in detail:

• In the first line, we import the Component function from the Angular 2 main library, which is @angular/core. The Component function is what we need to define a Component's metadata for a class, which basically means being able to make Components; this is a required step for what we'll do next.
• With the @Component block, we create our first Component by applying the Component function to the class. In TypeScript, we do that by prefixing it with the @ symbol and invoking it just above the component class. In other words, the @Component instruction tells Angular that this class is an Angular component. Notice that the selector and template fields are passed in as a configuration object; we will analyze them soon enough.


• In the last line, we define the AppComponent class. Notice the export keyword, which will allow us to import it from other components. The act of exporting makes our app.component.js file a module.

A word on components and modules

Angular 2 is a modular framework: this means that Angular 2 apps are also modular, as they consist of many files dedicated to a specific purpose. Typically, most application files export a single Component class, which is a class bundled with component metadata. Our app.component.js file, for example, exports the AppComponent class decorated with its component metadata, thus being a Module file exporting a Component class. We could also say that the Component is the content, while the Module is the file itself. We'll see how to import Components in the next chapter.

The module file

Angular Modules, also known as NgModules, have been introduced in Angular 2 RC5, which is the latest version at the time of writing, and are a great and powerful way to organize and bootstrap any Angular 2 application. They help developers consolidate their own sets of components, directives, and pipes into reusable blocks. If you've already played with previous versions of Angular 2, you will find the following URL useful to understand the transition: https://angular.io/docs/ts/latest/cookbook/rc4-to-rc5.html.

Every Angular 2 application since RC5 must have at least one module, which is conventionally called the root module and given the AppModule class name. From the Solution Explorer, right-click on the /Scripts/app/ folder, add a new app.module.ts file, and fill it with the following code:

/// <reference path="../../typings/index.d.ts" />
import {NgModule} from "@angular/core";
import {BrowserModule} from "@angular/platform-browser";
import {HttpModule} from "@angular/http";
import "rxjs/Rx";

import {AppComponent} from "./app.component";

@NgModule({
    // directives, components, and pipes
    declarations: [
        AppComponent
    ],
    // modules
    imports: [
        BrowserModule,
        HttpModule
    ],
    // providers
    providers: [
    ],
    bootstrap: [
        AppComponent
    ]
})
export class AppModule { }

Again, let's take a look at what we just wrote:

• In line 1, we added a reference to the type definitions we fetched during the previous section, to ensure our TypeScript compiler(s) can find it. Notice that, if we're using a CDN or a pre-compiled version of Angular 2, we could (and should) definitely remove this line.
• In lines 2-4, we import the basic Angular 2 modules that we will need right from the start.
• In line 5, we import the rxjs library definition file(s), which will be useful to compile the Angular 2 libraries.
• In line 7, right after the first empty line, we import the application root component that we just wrote.
• In line 9, we declare our root NgModule. As we can see, it consists of a set of named arrays, each one containing the Angular 2 objects that serve a common purpose: directives, components, pipes, modules, and providers. The last one of them contains the component(s) we want to bootstrap, which in our case is the AppComponent one.

Working with the root module only is a very viable approach until the Angular 2 app grows to a certain size. When it becomes bigger, it will be more practical to refactor it into a number of feature modules, each one of them grouping together a set of related tasks.


The bootstrap file

Now that we have our root module, we need to bootstrap it. From the Solution Explorer, right-click on the /Scripts/app/ folder and add a new main.ts file, then fill it with the following code:

import {platformBrowserDynamic} from "@angular/platform-browser-dynamic";
import {AppModule} from "./app.module";

platformBrowserDynamic().bootstrapModule(AppModule);

    Now we're just missing an entry point to load with the browser. Let's add it right now.

The module loader configuration file

In this application, we're going to use the SystemJS module loader library to load our application and all the required Angular 2 modules and dependencies. In order to do that, we have to add a systemjs.config.js file to the wwwroot folder and define a number of configuration rules within it, as follows:

(function (global) {
    // map tells the System loader where to look for things
    var map = {
        'app': 'app',              // our application files
        '@angular': 'js/@angular', // angular2 packages
        'rxjs': 'js/rxjs'          // Rxjs package
    };

    // packages tells the System loader which filename and/or extensions
    // to look for by default (when none are specified)
    var packages = {
        'app': { main: 'main.js', defaultExtension: 'js' },
        'rxjs': { defaultExtension: 'js' }
    };

    // configure @angular packages
    var ngPackageNames = [
        'common',
        'compiler',
        'core',
        'http',
        'platform-browser',
        'platform-browser-dynamic',
        'upgrade'
    ];

    function packIndex(pkgName) {
        packages['@angular/' + pkgName] = { main: 'index.js', defaultExtension: 'js' };
    }

    function packUmd(pkgName) {
        packages['@angular/' + pkgName] = { main: '/bundles/' + pkgName + '.umd.js', defaultExtension: 'js' };
    };

    var setPackageConfig = System.packageWithIndex ? packIndex : packUmd;
    ngPackageNames.forEach(setPackageConfig);

    var config = {
        map: map,
        packages: packages
    };

    System.config(config);
})(this);

The code is pretty much self-documented with inline comments, yet it could be useful to highlight the most relevant tasks:

• The map variable hosts the three dynamic packages we're using SystemJS for, all of them relative to /wwwroot/: app for our application, js/@angular for Angular 2, and js/rxjs for Rxjs.
• The packages variable sets the default filename and/or extension values for each package. These will be used whenever we define an import statement without specifying them.
• The rest of the file is dedicated to dynamically loading the Angular 2 built-in packages.

If we want to use a CDN instead of relying upon the local JS folder, we only need to perform a minor update within the previous code, in the map section, as follows:

var map = {
    'app': 'app',                                   // our application files
    '@angular': 'js/@angular',                      // angular2 packages
    'rxjs': 'https://npmcdn.com/rxjs@5.0.0-beta.6'  // Rxjs package (CDN)
};


And also within the two module loader functions:

function packIndex(pkgName) {
    packages['https://npmcdn.com/' + pkgName] = { main: 'index.js', defaultExtension: 'js' };
}

function packUmd(pkgName) {
    packages['https://npmcdn.com/' + pkgName] = { main: '/bundles/' + pkgName + '.umd.js', defaultExtension: 'js' };
};

For further info regarding SystemJS and its Configuration API, including advanced options, we strongly suggest reading the official documentation on the project's GitHub page: https://github.com/systemjs/systemjs and https://github.com/systemjs/systemjs/blob/master/docs/config-api.md.

Why use a dynamic module loader?

Before going further, it might be useful to explain why we worked so hard with a module loader instead of adding all the relevant JS files into the index.html file right from the start. To keep it simple, we did it because it's the only way to efficiently handle any modern JavaScript modular system such as Angular 2, Rxjs, and also all applications based upon them, including the one we're working on right now. What's a modular system exactly? It's nothing more than a package, library, or application split into a series of smaller files that depend on each other using reference statements such as import, require, and so on. ASP.NET, Java, Python, and most compilation-based languages have one. That's not the case with script-based languages such as PHP and JavaScript: they are doomed to pre-load everything into memory before being able to determine whether they'll be using it or not. All of this changes with the introduction of ECMAScript 6 (ES6), which brings a fully-featured module and dependency management solution for JavaScript. SystemJS basically acts as an ES6 polyfill for browsers that don't support it already, allowing us to get that module system working in modern browsers. Since both Angular 2 and Rxjs leverage that dynamic-loading approach, implementing it within our project will result in a huge performance gain.
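To picture what that module system looks like in practice, here is a tiny, purely illustrative pair of TypeScript files (the file names and function are made up for the example); SystemJS resolves and fetches the dependency at runtime only when it's actually imported:

// greeter.ts - a module exporting a single function
export function greet(name: string): string {
    return `Hello, ${name}`;
}

// main.ts - imports only what it needs; the loader retrieves greeter.js on demand
import {greet} from "./greeter";
console.log(greet("OpenGameList"));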


Keep in mind that SystemJS is not the only choice we have to load Angular 2 packages: there are other good choices out there, for example, the popular module bundler known as webpack. Should we want to use that instead, here's a great guide for doing that: https://angular.io/docs/ts/latest/guide/webpack.html.

The index.html file

The HTML file serves two main purposes: being an entry point for the browser so it can load the client-script files and execute the application, and laying out the DOM structure used by Angular 2 to display it. In the Solution Explorer, right-click on the wwwroot folder and add a new index.html file, then fill it with markup along these lines: a <head> block with the OpenGameList title, followed by the script references to the library files we published to wwwroot/js (shim.min.js, zone.js, Reflect.js, system.src.js, and typescript.js), the systemjs.config.js configuration file, and a System.import('app') call, plus a <body> block containing an <opengamelist> element, which is the selector our root component will be rendered into.

Testing it up

Before going further, it's advisable to run a quick check in order to ensure that everything we've done so far is working as it should. Open the /Scripts/app/home.component.ts file and remove everything within the styles block so that it looks like the following:

styles: []

Alternatively, we could remove it entirely, as we're not going to use it anymore. Once done, run the application in Debug mode by hitting F5 and check whether the item-list components still have their CSS styles applied to them.

They definitely should, since we placed those rules in the style.less file, which, in its compiled .css form, should now be linked to our application.


DIY versus framework-based styling

Now that we have added a fully-functional style.less stylesheet file, we could also move all our custom styles there, just like we did with the home.component.ts file; the question is, are they really worth that much? Hardly, considering that we put them together for demonstration purposes only: a quick'n'dirty, temporary solution until we could commit ourselves to the upcoming styling phase. Now that the time has come, we shouldn't bother too much about preserving these samples; we need to think about switching from a full do-it-yourself approach to a framework-based one. Anyone who is into CSS design is well aware of such a debate, which we could summarize in the following single phrase: Should we build our own grid-based responsive layout or use a responsive design framework instead? The answer is not that simple, because either alternative has its own set of advantages. Let's try to perform a quick recap of the most relevant arguments.

Do-it-yourself approach

The most classic approach: we build our very own grid-based layout, featuring a custom set of resizing raster and/or vector images and icon files, following the responsive design good practices and guidelines issued in Ethan Marcotte's famous 2010 article published on the A List Apart blog (http://alistapart.com/article/responsive-web-design), as well as the subsequent, improved theories and patterns described in his brief book (Responsive Web Design, A Book Apart, 2011).

Pros

• Faster loading times, as we will be able to only code, add, and/or include what we need.
• Unique design (all framework-based websites are supposed to look the same).


Cons

• Can be quite hard to handle, unless we're true CSS3/MediaQuery experts.
• Slow development, due to the massive amount of required tests for all the existing platforms (browsers, operating systems, mobile devices).
• Hard to keep updated to the latest standards, since no one other than us will bother to test or improve that code.

Framework-based approach

The common approach nowadays: we start from a consolidated, widely-accepted UI frontend framework such as Bootstrap, Foundation, Pure, or YUI and customize it to suit our needs. The word customizing can mean a number of things here, from picking a skin to completely changing the structural behavior of most classes, depending on how much we want to customize the results and/or how much time we are allowed to spend doing that.

Pros

• Development speed: These frameworks are a time-saver, as we will be able to use reliable, cross-browser compatible code blocks instead of coding everything from scratch.
• Consistency: One of the biggest frontend framework achievements is that they make designers and developers speak the same language, as they will both be able to acknowledge, understand, and apply their changes to the project in a consistent way.
• Community support: Each framework has a huge support community, meaning that we'll receive free code samples, support, and updates for as long as we need to. This can be huge, especially if we want to achieve good results without having to commit too much into cross-browser, responsive CSS design.


Cons

• Limited knowledge: We didn't write that code, so we won't always be able to understand what we're doing and why the stuff we're using behaves the way it does.
• Performance heavy: Even the most lightweight and modular framework will undoubtedly be packed with a lot of stuff we won't be using in our project: these contents will be sent by the web server and loaded by the client anyway.
• Updating issues: Whenever an improved build of the framework is out, we will have to choose between updating it (and taking the risk of breaking something) and not updating it (and risking losing the bug fixes and the added/improved support for the new CSS standards).

Conclusions

As we can see, both ways could be viable depending on our specific scenario; therefore, the decision between going with a custom grid and adopting a framework-based one should be made on a case-by-case basis. That said, after our non-exhaustive analysis, we think that adopting a frontend framework might be a good call for our project; we'll also be picking Bootstrap for the task, since it happens to be one of the most suited for native web applications based on Angular 2, as we'll be able to see in the following paragraphs. It's worth noting that by choosing Bootstrap we're ditching a great alternative that will surely pave its way in the upcoming months: we're talking about material2, a top-notch component library based upon Material Design. The only reason we didn't pick it is that the project is still in alpha, yet there are already a couple of demos out that demonstrate its striking potential. If we're bold enough to try it, we can ditch this chapter entirely and learn our own way by looking at the official project page at https://material.angular.io/ or at the source code repository on GitHub: https://github.com/angular/material2.


Adding Bootstrap

Installing Bootstrap into our project is just as easy as adding a stylesheet link within the <head> block of our wwwroot/index.html file, such as the following (the exact href depends on the theme and build we pick, as explained below):

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootswatch/3.3.6/yeti/bootstrap.min.css" />

It's advisable to place it right above the style.css link, so we'll be able to override the Bootstrap default rules with our own style sheet file.

As we can see, we'll be using the Bootstrap 3 Yeti theme (specifically the 3.3.6 build, the latest stable one at the time of writing) by linking a pre-built, minified .css file hosted by MaxCDN, which is the primary content delivery network (CDN) for Bootstrap and Bootswatch. Using a CDN-hosted link for frontend frameworks is often a good practice and can be very advisable in most cases, as it will remove some weight from our web server for a relevant server-side performance benefit; however, it also has some relevant downsides, such as preventing us from modifying the source CSS (or LESS) file(s). If we feel like we'll be needing that, and/or if we don't want a CDN for other good reasons, we're also free to download the file locally and change the link so it will point to the project filesystem instead. The choice, as always, is entirely up to the developer. We're choosing to stick with v3 because Bootstrap 4, although almost ready, is currently still in a beta release; we want to adopt a stable, reliable, and widely-tested framework, so we'll stick to the latest final release instead.


We chose the Yeti theme because it features a minimalistic, yet suitable style that will nicely blend with our project; if you don't like it, you're free to choose another one from the Bootswatch project's page at http://bootswatch.com/. And here's a link to their primary CDN: https://www.bootstrapcdn.com/bootswatch/.

Choosing a native directives module

If we've already worked with Bootstrap, we know that placing a link to the .css file isn't going to be enough: if we want to use transitions, modals, tabs, date/time pickers, and a lot of advanced yet very useful components, we'll also have to add a reference to the plugin's JavaScript code. In standard web projects, it's very common to do that by adding some <script> references as well.



  • Home
  • About


  • Login / Register
  • Logout
  • Edit Account


All we did here was reposition the content we already had and add a couple of HTML comments to split these elements into three groups: the navigation commands accessible to everyone, those restricted to authenticated users only, and the account-related ones. We also added the right CSS class to the Login/Register and Logout elements as we moved them into the latter group.


Before moving on, let's hit F5 one last time to check how our improved menu looks.

Conclusions

Our sample SPA application is mostly done. We are perfectly aware of the fact that a number of relevant features are still missing, such as the ability to refresh JWT tokens, the View Comments/Insert Comment components, the support for user-uploaded item image files, and so on. However, it is undoubtedly a potentially shippable product that will most likely meet our product owner's expectations. All we have to do now is add some finishing touches and then publish it in a production environment, which is precisely what we're going to do in the next chapter.


Suggested topics

Template-driven forms, model-driven forms, FormBuilder, regex, Angular 2 Template Syntax, ternary operators, Media Queries.

Summary

Our native web application is slowly coming to an end, with most of its expected features ready and working. In this chapter, we implemented two of them that were still missing: a User Registration view, where new users can register themselves, and an Edit Account view they can use to change their e-mail address, password, and/or display name. For each one of them, we added a Web API interface using standard RESTful methods, a set of Angular 2 AuthService methods, and a client-side routing strategy. While we were there, we also made some cosmetic changes to the navigation menu items and their corresponding style sheets to give our visitors a better user experience.


10
Finalization and Deployment

Our valuable journey through ASP.NET Core Web API and Angular 2 development is coming to an end. The native web application we've been working on since Chapter 2, ASP.NET Controllers and Server-Side Routes, is now a potentially shippable product, ready to be published in a suitable environment for evaluation purposes. However, in order to do that, we need to give our project some finishing touches.

Switching to SQL Server

Although localDB proved itself to be a great development choice, it's not a good idea to use it in production. That's why we'll replace it with SQL Server. As for the chosen edition, we can go for Express, Web, Standard, or Enterprise, depending on what we need and/or can afford. For the sake of simplicity, we'll be using SQL Server 2016 Express Edition, which can be downloaded for free from the following official Microsoft URL: https://www.microsoft.com/en-US/server-cloud/products/sql-server-editions/sql-server-express.aspx

    Needless to say, we need to install it on a machine that is reachable from our web server via a Local Area Network (LAN).


Installing SQL Server 2016 Express

The installation process is pretty straightforward. Unless we need anything specific, we can just go for the basic installation type.

Eventually, we'll be prompted with an Installation Complete window, which will also give us some useful info, including the database instance name and a default connection string ready for a connection test.


Installing SQL Server Management Studio

From here, we can click the Install tools button and download SQL Server Management Studio, a tool that we can use to create the OpenGameList database and also a dedicated user that can access it. SQL Server Management Studio is a separate product, available for free download at the following URL: https://msdn.microsoft.com/en-us/library/mt238290.aspx


Configuring the database

Once we've downloaded and installed it, launch SQL Server Management Studio. We will be prompted by a Connect to Server modal window that will allow us to connect to our local SQL Server instance. To do this, select the Database Engine server type and then, from the Server name combo box, choose <Browse for more...>. Another pop-up window will appear, from which we'll be able to select the database engine we just installed on our server.

    As for the Authentication part, we can leave Windows Authentication, the default SQL Server authentication mode. When we're done, click on the Connect button and a Server Explorer window will appear, containing a tree view representing the structure of your SQL Server instance. This is the interface we'll use to create our database and also the user/password that our application will use to access it. If you have a strong knowledge of SQL Server, you might want to skip the following steps and configure your instance as you prefer; otherwise, keep reading.


Changing the authentication mode

The first thing we need to do is change the default SQL Server authentication mode, so we won't be forced to use an existing Windows account. To do so, right-click on the root tree view node, which represents our SQL Server instance, and select Properties from the contextual menu. From the modal window that appears, select the Security page, then switch from Windows Authentication mode to SQL Server and Windows Authentication mode.

Adding the OpenGameList database

Now we can create the database that will host our application's tables. Right-click on the Databases folder and choose Add Database from the contextual menu. Give it the OpenGameList name and click on OK.

Adding the OpenGameList login

Go back to the root Databases folder, then expand the Security folder, which should be just below it. From there, right-click on the Logins subfolder and choose New Login. Again, give it the OpenGameList name. From the radio button list below, select SQL Server Authentication and set a suitable password (for example, SamplePassword123), then click on OK.


    If you want a simpler password, such as OpenGameList, you might have to also disable the enforce password policy option.

Mapping the login to the database

The next thing we need to do is properly map this login to the OpenGameList database we added earlier. From the navigation menu to the left, switch to the User Mapping tab. Click on the checkbox next to the OpenGameList database, then write OpenGameList in the User cell and assign the db_owner membership role.

    As soon as we click on the OK button, a new OpenGameList user will be added to the OpenGameList database with full administrative rights.


We can easily confirm that by going back to the root Databases folder and expanding it to OpenGameList | Security | Users.

    That's it! Now we'll be able to access our brand new OpenGameList database with a standard connection string using the credentials we just created.

Adding a SQL Server connection string

Now that the SQL Server database has been set up, we need to tell our application to use it instead of localDb while in production. We can easily do that by adopting the ASP.NET Core default pattern for configuring application behavior across multiple environments.


To implement it within our project, we need to perform the following steps:

1. Create an appsettings.production.json file to override the localDb connection string with the SQL Server one when the application runs in a production environment.
2. Configure the publishOptions within the project.json file to publish these kinds of files.
3. Check that our Startup class is properly configured to load the application settings files using a cascading logic that privileges the current running environment.
4. Update the launchSettings.json file to ensure that the production environment will be set whenever we publish our project.

Creating an application settings file for production

Right-click on the project's root and select Add | New Item. Choose ASP.NET Configuration File, name it appsettings.production.json, and click on OK. An environment-specific version of the appsettings.json file will be created accordingly; Visual Studio should also nest it below the main configuration file.


From now on, any key/value pair we include within the appsettings.production.json file will override the corresponding key/value in the main configuration file whenever the application runs within a production environment. It means that we can use it to redefine a number of production-specific values there, including (yet not limited to) the default connection string. As a matter of fact, that's precisely what we need to do. Open the new appsettings.production.json file and replace the sample contents with the following code:

{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=localhost\\SQLEXPRESS;Database=OpenGameList;User Id=OpenGameList;Password=SamplePassword123;Integrated Security=False;MultipleActiveResultSets=True"
    }
  }
}

Updating AppSettings Another important thing we definitely need to override within the appsettings.production.json file is the Authority that will be used to issue (and check) our JWT Tokens. To do that, add the following lines right before the Data key we just added:

"Authentication": {
  "OpenIddict": {
    "Authority": "http://www.your-website-url.com/"
  }
},

Be sure to check the other settings before closing the file, changing them to suit a production environment. For example, we might want to switch to a less resource-intensive logging behavior:

"Logging": {
  "IncludeScopes": false,
  "LogLevel": {
    "Default": "Warning",
    "System": "Error",
    "Microsoft": "Error"
  }
},

We might also want to increase our caching policy for local files:

"StaticFiles": {
  "Headers": {
    "Cache-Control": "max-age=3600",
    "Pragma": "cache",
    "Expires": null
  }
}

    This will tell the browser client (and/or a proxy) to keep all the static files for one hour.
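These values only take effect if the application applies them to its static file responses. A minimal sketch of how that can be done through the static files middleware is shown below; the hard-coded header values mirror the JSON keys above, whereas the real project would read them from Configuration instead (this is an illustration, not the project's exact code):

app.UseStaticFiles(new StaticFileOptions()
{
    OnPrepareResponse = context =>
    {
        // Attach the caching headers to every static file served from wwwroot.
        context.Context.Response.Headers["Cache-Control"] = "max-age=3600";
        context.Context.Response.Headers["Pragma"] = "cache";
        context.Context.Response.Headers["Expires"] = "";
    }
});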

Updating external providers If we implemented one or more external providers (see Chapter 8, Third-Party Authentication and External Providers), we'll probably need to replace the redirect URIs we configured earlier and also add our public-facing URL (www.our-website-url.com) to the allowed JavaScript origins URL list. For a detailed guide about how to do that, we can refer to Chapter 8, Third-Party Authentication and External Providers. Although most providers allow setting multiple values for the allowed origin URIs, it's strongly advisable to add a whole new app, such as OpenGameList_Production, to use for production purposes. If you do that, you will also have to override the public and private keys, but you won't compromise your development environment.

Configuring the publishOptions We now need to update the root project.json file to include these files whenever we publish our application. Within that file, look for the publishOptions key and change the include array values to the following:

"publishOptions": {
  "include": [
    "wwwroot",
    "Views",
    "Areas/**/Views",
    "appsettings*.json",
    "web.config"
  ]
},

    This small change ensures that, whenever we publish our application, each and every appsettings file will be included as well.

Checking the Startup class The application settings file(s) loading logic can be customized within the constructor of the Startup class. The Startup.cs file bundled with the default ASP.NET Core project template that we chose back in Chapter 1, Getting Ready, already features a cascading logic in place that perfectly suits our needs. All we need to do is to ensure that it's still there. We can easily check that by opening the Startup.cs file and having a look at the two AddJsonFile method calls within the constructor:

public Startup(IHostingEnvironment env)
{
    var builder = new ConfigurationBuilder()
        .SetBasePath(env.ContentRootPath)
        .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
        .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
        .AddEnvironmentVariables();
    Configuration = builder.Build();
}

We can see how the first AddJsonFile call loads the default appsettings.json file, while the second looks for an (optional) environment-specific version containing the relevant value overrides. If everything is still in place, we don't need to change anything, since the existing behavior perfectly fits what we want.
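A quick, purely illustrative way to picture the cascading behavior: once the environment is set to Production, any value read through the Configuration object resolves against the production file first.

// Illustrative only: with ASPNETCORE_ENVIRONMENT set to "Production",
// this returns the SQL Server connection string from appsettings.production.json
// rather than the localDb one from appsettings.json.
var connectionString =
    Configuration["Data:DefaultConnection:ConnectionString"];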

    Updating the launchSettings.json Last but not least, we need to set up our app so that it will run in a production environment whenever we publish it.

To do that, open the /properties/launchSettings.json file and change the ASPNETCORE_ENVIRONMENT variable within our application's profile from Development to Production in the following way:

"OpenGameListWebApp": {
  "commandName": "Project",
  "launchBrowser": true,
  "launchUrl": "http://localhost:5000/api/values",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Production"
  }
}

    This will ensure the proper loading of the appsettings.production.json file.

Publishing our native web application Installing and/or configuring a production-ready web server, such as Internet Information Services (IIS) or Apache, can be a very complex task depending on a number of things that we can't address now, as they would easily bring us far beyond the scope of this book. To keep things simple, we'll just assume that we already have access to an up-and-running, physical or virtualized Windows Server machine featuring a running IIS instance (we'll call it web server from now on) that we can configure to suit our needs. Ideally, we'll be able to do that via a dedicated management interface such as Remote Desktop, IIS Remote Configuration, Plesk, or any other remote administration mechanism made available by our web farm and/or service provider. Windows Server 2008 R2 (or newer) and IIS 7.5 (or above) will be required to host a .NET Core web application, as stated by the official Microsoft publishing and deployment documentation available at the following URL: https://docs.asp.net/en/latest/publishing/iis.html

    Last but not least, we'll also assume that our web server provides FTP-based access to the /inetpub/ folder that we can use to publish our web projects. If we're facing a different scenario, it could be advisable to skip this chapter entirely and follow the instructions given by our chosen web hosting provider instead.

Creating a Publish profile The most convenient way to deploy a web-based project in Visual Studio is to create one or more Publish profiles. Each one of them is basically an XML configuration file with a .pubxml extension that contains a set of deployment-related information, most of which depends on the server/target we're deploying our application to (Azure, FTP, the filesystem, and more). We can easily set up one or more Publish profile files using the Publish profile wizard. As soon as we have at least one ready, we'll be able to execute it with a single mouse click and have our application published. To open the Publish profile wizard, right-click on the project's root node (OpenGameListWebApp in our case) and select the Publish... option from the contextual menu. A pop-up window should open showing the following welcome screen:

File System Publish profile Among the various available profiles, the easiest to configure is the one that builds everything into a dedicated folder within our local File System. To create it, follow these steps:

1. Select the Custom publish target.
2. Give a suitable name to the profile, such as Production-FileSystem.
3. Select the File System publish method.
4. When prompted for a Target Location, specify the path of the folder that will contain the published application. Visual Studio will suggest a path located within the application's /bin/Release/ subfolder. However, it's strongly advisable to change it and choose an external directory instead.
5. Set Configuration to Release.
6. Set the Target Framework according to what we used for our project (.NET Framework 4.6.1 in our example).

We can then click on the Publish button to start the publishing process. Once done, we'll have to manually upload the generated contents to the web server.

FTP Publish profile As a viable alternative, we can also create a Publish profile that will automatically upload our web project to our web server using a properly configured FTP connection endpoint; we will then link the remote destination folder to a new website project using IIS. As we said earlier, we're doing all that assuming that we have a web server accessible through FTP, since it's one of the most common deployment scenarios. If that's not the case, we might as well skip this paragraph and configure a different Publish profile. We already know how to launch the wizard. Right-click on the project's root node and select Publish. Wait for it to start, then do the following:

1. Select the Custom publish target.
2. Give a suitable name to the profile, such as Production-FTP.
3. Select the FTP publish method.
4. When prompted for a Server, specify the FTP server URL, such as ftp.your-ftp-server.com. In the Site Path option, insert the target folder from the FTP server root, such as /OpenGameList/.

5. Set the Passive Mode, Username, and Password according to our FTP server settings.
6. Set Configuration to Release.
7. Set the Target Framework according to what we used for our project (.NET Framework 4.6.1 in our example).

Right after that, the wizard's Connection tab should look not too different from the following screenshot:

    Configuring IIS We should now connect to our web server and set up our web application within IIS. As we said earlier, configuring a web application can be either a very easy or an insanely complex task depending on a number of things, such as caching, load balancing, CPU optimization, database load, and security issues. Although the most common issues will be briefly handled within this chapter, it's advisable to follow a dedicated guide to properly handle each one of them.

Installing the ASP.NET Core module for IIS We might think that IIS is the ideal platform to host ASP.NET Core applications, as it always has been since the first release of ASP.NET. As a matter of fact, it's not. ASP.NET Core web applications run via the highly optimized Kestrel server. Whenever we choose to host one of them with IIS, we basically need it to act as a reverse proxy for the underlying Kestrel server. The good news is that we don't need to configure anything by ourselves, because the ASP.NET Core template we used back in Chapter 1, Getting Ready, provided us with a root web.config file containing all the settings needed to do just that. The relevant configuration lines should be contained within the <system.webServer> element, which should resemble the following XML code:
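The following is the typical content produced by the standard template; the exact attribute values (and the %LAUNCHER_PATH% / %LAUNCHER_ARGS% placeholders) may differ slightly between template versions, so treat this as a reference sketch rather than a file to copy verbatim:

<system.webServer>
  <handlers>
    <!-- Routes every request to the ASP.NET Core module -->
    <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule"
         resourceType="Unspecified" />
  </handlers>
  <!-- Placeholders below are transformed into actual values upon publication -->
  <aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%"
              stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout"
              forwardWindowsAuthToken="false" />
</system.webServer>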





    We can see that we have a dedicated aspNetCore handler and some related configuration placeholders that will be transformed into actual values upon publication.

As we said earlier, we don't need to change anything, as the handler will do everything by itself, assuming that it is installed on the web server. Since ASP.NET Core is a rather new technology, this might well not be the case, so we may need to download and install it. At the time of writing, we need to obtain the .NET Core Windows Server Hosting bundle, which conveniently includes all the required packages to host a .NET Core application on an IIS-powered server machine: the .NET Core Runtime, the .NET Core Library, the ASP.NET Core module, and also the required reverse proxy between IIS and the Kestrel server. The bundle can be downloaded from the following URL: https://go.microsoft.com/fwlink/?LinkId=817246

For further references regarding ASP.NET Core IIS publishing settings, it's strongly advised to check out this official guide: https://docs.asp.net/en/latest/publishing/iis.html#iis-configuration. A (mostly) complete list of all the available .NET Core related packages (SDK, IIS module, and more) is also available at the following URL: https://www.microsoft.com/net/download

    Adding the website As soon as we install the .NET Core Windows Server Hosting bundle, we'll be able to configure our IIS instance to host our application. As we said earlier in this chapter, to host ASP.NET Core web applications, we're going to need IIS 7.5 or above.

From the IIS Manager interface, right-click on Sites and choose the Add New Website option. Name it OpenGameList. By looking at the read-only textbox to the immediate right, we can see that a new Application Pool will also be created with that same name. Take a mental note of it, as we'll need to configure it soon enough. Set the physical path of the Content Directory to the folder we targeted for FTP publishing. In our previous example, it should be something like C:\inetpub\OpenGameList, assuming that the FTP root for the web admin user points to C:\inetpub.

    Be sure to target the application's root folder, not the wwwroot one. Needless to say, we need to grant read/write permissions for that folder to the IUSR and/or IIS_IUSRS accounts, or any other identity our Application Pool is using. As for the bindings, either choose a specific IP address or leave the All Unassigned option and choose a Host name that is already configured to redirect to our web server via DNS:

    In our example, we already set up http://www.opengamelist.com, so we'll just use that.

    Before clicking on the OK button, ensure that the Start Website immediately option is checked, so the website will be immediately available. We're assuming that the server comes with the .NET Framework installed, as it's a default package with all the latest Windows Server versions. In case it doesn't, we can manually install it either via Server Manager, Web Platform Installer, or Windows Update.

    Configuring the Application Pool We can now switch to the Application Pools node. Select the OpenGameList one, the same that we created a short while ago, and set the .NET CLR version to No Managed Code:

This might seem rather counterintuitive, as it looks like we're ruling out ASP.NET. As a matter of fact, it's the exact opposite: since we're publishing an ASP.NET Core application, we need to ensure that the soon-to-be-outdated .NET 4 CLR won't get in the way. Since the new Core CLR is still (at the time of writing) in a pre-release state, it isn't available yet within the GUI, leaving us with no option other than removing any CLR reference here. We already configured the ASP.NET Core module for IIS to do the required job anyway. This is one of the many things that will surely change in the future. There is a good chance that, by the time you're reading this book, the new CLR will be integrated within the Application Pool GUI settings.

Firing up the engine It's time to publish our native web application. Before doing that, ensure that the Task Runner default task is running, as we want to upload the latest version of our client files. Right-click on the project's root node, then left-click on Publish. Select the Production-FTP profile and hit the Publish button to start the build and upload process. The whole publishing process flow can be checked in real time within the Visual Studio Output window. As soon as the FTP connection is attempted, we'll be asked for a username and password, unless we gave our consent to store our login credentials within the Production-FTP.pubxml file.

The publishing task will take a lot more time than it did for ASP.NET 4 applications, because the whole set of ASP.NET Core libraries will be published to the destination folder, together with all the required Angular 2 modules and dependencies. There's nothing to worry about, as this is another thing that will surely change once the ASP.NET Core framework reaches its final stage.

    Once done, our default web browser will be automatically launched against the URL we specified within the Publish profile settings.

    If everything has been set up properly, our native web application will show itself in all its splendor:

Troubleshooting ...Or maybe not. The deployment task isn't always easy, as there could be a number of issues (mostly depending on the server machine state) that can prevent it from going well. This is particularly true for IIS-based deployments of ASP.NET Core applications, as the reverse proxy mechanism undeniably adds an additional level of complexity. These are the three best things we can do to diagnose the most common problems:

1. Read the browser's output messages.
2. Examine the Event Viewer's application log.
3. Enable the ASP.NET Core module stdout logging feature.

The first one is rather obvious: who doesn't look at the browser output? However, for ASP.NET Core applications, it's far less effective than it used to be, since most errors remain unhandled and won't appear there. The Event Viewer is often underestimated, yet it's very important for debugging ASP.NET Core applications for the same reason as before: we will find most of the relevant stuff there. The ASP.NET Core module log is a new feature brought by the new CLR. However, when it comes to troubleshooting issues, it happens to be the real deal. Activating it is just as easy as opening the root web.config file and changing the stdoutLogEnabled attribute from false to true. We also need to manually create a /logs/ folder inside the root application folder on the web server, otherwise the logs won't be generated. The log folder location and filename prefix can be configured by changing the stdoutLogFile attribute value. Remember to manually create the chosen folder whenever you change it and also to grant read/write permissions to the identity used by the Application Pool.
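With the standard template's web.config, the relevant line ends up looking roughly like this (the paths and placeholders shown are the template defaults and may differ in your project):

<aspNetCore processPath="%LAUNCHER_PATH%" arguments="%LAUNCHER_ARGS%"
            stdoutLogEnabled="true" stdoutLogFile=".\logs\stdout" />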

    The Kestrel test A quick and effective way to check if the application is working properly is to entirely skip IIS and run it directly on Kestrel. Doing this is just as easy as opening the application folder on the Web Server, locating the OpenGameList.exe file, and executing it with (or without) administrative rights.

Doing this will open a command prompt where we'll be able to see the whole application bootstrap process. Once it completes, we should be able to test the application by opening a web browser and pointing it to http://localhost:5000/, 5000 being the default TCP listening port for Kestrel as defined in the /properties/launchSettings.json file:

    If the application completes its boot phase and starts running, the issue is most likely related to the IIS configuration and/or the ASP.NET Core module; otherwise, there's a good chance that our problem lies within the application code itself. If that's the case, checking the Event Viewer and the aforementioned stdout logs will be our best weapons to identify and overcome the issue.

    Suggested topics LAN, SQL Server 2016, SQL Server Management Studio, Windows Server, IIS, Apache, FTP server, Publish profile, ASP.NET Core module for IIS, ASP.NET 5, .NET CLR v4, Kestrel, stdout log.

    Summary Eventually, our journey through ASP.NET Core MVC/Web API and Angular 2 has come to an end. Our last effort was getting our native web application ready for being published into a production environment, where it can be checked by the product owner as the potentially shippable product it now definitely is. The first thing we did was changing the underlying database from localDb to a real SQL Server instance. For the sake of simplicity, we chose to install SQL Server 2016 Express, which is freely available for download from the Microsoft Download Center. We briefly installed it, together with the SQL Server Management Studio tools, then we used the latter to properly configure the former: creating the database, adding the login credentials, and doing what it takes to make our application able to connect using a standard connection string. We also took advantage of the ASP.NET Core default pattern to handle multiple environments by defining an application settings file for production, which we used to override the localDb connection string with the SQL Server one. The next step was creating a Publish profile for our Visual Studio project. We evaluated two alternatives, File System and FTP, each one of them being viable or not depending on our own deployment scenario. Eventually, we switched to the Web Server, where we found out that configuring IIS was not as easy as it used to be for ASP.NET 4 and below because the new CLR isn't fully integrated within the GUI yet. We had to install the ASP.NET Core module for IIS which does all the required jobs, making IIS act like a reverse proxy for the underlying Kestrel server. Right after that, we were able to create our website entry together with its related Application Pool. Once we did all that, we were able to actually publish our native web application and watch the result on the web browser. In the event that something didn't go as expected, we took some time to analyze the most common troubleshooting issues and give some useful advice to overcome them.

    Module 3

    ASP.NET Core 1.0 High Performance Create fast, scalable, and high performance applications

    1

Why Performance Is a Feature This is an exciting time to be a C# developer. Microsoft is in the middle of one of the biggest changes in its history, and it is embracing open source software. The ASP.NET and .NET frameworks are being rebuilt from the ground up to be componentized, cross-platform, and open source. ASP.NET Core 1.0 and .NET Core 1.0 (previously called ASP.NET 5 and .NET Core 5) embrace many ideas from popular open source projects, such as Go's ability to produce a statically-linked, standalone binary. You can now compile a single native executable that is free of any external dependencies and run it on a system without .NET installed. The ASP.NET Model View Controller (MVC) web application framework, which is now part of ASP.NET Core 1.0, borrows heavily from Ruby on Rails, and Microsoft is keen on promoting tools such as Node.js, Grunt, gulp, and Yeoman. There is also TypeScript, which is a statically-typed version of JavaScript that was developed at Microsoft. By reading this book, you will learn how to write high-performance software using these new .NET Core technologies. You'll be able to make your web applications responsive to input and scalable to demand. We'll focus on the latest Core versions of .NET. Yet, many of these techniques also apply to previous versions, and they will be useful for web application development in general (in any language or framework). Understanding how all of these new frameworks and libraries fit together can be a bit confusing. We'll present the various available options while still using the newest technology, guiding you down the path to high-speed success, and avoiding performance pitfalls.

After finishing this book, you will understand what problems can occur when web applications are deployed at scale (to distributed infrastructure) and know how to avoid or mitigate these issues. You will gain the experience of how to write high-performance applications without learning about issues the hard way. In this chapter, we will cover the following topics:

Performance as a feature
The common classes of performance issues
Basic hardware knowledge
Microsoft tools and alternatives
New .NET naming and compatibility

    Performance as a feature You may have previously heard about the practice of treating performance as a first-class feature. Traditionally, performance (along with things such as security, availability and uptime) was only considered a Non-Functional Requirement (NFR) and usually had some arbitrary made-up metrics that needed to be fulfilled. You may have heard the term "performant" before. This is the quality of performing well and, often, is captured in requirements without quantification, providing very little value. It is better to avoid this sort of corporate jargon when corresponding with clients or users. Using the outdated waterfall method of development, these NFRs were inevitably left until the end, and dropped from an over-budget and late project in order to get the functional requirements completed. This resulted in a substandard product that was unreliable, slow, and often insecure (as reliability and security are also often neglected NFRs). Think about how many times you're frustrated at software that lags behind in responding to your input. Perhaps, you used a ticket-vending machine or a self-service checkout that is unresponsive to the point of being unusable. There is a better way. By treating performance as a feature and considering it at every stage of your agile development process, you can get users and customers to love your product. When software responds quicker than a user can perceive, it is a delight to use, and this doesn't slow them down. When there is noticeable lag, then users need to adjust their behavior to wait for the machine instead of working at their own pace.

    Computers have incredible amounts of processing power today, and they now possess many more resources than they did even just a few years ago. So, why do we still have software that is noticeably slow at responding, when computers are so fast and can calculate much quicker than people can? The answer to this is poorly written software that does not consider performance. Why does this happen? The reason is that often the signs of poor performance are not visible in development, and they only appear when deployed. However, if you know what to look for, then you can avoid these problems before releasing your software to the production environment. This book will show you how to write software that is a joy to use and never keeps the user waiting or uninformed. You will learn how to make products that users will love instead of products that frustrate them all the time.

    Common classes of performance problems Let's take a look at some common areas of performance problems and see whether they matter or not. We will also learn why we often miss these issues during development.

    Language considerations People often focus on the speed of the programming language that is used. However, this often misses the point. This is a very simplistic view that glosses over the nuances of technology choices. It is easy to write slow software in any language. With the huge amounts of processing speed that is available today, relatively "slow" interpreted languages can often be fast enough, and the increase in development speed is worth it. It is important to understand the arguments and the trade-offs involved even if by reading this book you have already decided to use C# and .NET. The way to write the fastest software is to get down to the metal and write in assembler (or even machine code). This is extremely time consuming, requires expert knowledge, and ties you to a particular processor architecture and instruction set; therefore, we rarely do this these days. If this happens, then it's only done for very niche applications (such as virtual reality games, scientific data crunching, and sometimes embedded devices) and usually only for a tiny part of the software. The next level of abstraction up is writing in a language, such as Go, C, or C++, and compiling the code to run on the machine. This is still popular for games and other performance-sensitive applications, but you often have to manage your own memory (which can cause memory leaks or security issues, such as buffer overflows).

    A level above is software that compiles to an intermediate language or byte code and runs on a virtual machine. Examples of this are Java, Scala, Clojure, and, of course, C#. Memory management is normally taken care of, and there is usually a Garbage Collector (GC) to tidy up unused references (Go also has a GC). These applications can run on multiple platforms, and they are safer. However, you can still get near to native performance in terms of execution speed. Above these are interpreted languages, such as Ruby, Python, and JavaScript. These languages are not usually compiled, and they are run line-by-line by an interpreter. They usually run slower than a compiled language, but this is often not a problem. A more serious concern is catching bugs when using dynamic typing. You won't be able to see an error until you encounter it, whereas many errors can be caught at compile time when using statically-typed languages. It is best to avoid generic advice. You may hear an argument against using Ruby on Rails, citing the example of Twitter having to migrate to Java for performance reasons. This may well not be a problem for your application, and indeed having the popularity of Twitter would be a nice problem to have. A bigger concern when running Rails may be the large memory footprint, making it expensive to run on cloud instances. This section is only to give you a taste, and the main lesson is that normally, language doesn't matter. It is not usually the language that makes a program slow, it's poor design choices. C# offers a nice balance between speed and flexibility that makes it suitable for a wide range of applications, especially server-side web applications.

Types of performance problems There are many types of performance problems, and most of them are independent of the programming language that is used. A lot of these result from how the code runs on the computer, and we will cover the impact of this later on in the chapter. We will briefly introduce common performance problems here and will cover them in more detail in later chapters of this book. Issues that you may encounter will usually fall into a few simple categories, including the following:

Latency:
    Memory latency
    Network latency
    Disk and I/O latency
    Chattiness / handshakes

Bandwidth:
    Excessive payloads
    Unoptimized data
    Compression

Computation:
    Working on too much data
    Calculating unnecessary results
    Brute forcing algorithms

Doing work in the wrong place:
    Synchronous operations that could be done offline
    Caching and coping with stale data

When writing software for a platform, you are usually constrained by two resources. These are the computation processing speed and accessing remote (to the processor) resources. Processing speed is rarely a limiting factor these days, and it can be traded for other resources, for example, compressing some data to reduce the network transfer time. Accessing remote resources, such as main memory, disk, and the network, will have various time costs. It is important to understand that speed is not a single value, and it has multiple parameters. The most important of these are bandwidth and, crucially, latency. Latency is the lag in time before the operation starts, whereas bandwidth is the rate at which data is transferred once the operation starts. Posting a hard drive has a very high bandwidth, but it also has very high latency. This would make it very slow to send lots of text files back and forth, but perhaps it is a good choice to send a large batch of 3D videos (depending on the Weissman score). A mobile phone data connection may be better for the text files. Although this is a contrived example, the same concerns are applicable to every layer of the computing stack, often with similar orders of magnitude in time difference. The problem is that the differences are too quick to perceive, and we need to use tools and science to see them. The secret to solving performance problems is in gaining a deeper understanding of the technology and knowing what happens at the lower levels. You should appreciate what the framework is doing with your instructions at the network level. It's also important to have a basic grasp of how these commands run on the underlying hardware, and how they are affected by the infrastructure that they are deployed to.

    When performance matters Performance is not always important in every situation. Learning when performance does and doesn't matter is an important skill to acquire. A general rule of thumb is that if the user has to wait for something to happen, then it should perform well. If this is something that can be performed asynchronously, then the constraints are not as strict, unless an operation is so slow that it takes longer than the time window for it; for example, an overnight batch job on an old financial services mainframe. A good example from a web application standpoint is rendering user view versus sending e-mail. It is a common, yet naïve, practice to accept a form submission and send an e-mail (or worse, many e-mails) before returning the result. Yet, unlike a database update, an email is not something that happens almost instantly. There are many stages over which we have no control that will delay an e-mail in reaching a user. Therefore, there is no need to send an e-mail before returning the result of the form. You can do this offline and asynchronously after the result of the form submission is returned. The important thing to remember here is that it is the perception of performance that matters and not absolute performance. It can be better to not do some work (or at least defer it) rather than speed it up. This may be counterintuitive, especially considering how individual computer operations can be too quick to perceive. However, the multiplying factor is scale. One operation may be relatively quick, but millions of them may accumulate to a visible delay. Optimizing these will have a corresponding effect due to the magnification. Improving code that runs in a tight loop or for every user is better than fixing a routine that runs only once a day.

    Slower is sometimes better In some situations, processes are designed to be slow, and this is essential to their operation and security. A good example of this, which may be hit in profiling, is password hashing or key stretching. A secure password hashing function should be slow so that the password, which (despite being bad practice) may have been reused on other services, is not easily recovered. We should not use generic hashing functions, such as MD5, SHA1, and SHA256, to hash passwords because they are too quick. Some better algorithms that are designed for this task are PBKDF2 and bcrypt, or even Argon2 for new projects. Always remember to use a unique salt per password too. We won't go into any more details here, but you can clearly see that speeding up password hashing would be bad, and it's important to identify where to apply optimizations.
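As a rough illustration, the long-standing PBKDF2 implementation in the .NET base class library can be used along the following lines (the iteration count and output size here are purely illustrative, not a recommendation):

using System.Security.Cryptography;

public static byte[] HashPassword(string password, byte[] salt)
{
    // PBKDF2: deliberately slow by design; raising the iteration count
    // makes brute-forcing leaked hashes more expensive.
    using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000))
    {
        return pbkdf2.GetBytes(32); // 256-bit derived key
    }
}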

    Why issues are missed One of the main reasons that performance issues are not noticed in development is that some problems are not perceivable on a development system. Issues may not occur until latency increases. This may be because a large amount of data was loaded into the system and retrieving a specific record takes longer. This may also be because each piece of the system is deployed to a separate server, increasing the network latency. When the number of users accessing a resource increases, then the latency will also increase. For example, we can quickly insert a row into an empty database or retrieve a record from a small table, especially when the database is running on the same physical machine as the web server. When a web server is on one virtual machine and the big database server is on another, then the time taken for this operation can increase dramatically. This will not be a problem for one single database operation, which appears just as quick to a user in both cases. However, if the software is poorly written and performs hundreds or even thousands of database operations per request, then this quickly becomes slow. Scale this up to all the users that a web server deals with (and all of the web servers) and this can be a real problem. A developer may not notice that this problem exists if they're not looking for it, as the software performs well on their workstation. Tools can help in identifying these problems before the software is released.

    Measuring The most important takeaway from this book is the importance of measuring. You need to measure problems or you can't fix them. You won't even know when you have fixed them. Measurement is the key to fixing performance issues before they become noticeable. Slow operations can be identified early on, and then they can be fixed. However, not all operations need optimizing. It's important to keep a sense of perspective, but you should understand where the chokepoints are and how they will behave when magnified by scale. We'll cover measuring and profiling in the next chapter.

    The benefits of planning ahead By considering performance from the very beginning, it is cheaper and quicker to fix issues. This is true for most problems in software development. The earlier you catch a bug, the better. The worst time to find a bug is once it is deployed and then being reported by your users.

    Performance bugs are a little different when compared to functional bugs because often, they only reveal themselves at scale, and you won't notice them before a live deployment unless you go looking for them. You can write integration and load tests to check performance, which we will cover later in this book.

Understanding hardware Remember that there is a computer in computer science. It is important to understand what your code runs on and the effects that this has; this isn't magic.

Storage access speeds Computers are so fast that it can be difficult to understand which operation is a quick operation and which is a slow one. Everything appears instant. In fact, anything that happens in less than a few hundred milliseconds is imperceptible to humans. However, certain things are much faster than others are, and you only get performance issues at scale when millions of operations are performed in parallel. There are various different resources that can be accessed by an application, and a selection of these are listed, as follows:

CPU caches and registers:
    L1 cache
    L2 cache
    L3 cache

RAM

Permanent storage:
    Local Solid State Drive (SSD)
    Local Hard Disk Drive (HDD)

Network resources:
    Local Area Network (LAN)
    Regional networking
    Global internetworking

Virtual Machines (VMs) and cloud infrastructure services could add more complications. The local disk that is mounted on a machine may in fact be a shared network disk and respond much slower than a real physical disk that is attached to the same machine. You may also have to contend with other users for resources.

    In order to appreciate the differences in speed between the various forms of storage, consider the following graph. This shows the time taken to retrieve a small amount of data from a selection of storage mediums:

    This graph has a logarithmic scale, which means that the differences are very large. The top of the graph represents one second or one billion nanoseconds. Sending a packet across the Atlantic Ocean and back takes roughly 150 milliseconds (ms) or 150 million nanoseconds (ns), and this is mainly limited by the speed of light. This is still far quicker than you can think about, and it will appear instantaneous. Indeed, it can often take longer to push a pixel to a screen than to get a packet to another continent. The next largest bar is the time that it takes a physical HDD to move the read head into position to start reading data (10 ms). Mechanical devices are slow. The next bar down is how long it takes to randomly read a small block of data from a local SSD, which is about 150 microseconds. These are based on Flash memory technology, and they are usually connected in the same way as a HDD. The next value is the time taken to send a small datagram of 1 KB (1 kilobyte or 8 kilobits) over a gigabit LAN, which is just under 10 microseconds. This is typically how servers are connected in a data center. Note how the network itself is pretty quick. The thing that really matters is what you are connecting to at the other end. A network lookup to a value in memory on another machine can be much quicker than accessing a local drive (as this is a log graph, you can't just stack the bars).

    This brings us on to main memory or RAM. This is fast (about 100 ns for a lookup), and this is where most of your program will run. However, this is not directly connected to the CPU, and it is slower than the on die caches. RAM can be large, often large enough to hold all of your working dataset. However, it is not as big as disks can be, and it is not permanent. It disappears when the power is lost. The CPU itself will contain small caches for data that is currently being worked on, which can respond in less than 10 ns. Modern CPUs may have up to three or even four caches of increasing size and latency. The fastest (less than 1 ns to respond) is the Level 1 (L1) cache, but this is also usually the smallest. If you can fit your working data into these few MB or KB in caches, then you can process it very quickly.

    Scaling approach changes For many years, the speed and processing capacity of computers increased at an exponential rate. This was known as Moore's Law, named after Gordon Moore of Intel. Sadly, this era is no Moore (sorry). Single-core processor speeds have flattened out, and these days increases in processing ability come from scaling out to multiple cores, multiple CPUs, and multiple machines (both virtual and physical). Multithreaded programming is no longer exotic, it is essential. Otherwise, you cannot hope to go beyond the capacity of a single core. Modern CPUs typically have at least four cores (even for mobiles). Add in a technology such as hyper-threading, and you have at least eight logical CPUs to play with. Naïve programming will not be able to fully utilize these. Traditionally, performance (and redundancy) was provided by improving the hardware. Everything ran on a single server or mainframe, and the solution was to use faster hardware and duplicate all components for reliability. This is known as vertical scaling, and it has reached the end of its life. It is very expensive to scale this way and impossible beyond a certain size. The future is in distributed-horizontal scaling using commodity hardware and cloud computing resources. This requires that we write software in a different manner than we did previously. Traditional software can't take advantage of this scaling like it can easily use the extra capabilities and speed of an upgraded computer processor. There are many trade-offs that have to be made when considering performance, and it can sometimes feel like more of a black art than a science. However, taking a scientific approach and measuring results is essential. You will often have to balance memory usage against processing power, bandwidth against storage, and latency against throughput.

    An example is deciding whether you should compress data on the server (including what algorithms and settings to use) or send it raw over the wire. This will depend on many factors, including the capacity of the network and the devices at both ends.

    Tools and costs Licensing of Microsoft products has historically been a minefield of complexity. You can even sit for an official exam on it and get a qualification. Microsoft's recent move toward open source practices is very encouraging, as the biggest benefit of open source is not the free monetary cost but that you don't have to think about the licensing costs. You can also fix issues, and with a permissive license (such as MIT), you don't have to worry about much. The time costs and cognitive load of working out licensing implications now and in the future can dwarf the financial sums involved (especially for a small company or startup).

    Tools Despite the new .NET framework being open source, many of the tools are not. Some editions of Visual Studio and SQL Server can be very expensive. With the new licensing practice of subscriptions, you will lose access if you stop paying, and you are required to sign in to develop. Previously, you could keep using existing versions licensed from a Microsoft Developer Network (MSDN) or BizSpark subscription after it expired and you didn't need to sign in. With this in mind, we will try to stick to the free (community) editions of Visual Studio and the Express version of SQL Server unless there is a feature that is essential to the lesson, which we will highlight when it occurs. We will also use as many free and open source libraries, frameworks, and tools as possible. There are many alternative options for lots of the tools and software that augments the ASP.NET ecosystem, and you don't just need to use the default Microsoft products. This is known as the ALT.NET (alternative .NET) movement, which embraces practices from the rest of the open source world.

    Looking at some alternative tools For version control, git is a very popular alternative to Team Foundation Server (TFS). This is integrated into many tools (including Visual Studio) and services, such as GitHub or GitLab. Mercurial (hg) is also an option. However, git has gained the most developer mindshare. Visual Studio Online offers both git and TFS integration. PostgreSQL is a fantastic open source relational database, and it works with many Object Relational Mappers (O/RMs), including Entity Framework (EF) and NHibernate. Dapper is a great, and high-performance, alternative to EF and other bloated O/RMs. There are plenty of NoSQL options that are available too; for example, Redis and MongoDB. Other code editors and Integrated Development Environments (IDEs) are available, such as Visual Studio Code by Microsoft, which also works on Apple Mac OS X. ASP.NET Core 1.0 (previously ASP.NET 5) runs on Linux (on Mono and CoreCLR). Therefore, you don't need Windows (although Nano Server may be worth investigating). RabbitMQ is a brilliant open source message queuing server that is written in Erlang (which WhatsApp also uses). This is far better than Microsoft Message Queuing (MSMQ), which comes with Windows. Hosted services are readily available, for example, CloudAMQP. The author has been a long time Mac user (since the PowerPC days), and he has run Linux servers since well before this. It's positive to see OS X become popular and to observe the rise of Linux on Android smartphones and cheap computers, such as the Raspberry Pi. You can run Windows 10 on a Raspberry Pi 2 and 3, but this is not a full operating system and only meant to run Internet of Things (IoT) devices. Having used Windows professionally for a long time, developing and deploying with Mac and Linux, and seeing what performance effects this brings is an interesting opportunity. Although not open source (or always free), it is worth mentioning JetBrains products. TeamCity is a very good build and Continuous Integration (CI) server that has a free tier. ReSharper is an awesome plugin for Visual Studio, which will make you a better coder. They're also working on a C# IDE called Project Rider that promises to be good. There is a product called Octopus Deploy, which is extremely useful for the deployment of .NET applications, and it has a free tier. Regarding cloud services, Amazon Web Services (AWS) is an obvious alternative to Azure, even if the AWS Windows support leaves something to be desired. There are many other hosts available, and dedicated servers can often be cheaper for a steady load if you don't need the dynamic scaling of the cloud. Much of this is beyond the scope of this book, but you would be wise to investigate some of these tools. The point is that there is always a choice about how to build a system from the huge range of components available, especially with the new version of ASP.NET.

    The new .NET The new ASP.NET and the .NET Framework that it relies upon were rewritten to be open source and cross-platform. This work was called ASP.NET 5 while in development, but this has since been renamed to ASP.NET Core 1.0 to reflect that it's a new product with a stripped down set of features. Similarly, .NET Core 5 is now .NET Core 1.0, and Entity Framework 7 is now Entity Framework Core 1.0. The web application framework that was called ASP.NET MVC has been merged into ASP.NET Core, although it's a package that can be added like any other dependency. The latest version of MVC is 6 and, along with Web API 2, this has been combined into a single product, called ASP.NET Core. MVC and Web API aren't normally referred to directly any more as they are simply NuGet packages that are used by ASP.NET Core. Not all features are available in the new Core frameworks yet, and the focus is on server-side web applications to start with. All these different names can be perplexing, but naming things is hard. A variation of Phil Karlton's famous quote goes like this: "There are only two hard things in Computer Science: cache invalidation, naming things, and off-by-one errors." We've looked at naming here, and we'll get to caching later on in this book. It can be a little confusing understanding how all of these versions fit together. This is best explained with a diagram like the following, which shows how the layers interact:

    ASP.NET Core 1.0 can run against the existing .NET Framework 4.6 or the new .NET Core 1.0 framework. Similarly, .NET Core can run on Windows, Mac OS X, and Linux, but the old .NET only runs on Windows. There is also the Mono framework, which has been omitted for clarity. This was a previous project that allowed .NET to run on multiple platforms. Mono was recently acquired by Microsoft, and it was open sourced (along with other Xamarin products). Therefore, you should be able to run ASP.NET Core using Mono on any supported operating system. .NET Core focuses on web-application development and server-side frameworks. It is not as feature filled as the existing .NET Framework. If you write native-graphical desktop applications, perhaps using Windows Presentation Foundation (WPF), then you should stick with .NET 4.6. As this book is mainly about web-application development, we will use the latest Core versions of all software. We will investigate the performance implications of various operating systems and architectures. This is particularly important if your deployment target is a computer, such as the Raspberry Pi, which uses a processor with an ARM architecture. It also has limited memory, which is important to consider when using a managed runtime that includes garbage collection, such as .NET.

    Summary Let's sum up what we covered in this introductory chapter and what we will cover in the next chapter. We introduced the concept of treating performance as a feature, and we covered why this is important. We also briefly touched on some common performance problems and why we often miss them in the software development process. We'll cover these in more detail later on in this book. We showed the performance differences between various types of storage hardware. We highlighted the importance of knowing what your code runs on and, crucially, what it will run on when your users see it. We talked about how the process of scaling systems has changed from what it used to be, how scaling is now performed horizontally instead of vertically, and how you can take advantage of this in the architecting of your code and systems. We showed you the tools that you can use and the licensing implications of some of them. We also explained the new world of .NET and how these latest frameworks fit in with the stable ones. We touched upon why measurement is vitally important. In the next chapter, we'll expand on this and show you how to measure your software to see whether it's slow.

    2

Measuring Performance Bottlenecks Measurement is the most crucial aspect of building high performance systems. You can't change what you can't measure, because you won't know what effect your change has had, if any. Without measuring your application you won't know if it's performing well. If you only go by when your software feels slow then you have left it too late. You are reactively fixing a problem rather than proactively avoiding one. You must measure to achieve good performance even though it's the feel that matters to a user. Some books leave measurement, analysis, and profiling until the end. Yet this is the first thing that should be considered. It's easy to fix the wrong problem and optimize areas that are not having performance difficulties. In this chapter we will cover the following topics:

Structured Query Language (SQL) database profiling
    SQL Server Profiler
    MiniProfiler
Web application profiling
    Glimpse
    Integrated Development Environment (IDE) profilers
HTTP monitoring
    Browser developer tools
    Fiddler proxy

Network monitoring
    Microsoft Message Analyzer
    Wireshark
Scientific method and repeatability

This chapter will show you how to measure if there are performance issues and where they are occurring. We will describe the tools that can give you this information and demonstrate how to use them effectively and correctly. We'll also show you how to repeat your experiments consistently so that you can tell if you have fixed a problem once you've identified it. We will cover measurement again towards the end of the book, but there we'll focus on continuous automated monitoring to avoid regressions. This chapter will focus more on manual testing to identify potential performance problems during development and debugging.

    Tools Good debugging tools are essential in discovering where problems lie. You can write your own crude timing code and we will show you how. However, purpose built tools are much nicer to work with. Many of the tools in this chapter help examine areas external to your code. We will cover profiling of code too, but it's hard to identify problems this way unless the work is purely computational. Slowdowns often happen because of actions your app initiates outside of its immediate stack, and these can be hard to debug by simply stepping through the code. Moving through your program line-by-line slows down execution so much that it can be difficult to identify which lines are fast and which are slow. The same approach taken for fixing functional bugs cannot always be applied to fix performance issues. One of the problems with adopting a new framework (such as ASP.NET Core) early is that it can take a while for the existing tools to be updated to work with it. We will point out when this is the case but these compatibility problems should improve over time. As many of the tools are open source, you could help out and contribute to the community.
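Crude timing code usually amounts to little more than a Stopwatch wrapped around the suspect operation, along these lines (the operation name is hypothetical):

var timer = System.Diagnostics.Stopwatch.StartNew();
RunSuspectOperation(); // whatever code you want to measure
timer.Stop();
Console.WriteLine($"Operation took {timer.ElapsedMilliseconds} ms");

This is fine for a quick check, but the purpose-built tools covered below give far more context with less effort.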

SQL First off we will cover SQL-related issues, so if you're not using a relational database (perhaps because you're using a NoSQL store or a document database instead) then you can skip this bit. Relational databases are a very mature technology and are flexible in their uses. However, it is essential to have a basic knowledge of the SQL syntax and how databases work in order to use them effectively. It can be tempting when using an O/RM such as Entity Framework (EF) to ignore SQL and stay in a C# world, but a competent developer should be able to write a high-performance SQL query. Ignoring the realities of how a database engine works will often lead to performance issues. It's easy to write code with an O/RM that's too chatty with the database and issues far too many queries for an operation. Not having the correct indexes on a table will also result in poor performance. During development you may not notice these mistakes, unless you use tools to identify the inefficient events occurring. Here we will show you a couple of ways of doing this.

SQL Server Profiler

SQL Server Profiler is a tool that allows you to inspect what commands are being executed on your SQL Server database. If you have the management tools installed then you should already have it on your machine.

1. If you are using Microsoft SQL Server, then SQL Server Profiler can be accessed from the Tools menu of SQL Server Management Studio (SSMS).


    2. Load SQL Server Profiler and connect to the database that you are using. Name your trace and select the Tuning profile template.

    3. Click Run and you should see that the trace has begun. You are now ready to run your code against the database to see what queries it's actually executing.


Executing a simple query

As a test, you can execute a simple select query with Management Studio, and the Profiler window should be flooded with entries. Locate the query that you just executed among the noise. This is easier if it starts with a comment, as shown in the following screenshot.

    The Duration column shows the time taken for the query in milliseconds (ms). In this example, selecting the top 1000 rows from a table containing over a million entries took 246 ms. This would appear very quick, almost instantaneous, to the user. Modifying the query to return all rows makes it much slower, as shown in the following screenshot:


    The query has now taken over 13 seconds (13455 ms) to complete, which is noticeably slow. This is an extreme example, but it is quite a common mistake to request more data than needed and to filter or sort it in application code. The database is often much better suited to this task and is usually the correct location to select the data you want. We will cover specific SQL mistakes and remedies in the following chapters. However, the first step is knowing how to detect what is going on with the communication between your application and the database. If you can't detect that your app is making inefficient queries then it will be difficult to improve performance.

MiniProfiler

MiniProfiler is an excellent open source tool for debugging data queries. It supports SQL databases and some NoSQL databases, such as Mongo and Raven. It came out of Stack Exchange, the people who run the Stack Overflow Q&A site. It embeds a widget into your web pages that shows you how long they take to get their data. It also shows you the SQL queries and warns you about common mistakes. Although MiniProfiler started with .NET, it is also available for Ruby, Go, and Node.js.


The biggest benefit of MiniProfiler over SQL Server Profiler is that it's always there. You don't need to explicitly go looking for SQL problems, so it can highlight issues much earlier. It's even a good idea to run it in production, visible only to logged-in admins. This way, every time you work on the website you will see how the data access is performing. Make sure you test for the performance impact of this before deploying it, though. Unfortunately, MiniProfiler doesn't yet support ASP.NET Core 1.0. It relies on the System.Web library and doesn't tie into the Open Web Interface for .NET (OWIN) style lifecycle used by the new framework. Hopefully it will soon support ASP.NET Core (and may do by the time you read this), as it's a very useful tool. If you're using a supported version of ASP.NET, then MiniProfiler is highly recommended. ASP.NET Core support is planned, so keep an eye on ANCLAFS.com for the latest support details. Due to these limitations, we won't cover MiniProfiler in too much detail, but you can get more information at miniprofiler.com. If you are using a previous version of ASP.NET, then you can install it into your project with the NuGet package manager. Type Install-Package MiniProfiler into the Package Manager Console (PowerShell) window in Visual Studio. Then follow the rest of the setup instructions from the website.
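Once it is set up, timing a specific block of server-side code is straightforward. The following is a minimal sketch, assuming the classic .NET version of MiniProfiler and its StackExchange.Profiling namespace; the controller, the step name, and the work inside it are illustrative assumptions rather than code from this book:

    using System.Web.Mvc;
    using StackExchange.Profiling;

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            // MiniProfiler.Current is null when profiling is disabled; Step() is a
            // null-safe extension method, so the block simply runs untimed in that case.
            using (MiniProfiler.Current.Step("Load posts"))
            {
                // data access work to be timed goes here
            }
            return View();
        }
    }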

Application profiling

Often you will want a breakdown of where all the time is being taken up within your application. There are various tools available for this and we will cover a couple of them here.


Glimpse

Glimpse is a fantastic open source add-on for your web application. Like MiniProfiler, it adds a widget to your web pages so that you can see problems as you navigate around and work on your site. It provides information similar to your browser developer tools but also delves inside your server side application to show you a trace of what actions are taking the most time. Glimpse is available from getglimpse.com and you can install it with NuGet for the web framework and O/RM you're using. For ASP.NET Core we will need to use Glimpse version 2. This is currently a beta prerelease, but by the time you read this it may be stable.

Using Glimpse

Installing Glimpse is really simple; there are only three steps:

1. Install with NuGet.
2. Add lines to Startup.cs.
3. Build and run the app.

Let's have a look at these steps.

Installing the package

Right-click on your web application project in the Solution Explorer and select Manage NuGet Packages... to open the graphical package manager window. Search for Glimpse, select it, and then click on the Install button. If you want to install the beta version of Glimpse 2, then make sure the Include prerelease checkbox is selected:


Add code

You need to add three snippets of code to your Startup.cs file. In the using directives at the top of the file, add the following:

    using Glimpse;

In the ConfigureServices function, add the following:

    services.AddGlimpse();

In the Configure function, add the following:

    app.UseGlimpse();
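Put together, a minimal Startup.cs with these additions might look like the following sketch. Only the Glimpse lines come from the steps above; the rest of the class is a bare-bones assumption, and your existing Startup will contain more configuration:

    using Glimpse;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddMvc();
            services.AddGlimpse();   // register the Glimpse services
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseGlimpse();        // add the Glimpse middleware to the pipeline
            app.UseMvcWithDefaultRoute();
        }
    }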

Running your web application

Build and run your web application by pressing F5. If you encounter a duplicate type error when running then simply clean the solution and do a full rebuild. You may need to apply migrations to the database, but this is just a button click in the browser. However, if you add this to an existing project with data, then you should take more care. You should then look at your web application with the Glimpse bar at the bottom of the browser window, which looks like the following screenshot. The bar is one long strip, but it has been broken up in this screenshot to display it more clearly:

    The first page load may take much longer than subsequent ones, so you should refresh for more representative data. However, we won't worry about this for the purposes of this demo. Mouse over each section in the Glimpse toolbar to expand it for more information. You can minimize and restore each section (HTTP, HOST, or AJAX) by clicking on its title.


    If you used the default web application template (and left the authentication as the default individual user accounts), then you can register a new user to cause the application to perform some database operations. You will then see some values in the DB queries field, as shown in the following screenshot:

    Click on the Glimpse logo to see a history of all the page requests. Select one to view the details and expand the SQL queries by clicking on them, which is displayed in the following screenshot:


    Glimpse shows you how much time is spent in each layer of your application and exactly which SQL queries are being run. In this case, EF Core 1.0 generates the SQL. Glimpse is very useful to track down where performance problems lie. You can easily see how long each part of the page pipeline takes and identify slow parts.

IDE

Using the profiling tools that are built into Visual Studio (VS) can be very informative to understand the CPU and memory usage of your code. You can also see the time taken for certain operations. When running your application, open the diagnostic tools window in VS, as shown in the following screenshot:


    You can see the CPU and memory profiles, including the automatic Garbage Collection events that are used to free up memory (the marker just prior to memory use decreasing). You will be able to see breakpoint events and, if you have the enterprise version of VS, IntelliTrace events as well. IntelliTrace is only available in the enterprise edition of Visual Studio. However, you can still use the performance and diagnostic tools in the free community edition. If you have IntelliTrace, then you can find it in the VS options, as shown in the following screenshot. However, the diagnostic tools are still useful without this premium feature:

    When you put in a breakpoint and your code hits it, then VS will tell you how long it was since the previous event. This is shown in the events list and also overlaid near the breakpoint.


You can't install the Community edition and Enterprise edition of Visual Studio on the same machine. Use a Virtual Machine (VM), for example with Hyper-V or VirtualBox, if you wish to run both. Alternatives to the VS tools are Redgate ANTS and Telerik JustTrace. JetBrains also have dotTrace and dotMemory. However, all of these tools can be quite expensive, and we won't cover them here.

Monitoring HTTP

When dealing with a web application, you will normally use HTTP as the application protocol. It's very useful to know what requests occur between the browsers and your servers.

Browsers

Most modern browsers have excellent developer tools, which include a network tab to monitor requests to and responses from the web server. You can normally access the dev tools by pressing F12. These are handy to view the web traffic and you can still see encrypted communications without messing about with certificates. The dev tools in both Chrome and Firefox are superb. We'll focus on the network and timing component, but we highly recommend that you learn all of the features. If you know how to get the best out of them, then web development is much easier.

Chrome

The network dev tools in Chrome are very useful. They provide a good visualization of the request timings, and you can throttle the connection to see how it behaves over various different internet connections. This is particularly important for mobile devices.


    A selection of network requests from the Chrome dev tools are shown in the following screenshot. Additional columns are available if you right-click on the Headers bar:

    You can disable the cache with a simple checkbox so that the browser will always load assets fresh from your web server. You can also click on a resource for more information and the timing tab is very useful to provide a breakdown of the connection components, for example, Time To First Byte (TTFB). Some basic timing details from a local web server are shown in the following screenshot:


    On a local web server, this breakdown won't contain much extra information, but on a remote server it will display other things, such as the Domain Name System (DNS) hostname lookup, and SSL/TLS handshake. These additional components are shown in the next screenshot:

Firefox

Firefox has similar dev tools to Chrome but with some added features. For example, you can edit a request and resend it to the web server. The Network tab presents the same sort of information as Chrome does, as shown in the following screenshot:


    The detail view is also very similar, including the Timings tab. This tab is shown in the following screenshot:


Fiddler

Sometimes, browser tools aren't enough. Perhaps you are debugging a native application, a backend web client, or a mobile browser. Fiddler is a free debugging proxy that can capture all HTTP traffic between the client and server. With a little bit of work, it can also intercept HTTPS traffic. Fiddler is available at www.telerik.com/fiddler. As this book focuses on web development, we won't go into more detail. The browser dev tools are suitable for most work these days; since they acquired similar features, they now fulfil a large part of the role that Fiddler used to play. Fiddler is still there if you need it, and it can be handy if your web server calls an external HTTP API, although this can also often be debugged directly inside VS.

Network

Occasionally, you will need to debug at a lower level than HTTP or SQL. This is where network monitors or packet sniffers come in. Perhaps you want to debug a Tabular Data Stream (TDS) message to a SQL Server DB or a TLS handshake to an SSL-terminating load balancer. Or maybe you want to analyze a SOAP web service envelope or a Simple Mail Transfer Protocol (SMTP) e-mail connection to see why it's not working correctly.

Microsoft Message Analyzer

Microsoft Message Analyzer supersedes Microsoft Network Monitor (Netmon) and is a tool to capture network traffic on Windows systems. Netmon requires you to log out and back in again after installation, whereas Wireshark doesn't. You can read more about these two Microsoft tools online, but, for clarity and brevity, we will focus on Wireshark for low-level network monitoring.

Wireshark

Wireshark (previously called Ethereal) is an open source and cross-platform packet capture and network analysis application. It is probably the most popular tool of its kind and has many uses. You can install Wireshark without needing to restart or log out, which makes it great to debug a live problem that's hard to recreate. You can download Wireshark from www.wireshark.org and it runs on Windows, Mac OS X, and Linux. Wireshark isn't particularly useful for local development, as it only captures traffic going over the network, not to localhost. The only thing you are likely to see if you run it against a local web application is VS reporting what you do back to Microsoft.


You can turn off the Visual Studio Customer Experience Improvement Program (CEIP) by clicking on the button next to the quick launch box and selecting Settings.... By default, it is on (it's now opt-out rather than the opt-in of previous products). Click on the shark fin icon button at the top-left of Wireshark to start a trace. Then perform some network actions, such as loading a webpage from a test server. Once you capture some packets, click on the stop button and you can then examine the collected data at your leisure. Ask your IT administrator before running Wireshark. It can often pick up sensitive information off the LAN, which may be against your IT acceptable use policy. The following screenshot shows part of the results from a Wireshark capture.


    There can be a lot of noise when using Wireshark. You will see low-level network packets, such as Address Resolution Protocol (ARP) traffic, which you can filter out or ignore. You may also see data from VoIP phones and other networked devices or data that is intended for other computers on the network. Select the packet that you are interested in from the top pane. The middle pane will display the contents of the packet. You can normally ignore the lower layers of the network stack, such as Ethernet and TCP/IP (presented at the top of the list). Dive straight into the application layer that is listed last. If this is a protocol that Wireshark recognizes, then it will break it down into the fields that it's made up of. The bottom pane displays a hex dump of the raw packet. This is not normally as useful as the parsed data in the middle pane. If you use TLS/SSL (hint: you should), then you won't see the contents of HTTP traffic. You would need a copy of the server's private key to see inside the TLS connection, which wraps and encrypts the HTTP application data. You will only be able to see the domain that was connected to via the DNS lookup and TLS certificate, not the full URL or any payload data. Using Wireshark is a huge topic, and there are many great resources available to learn about it, both online and offline. We won't go into much more detail here because it's usually not necessary to go down to this low level of network scrutiny. However, this is a good tool to have in your back pocket.

Roll your own

Sometimes, you may want to write your own performance measurement code. Make sure that you have exhausted all other options and investigated the available tools first. Perhaps you want to record the execution time of some process to write to your logs. Maybe you want to send this to a system such as Logstash and then visualize the changes over time with Kibana. Both are great open source products from Elastic, and they store data in the Elasticsearch search server. You can read more about both of these tools at elastic.co. You can easily record the length of time for a task by storing the current time before it starts and comparing this to the current time after it finishes, but this approach has many limitations. The act of measuring will affect the result to some extent, and the clock is not accurate for short durations. It can be useful for really slow events, or if you need to share state between processes or multiple runs, but to benchmark you should normally use the Stopwatch class.


It is usually best to store timestamps in Coordinated Universal Time (UTC), which is otherwise known as Greenwich Mean Time (GMT). You will avoid many issues with time zones and daylight saving if you use DateTimeOffset.UtcNow (or at least DateTime.UtcNow). Name variables and columns to indicate this, for example, TimestampUtc. Use TimeSpan for lengths of time but, if you must use primitives (such as integers), then include the units in the variable or column name, for example, DurationInMilliseconds or TimeoutInSeconds. This will help to avoid confusion when another developer (or your future self) comes to use them. However, to benchmark quick operations, you should use a Stopwatch. This class is in the System.Diagnostics namespace. If you try to measure a single quick event, then you will not get accurate results. A way around this is to repeat the task many times and then take an average. This is useful to benchmark, but it is not usually applicable to real applications. However, once you identify what works quickest with a test, then you can apply it to your own programs. Let's illustrate this with a small experiment to time how long it takes to hash a password with the PBKDF2 algorithm (in the System.Security.Cryptography namespace). In this case, the operation under test is not important, as we are simply interested in the timing code. A naive approach may look like the following code:

    var s = Stopwatch.StartNew();
    pbkdf2.GetBytes(2048);
    s.Stop();
    Console.WriteLine($"Test duration = {s.ElapsedMilliseconds} ms");

This code will output a different value every time it is run, due to the flaws in the measurement process. A better way would be to repeat the test many times and average the output, like the following code:

    var s = Stopwatch.StartNew();
    for (var ii = 0; ii < tests; ii++)
    {
        pbkdf2.GetBytes(2048);
    }
    s.Stop();
    var mean = s.ElapsedMilliseconds / tests;
    Console.WriteLine($"{tests} runs mean duration = {mean} ms");

    This code will output very similar results every time. The higher the value of tests, the more accurate it will be, but the longer the test will take.


We use the new concise string interpolation syntax here, but you can use the traditional format-string overloads of Console.WriteLine if you prefer.

Let's write a quick example application that demonstrates the differences by running these two different versions multiple times. We'll extract the two tests into methods and call them each a few times:

    var pbkdf2 = new Rfc2898DeriveBytes("password", 64, 256);
    SingleTest(pbkdf2);
    SingleTest(pbkdf2);
    SingleTest(pbkdf2);
    Console.WriteLine();

    var tests = 1000;
    AvgTest(pbkdf2, tests);
    AvgTest(pbkdf2, tests);
    AvgTest(pbkdf2, tests);
    Console.WriteLine();

    Console.WriteLine("Press any key...");
    Console.ReadKey(true);
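The SingleTest and AvgTest methods are not shown in the snippet above; a minimal sketch of them, based directly on the naive and averaged timing code from earlier in this section, might look like the following (only the method signatures are assumptions):

    static void SingleTest(Rfc2898DeriveBytes pbkdf2)
    {
        // Time a single hashing operation; the result varies from run to run.
        var s = Stopwatch.StartNew();
        pbkdf2.GetBytes(2048);
        s.Stop();
        Console.WriteLine($"Test duration = {s.ElapsedMilliseconds} ms");
    }

    static void AvgTest(Rfc2898DeriveBytes pbkdf2, int tests)
    {
        // Time many hashing operations and report the mean; results are stable.
        var s = Stopwatch.StartNew();
        for (var ii = 0; ii < tests; ii++)
        {
            pbkdf2.GetBytes(2048);
        }
        s.Stop();
        var mean = s.ElapsedMilliseconds / tests;
        Console.WriteLine($"{tests} runs mean duration = {mean} ms");
    }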

    The output will look something like the following screenshot. You can find the full application listing in the code that accompanies this book if you want to run it for yourself:

    You can see that the three individual tests can give wildly different results and yet the averaged tests are identical.


Detailed steps to download the code bundle are mentioned in the Preface of this book. Please have a look. The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/ASP.NET-Core-1.0-High-Performance. We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Your results will vary. This is due to the inherent variability in computer systems. Embedded systems that are time-sensitive usually use a real-time OS. Normal software systems typically run on a time-sharing OS, where your instructions can easily get interrupted, and VMs make the problem even worse. You will get different results depending on whether you build in debug or release mode and whether you run with or without debugging. Release mode without debugging (Ctrl + F5) is the fastest. The following screenshot shows the same benchmarking demo application running with debugging. You can tell because the dotnet executable is shown in the title bar of the command prompt. If it ran without debugging, then this would display cmd.exe (on Windows), as in the previous screenshot.

    Unit testing is very valuable and you may even practice Test-Driven Development (TDD), but you should be careful about including performance tests as unit tests. Unit tests must be quick to be valuable, and tests that accurately measure the time taken for operations are often slow. You should set a timeout on your unit tests to make sure that you don't write slow ones with external dependencies. You can still test performance, but you should do it in the integration testing stage along with tests that hit an API, DB, or disk.
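As a hedged illustration of the timeout advice, the following sketch assumes the NUnit test framework, whose Timeout attribute fails a test that runs longer than the given number of milliseconds; the test class, test name, and 500 ms limit are arbitrary assumptions:

    using NUnit.Framework;

    [TestFixture]
    public class OrderServiceTests
    {
        [Test, Timeout(500)]   // fail the test if it takes longer than 500 ms
        public void CalculateTotal_CompletesQuickly()
        {
            // Arrange, act, and assert against in-memory fakes only;
            // anything hitting an API, DB, or disk belongs in integration tests.
        }
    }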


Science

We dealt with the computer in computer science by showcasing some hardware in the previous chapter. Now, it's time for the science bit. It's important to take a scientific approach if you wish to achieve consistently reliable results. Have a methodology or test plan and follow it the same way every time, only changing the thing that you want to measure. Automation can help a lot with this. It's also important to always measure for your use case on your systems with your data. What worked well for someone else may not work out great for you. We will talk more about science and statistics later in the book. Taking a simple average can be misleading but it's fine to use it as a gentle introduction. Read Chapter 8, The Downsides of Performance Enhancing Tools, for more on concepts such as medians and percentiles.

Repeatability

Results need to be repeatable. If you get wildly different results every time you test, then they can't be relied upon. You should repeat tests and take the average result to normalize out any variability in the application or hardware under test. It is also important to clearly record the units of measurement. When you compare a new value to a historic one, you need to know this. NASA famously lost a Mars probe because of unit confusion.

Only change one thing

When testing, you aim to measure the impact of a single change. If you change more than one thing at a time, then you cannot be sure which one has made the difference. The aim is to minimize the effects of any other changes apart from the one you are interested in. This means keeping external factors as static as possible and performing multiple tests, taking the average result.


Summary

Let's sum up what we covered about measurement in this chapter and what we'll cover in the next chapter. We covered the importance of measurement in solving performance problems. Without measuring, you cannot hope to write high-performance software; you will be coding in the dark. We highlighted some of the tools that you can use to measure performance. We showed you how to use a selection of these and how to write your own. We also covered the value of taking a scientific approach to measurement. Making sure that your results are repeatable and that you record the correct units of measurement are important concerns. In the next chapter, we will learn how to fix common performance problems. You will gain the skills to speed up the low-hanging fruit and make yourself look like a performance wizard to your colleagues. No longer will it be a case of "it worked in test; it's an operations problem now".


3

Fixing Common Performance Problems

This chapter gets into the meat of optimization, once you identify and locate performance problems. It covers a selection of the most common performance issues across a variety of areas and explains simple solutions to some of the mistakes people often make. When using these techniques, you'll look like a wizard to your clients and colleagues by quickly speeding up their software. Topics covered in this chapter include the following:

Network latency
Select N+1 problems
Disk I/O issues on virtual machines
Asynchronous operations in a web application
Performing too many operations in one web request
Static site generators
Pragmatic solutions with hardware
Shrinking overly-large images

Most of the problems in this chapter center on what happens when you add latency to common operations or when throughput is reduced from what it was in development. Things that worked fine in test when everything was on one physical machine with minimal data are now no longer quite as speedy when you have an API on a different continent, a full database on a different machine to your web server, and its virtual disk somewhere else on the network entirely.


    You will learn how to identify and fix issues that are not always apparent when everything is running on a single machine. You'll see how to identify when your O/RM or framework behaves badly and is too chatty with the database, which can easily happen if it's not used correctly. We will see how to ensure that work is performed in the most appropriate place, and we'll look at some ways of keeping your images small using the correct resolution and format. These techniques will ensure that your application is efficient and that data is not sent over the wire unnecessarily. We'll also discuss how to mitigate performance issues with an alternative approach by improving the underlying hardware to reduce the factors that amplify issues in bad software. This can be a good temporary measure if the software is already deployed to production and in use. If you already have live performance problems then this can buy you some time to engineer a proper fix.

Latency

As covered in previous chapters, latency is the delay that occurs before an operation can complete, sometimes also known as lag. You may not be able to control the latency of the infrastructure that your software runs on, but you can write your application in such a way that it can cope with this latency in a graceful manner. The two main types of latency that we will discuss here are network latency and disk latency. As the names suggest these are, respectively, the delay in performing an operation over the network and the delay to read from or write to a persistent storage medium. You will often deal with both at the same time, for example, a database (DB) query to a server on a remote virtual machine will require the following operations:

A network operation from web server to DB server
A network operation from DB server to remote disk on a Storage Area Network (SAN)
A disk operation to look up data on the physical drive

Although Solid State Drives (SSDs) have much lower latency than spinning platter disks, they are still relatively slow. When we talk about disk I/O here, we refer to both types of drive.


    You can clearly see that, if you issue too many DB operations, the latency present in typical production infrastructure will compound the problem. You can fix this by minimizing the number of DB operations so that they can't be amplified as much. Let's illustrate this with an example. Let's suppose you wish to return 200 records from your DB and the round trip latency is 50 milliseconds (ms). If you retrieve all of the records at once, then the total time will be 50 ms plus the time to transfer the records. However, if you first retrieve a list of the record identifiers and then retrieve all of them individually, the total time will be at least 201 * 50 ms = 10.05 seconds! Unfortunately, this is a very common mistake. In a system where latency dominates throughput, it is important to keep requests to a minimum.

Asynchronous operations

Most new .NET framework APIs that have significant latency will have asynchronous (async) methods. For example, the .NET HTTP client (superseding the web client), SMTP client, and Entity Framework (EF) all have async versions of common methods. In fact, the async version is usually the native implementation and the non-async method is simply a blocking wrapper to it. These methods are very beneficial and you should use them. However, they may not have the effect that you imagine when applied to web application programming. We will cover async operations and asynchronous architecture later in this book. We'll also go into Message Queuing (MQ) and worker services. This chapter is just a quick introduction and we will simply show you some tools to go after the low-hanging fruit on web applications. An async API returns control to the calling method before it completes. This can also be awaited so that on completion, execution resumes from where the asynchronous call was made. With a native desktop or mobile application, awaiting an async method returns control to the user interface (UI) thread, which means that the software remains responsive to user input. The app can process user interactions rather than blocking on your method. Traditionally, you may have used a background worker for these tasks. You should never perform expensive work on the UI thread. Therefore, this technique does increase performance for native applications. However, for a web application this UI blocking problem does not exist because the browser is the UI. Therefore, this technique will not increase performance for a single user in isolation.


    Awaiting asynchronous API methods in a web application is still good practice, but it only allows the software to scale better and handle more concurrent users. A web request typically cannot complete until the async operation also completes. Therefore, although the thread is surrendered back into the thread pool and you can use it for other requests, the individual web request will not complete quicker.
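As a minimal sketch of this in practice, the following assumes an ASP.NET Core MVC controller that awaits a call to an external HTTP API; the controller name and the URL are made-up placeholders. The request thread is returned to the pool while the call is in flight, which helps the server handle more concurrent requests, but this particular response still cannot be sent until the awaited call finishes:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    public class StatusController : Controller
    {
        // Reusing a single HttpClient instance avoids exhausting sockets.
        private static readonly HttpClient Client = new HttpClient();

        public async Task<IActionResult> Index()
        {
            // The request thread is freed while this call is outstanding.
            var payload = await Client.GetStringAsync("https://example.com/api/status");
            return Content(payload);
        }
    }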

Simple asynchronous tools

As this book deals with web application programming, we won't go into much more detail on native application UIs in this chapter. Instead, we will showcase some simple tools and techniques that can help with async tasks in web applications. The tools we are about to cover offer some simple solutions that are only suitable for very small applications. They may not always be reliable, but sometimes they can be good enough. If you are after a more robust solution, then you should read the later chapters about distributed architecture.

Background queuing

Background queuing is a useful technique when you have an operation that does not need to occur straight away. For example, logging stats to a database, sending an e-mail, or processing a payment transaction. If you perform too much work in a single web request, then background queuing may offer a convenient solution, especially if you don't require the operation to always succeed. If you use ASP.NET 4.6 (or any version from 4.5.2 onwards), then you can use HostingEnvironment.QueueBackgroundWorkItem to run a method in the background. This is preferable to simply setting a task running, as if ASP.NET shuts down then it will issue a cancellation request and wait for a grace period before killing the item. However, this still does not guarantee completion because the application can die at any point due to an unexpected reboot or hardware failure. If the task needs to complete, then it should be transactional and make a record of success upon completion. It can then be retried if it failed. Queuing a background work item is okay for fire-and-forget events, if you genuinely don't care whether they succeed or not. Unfortunately, HostingEnvironment.QueueBackgroundWorkItem is not part of ASP.NET Core. Therefore, if you want to use this, then you will have to simply queue a job. We will show you how to do this later, but if you use the full version of ASP.NET, then you can do the following to send an e-mail in the background:

    var client = new SmtpClient();
    HostingEnvironment.QueueBackgroundWorkItem(ct => client.SendMailAsync(message));

Assuming that you already have your message, this will create an SMTP client and send the e-mail message in the background without blocking further execution. This does not use the ct (cancellation token) variable. Keep in mind that the e-mail is not guaranteed to be sent. Therefore, if you need to definitely dispatch it, then consider using another method. If you use ASP.NET Core, then this functionality is not available. However, you can manually create something similar with Task.Run, as in the following example. However, this is probably not the best approach for anything nontrivial:

    Task.Run(() => asyncMethod(cancellationToken));

If you can cancel your task, then you can get the ApplicationStopping token from an injected instance of the IApplicationLifetime interface to pass in as your cancellation token. This will let your task know when the application is about to stop, and you can also block shutdown with it while you gracefully clean up. You should use this technique with caution, so we won't give you a full example here, although you should now have enough pointers to dig deeper and understand the ASP.NET Core application lifecycle if you wish.
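For orientation only, here is a hedged sketch of wiring these pieces together; the EmailDispatcher class and its DoWorkAsync method are invented for illustration, and only IApplicationLifetime and its ApplicationStopping token come from the framework:

    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Hosting;

    public class EmailDispatcher
    {
        private readonly IApplicationLifetime _lifetime;

        public EmailDispatcher(IApplicationLifetime lifetime)
        {
            _lifetime = lifetime;
        }

        public void QueueSend()
        {
            // The token is signaled when the application begins shutting down.
            CancellationToken stopping = _lifetime.ApplicationStopping;

            // Fire-and-forget: completion is not guaranteed if the process dies.
            Task.Run(() => DoWorkAsync(stopping), stopping);
        }

        private Task DoWorkAsync(CancellationToken cancellationToken)
        {
            // Hypothetical background work that checks cancellationToken periodically.
            return Task.CompletedTask;
        }
    }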

Hangfire

Hangfire is an excellent library to run simple background jobs. It does not support ASP.NET Core yet, but this is planned (refer to ANCLAFS.com for the latest information). If you use the full .NET, then you should consider it and you can read more at hangfire.io. You need persistent storage, such as SQL Server, to use Hangfire. This is required so that it can ensure that tasks are completed. If your tasks are very quick, then this overhead can outweigh the benefits. You can reduce the latency using message queues or the in-memory store Redis, but these are advanced topics. As this book focuses on ASP.NET Core and Hangfire does not support it yet, we won't cover its use in more detail. It is also beyond the scope of this chapter in terms of quick and easy solutions.


Select N+1 problems

You may have heard of Select N+1 problems before. It's a name for a class of performance problems that relate to inefficient querying of a DB. The pathological case is where you query one table for a list of items and then query another table to get the details for each item, one at a time. This is where the name comes from. Instead of the single query required, you perform N queries (one for the details of each item) and the one query to get the list to begin with. Perhaps a better name would be Select 1+N. You will hopefully not write such bad-performing queries by hand, but an O/RM can easily output very inefficient SQL if used incorrectly. You might also use some sort of business objects abstraction framework, where each object lazily loads itself from the DB. This can become a performance nightmare if you want to put a lot of these objects in a list or calculate some dashboard metrics from a large set. We will go into detail about SQL and O/RM optimization in Chapter 5, Optimizing I/O Performance. This chapter will simply offer some quick fixes to common problems. If you have a slow application that has performance issues when retrieving data, then Select N+1 may be the problem. Run a SQL profiler tool, as described in the previous chapter, to discover if this is the case. If you see lots of SQL queries for your data instead of just one, then you can move on to the solution stage. For example, if your screen fills with queries on a page load, then you know you have a problem. In the following example, we will use the micro-O/RM Dapper (made by the team at Stack Overflow) to better illustrate what occurs. However, you are more likely to encounter these problems when using a large lazy loading library or O/RM (such as EF or NHibernate). Entity Framework Core 1.0 (previously EF 7) does not support lazy loading, so you are unlikely to encounter Select N+1 problems when using it. Previous versions of EF do support this and it may be added to EF Core in the future. You currently need to use a beta pre-release version of Dapper to work with ASP.NET Core. As with Glimpse, this may be stable by the time you read this (check ANCLAFS.com). To use Glimpse with Dapper, you need to use the Glimpse.ADO package, which unfortunately does not yet support .NET Core.
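To make the lazy loading pitfall concrete before the Dapper walkthrough, here is a hedged sketch of how the pattern can arise; the BlogContext class and the virtual Comments navigation property are hypothetical EF6-style constructs and are not part of the example that follows:

    using System;
    using System.Linq;

    public static class SelectNPlusOneDemo
    {
        public static void PrintCommentCounts(BlogContext db)
        {
            // One query fetches the list of posts...
            var posts = db.BlogPosts.ToList();

            foreach (var post in posts)
            {
                // ...but touching the lazily-loaded collection issues one more
                // query per post, giving 1 + N queries in total.
                Console.WriteLine($"{post.Title}: {post.Comments.Count} comments");
            }
        }
    }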


Consider a simple blog website. On the home page, we would like a list of the posts along with the number of comments each post has. Our blog post model may look something like the following:

    namespace SelectNPlusOne.Models
    {
        public class BlogPost
        {
            public int BlogPostId { get; set; }
            public string Title { get; set; }
            public string Content { get; set; }
            public int CommentCount { get; set; }
        }
    }

We also have a model for a comment, which may look like this:

    namespace SelectNPlusOne.Models
    {
        public class BlogPostComment
        {
            public int BlogPostCommentId { get; set; }
            public string CommenterName { get; set; }
            public string Content { get; set; }
        }
    }

As this is an example, we kept things simple and only used a single set of models. In a real application, you will typically have separate view models and data access layer models. The controller will map between these, perhaps assisted by a library, such as AutoMapper (automapper.org). Our view to render this into HTML may look something like the following:

    @model IEnumerable<SelectNPlusOne.Models.BlogPost>
    <table>
        <tr>
            <th>Title</th>
            <th># Comments</th>
        </tr>
        @foreach (var post in Model)
        {
            <tr>
                <td>@post.Title</td>
                <td>@post.CommentCount</td>
            </tr>
        }
    </table>


    We want to populate these models and view from our database. We have two tables, which look like this:

    The relationship between the two tables in question looks like the following:


In our controller, we can write code such as the following to query the database, populate the model from the database results, and return the view to render it:

    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        var blogPosts = await connection.QueryAsync<BlogPost>(@"
            SELECT * FROM BlogPost");
        foreach (var post in blogPosts)
        {
            var comments = await connection.QueryAsync<BlogPostComment>(@"
                SELECT * FROM BlogPostComment
                WHERE BlogPostId = @BlogPostId",
                new { BlogPostId = post.BlogPostId });
            post.CommentCount = comments.Count();
        }
        return View(blogPosts);
    }

We test this and it works! We feel pretty happy with ourselves. It completes quickly on our local test database that contains a handful of rows. We used the async methods everywhere, which must be what makes this so quick. We even only get the comments for each blog post in question, not all comments every time. We also used a parameterized query to avoid SQL injection, and everything looks good. Ship it! As this is an example, we kept it simple for clarity. In a real application, you will want to use techniques such as dependency injection (such as the DI built into ASP.NET Core) to make it more flexible. Unfortunately, when the data starts to increase (as posts and comments are added), the blog starts to get much slower, with pages taking longer to load. Our readers get bored waiting and give up. Audience figures drop along with revenue. Let's profile the database to see what the problem might be. We run SQL Server Profiler, filtering on the database in question, and look at the SQL being executed.


    The following screenshot shows the filter dialog in SQL Server Profiler:

The trace that we capture reveals that lots of queries are being executed; far too many for the data that we need. The problem is that our code is not very efficient because it uses multiple simple queries rather than one slightly more complicated one. Our code first gets a list of blog posts and then gets the comments for each post, one post at a time. We also bring back way more data than we need. Async does not speed up an individual request because we still need all of the data before we can render the page. The bad coding is obvious in this example because Dapper has the SQL right in your code. However, if you use another O/RM, then you wouldn't typically see the SQL in Visual Studio (or your editor of choice). This is an additional benefit of using Dapper because you see the SQL where it's used, so there are no surprises. However, the main benefit of Dapper is that it's fast, very fast, and much faster than EF. It's a great choice for performance and you can read more about it at github.com/StackExchange/dapper-dot-net.

A SQL query inside a loop is an obvious code smell that indicates things may not be as well thought-out as they can be.

We only want a count of the comments for each post and we can get everything that we need (and only what we need) in one query. Let's alter our previous code to use a slightly more complicated SQL query rather than two simpler queries, one of which was inside a foreach loop:

    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        var blogPosts = await connection.QueryAsync<BlogPost>(@"
            SELECT
                bp.BlogPostId,
                bp.Title,
                COUNT(bpc.BlogPostCommentId) 'CommentCount'
            FROM BlogPost bp
            LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
            GROUP BY bp.BlogPostId, bp.Title");
        return View(blogPosts);
    }

    This more efficient code only performs a single query against the database and gets all of the information that we need. We join the comments table to the posts in the database and then aggregate by grouping. We only request the columns that we need and add the count of the comments to our selection. Let's profile the new code to see whether we fixed the problem. The following image shows that we now only have a single query being executed rather than the thousands being executed before:

    The number of queries has been dramatically reduced. Therefore, the page loads much faster. However, the page is still very big because all the blog posts are listed on it and there are a lot. This slows down rendering and increases the time to deliver the page to a browser.


Efficient paging

In a real application, you want to implement paging so that your list is not too long when a lot of data is in the table. It's a bad idea to list thousands of items on a single page. You may want to do this with LINQ commands because they are very convenient. However, you need to be careful. If your O/RM is not LINQ aware or you accidentally cast to the wrong type a little too early, then the filtering may occur inside the application when the best place to perform this filtering is actually in the database. Your code may be retrieving all of the data and throwing most of it away without you realizing it. Perhaps you are tempted to modify the action method return statement to look something like the following:

    return View(blogPosts.OrderByDescending(bp => bp.CommentCount)
                         .Skip(pageSize * (pageNumber - 1))
                         .Take(pageSize));

This works and will speed up your application considerably. However, it may not have the effect that you think it has. The application is quicker because the view rendering is speedier due to generating a smaller page. This also reduces the time to send the page to the browser and for the browser to render the HTML. Yet, the application still gets all of the blog posts from the database and loads them into memory. This can become a problem as the amount of data grows. If you want to use LINQ methods such as this, then you need to check that they are handled all the way to the database. It's a very good idea to read the documentation for your O/RM or framework and double-check the SQL that is generated using a profiler. Let's have a look at what the SQL should look like. For example, if you use SQL Server, starting with the preceding query, you can take only the top ten most-commented posts by altering it like the following:

    SELECT TOP 10
        bp.BlogPostId,
        bp.Title,
        COUNT(bpc.BlogPostCommentId) 'CommentCount'
    FROM BlogPost bp
    LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
    GROUP BY bp.BlogPostId, bp.Title
    ORDER BY COUNT(bpc.BlogPostCommentId) DESC


We order by comment count in descending order. However, you can sort by descending ID to get a rough reverse chronological order if you like. From this ordered set, we select (or take) only the top ten records. If you want to skip records for paging, the SELECT TOP clause is not good enough. In SQL Server 2012 and onward, you can use the following instead:

    SELECT
        bp.BlogPostId,
        bp.Title,
        COUNT(bpc.BlogPostCommentId) 'CommentCount'
    FROM BlogPost bp
    LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
    GROUP BY bp.BlogPostId, bp.Title
    ORDER BY COUNT(bpc.BlogPostCommentId) DESC
    OFFSET 0 ROWS
    FETCH NEXT 10 ROWS ONLY

You can adjust the value for OFFSET to get the correct entries for your page number. The FETCH NEXT value will change the page size (the number of entries on a page). You can pass these values in with a parameterized query, as follows:

    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync();
        var blogPosts = await connection.QueryAsync<BlogPost>(@"
            SELECT
                bp.BlogPostId,
                bp.Title,
                COUNT(bpc.BlogPostCommentId) 'CommentCount'
            FROM BlogPost bp
            LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
            GROUP BY bp.BlogPostId, bp.Title
            ORDER BY COUNT(bpc.BlogPostCommentId) DESC
            OFFSET @OffsetRows ROWS
            FETCH NEXT @LimitRows ROWS ONLY",
            new
            {
                OffsetRows = pageSize * (pageNumber - 1),
                LimitRows = pageSize
            });
        return View(blogPosts);
    }


You can pass in the page size and number as URL parameters if you update your action method signature to the following:

    public async Task<IActionResult> Index(int pageNumber = 1, int pageSize = 10)

    Here, we provided default values for both parameters, so they are optional. When no parameters are provided, then the first page of ten results is shown. We need to multiply the page size by the zero-indexed page number to calculate the correct offset. It should be zero for the first page so that no records are skipped. It would be a very good idea to apply some validation to the paging parameters. Don't allow users to set them to anything outside of a reasonable range. This is left as an exercise to the reader. If we look in the profiler at the queries being executed on the database server, then we can see what SQL is now being run. We can also see the time taken and compare this to our results from previously:


    The query in the screenshot gets the data for the third page with the page size set to 50 entries. Therefore, it used an offset of 100 (to skip the first two pages of 50) and fetched the next 50 rows. The URL query string for this can look something like the following: /?pagenumber=3&pagesize=50

    We can see that the duration of the query has decreased from 24 ms previously to 14 ms now. Note how the SQL executes differently when parameters are passed into the query. This is much safer than concatenating user-supplied values directly into a SQL command. If you do not use any parameters, then the default values are used and the home page shows only ten entries, which looks something like the following screenshot, depending on the data in the database:

    By default, the home page only displays the top 10 most commented posts, but you can easily add page navigation with hyperlinks. Simply add the pagenumber and pagesize query string parameters to the URL.
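Returning to the validation that was left as an exercise earlier, a minimal sketch of clamping the paging parameters before running the query might look like the following; the upper limit of 100 entries per page is an arbitrary assumption:

    public async Task<IActionResult> Index(int pageNumber = 1, int pageSize = 10)
    {
        // Keep the paging parameters within a reasonable range.
        if (pageNumber < 1) pageNumber = 1;
        if (pageSize < 1) pageSize = 1;
        if (pageSize > 100) pageSize = 100;

        // ...run the parameterized OFFSET/FETCH query shown previously...
        return View();
    }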


You can use the example URL query string shown previously on either the home page or the bad paging path, for example, /Home/BadPaging/?pagenumber=3&pagesize=50. The links in the navigation bar load the examples that we just walked through. The best example is the same as the home page and is the default. Top 10 and bad paging should be fairly self-explanatory. Bad paging will take a long time to load, especially if you use the DB creation script included with the project. You can time it with your browser developer tools. For previous versions of SQL Server (prior to 2012), there are paging workarounds using ROW_NUMBER() or nested SELECT statements, which invert the sort order. If you use another database, such as PostgreSQL, MySQL, or SQLite, then you can easily implement paging with the LIMIT clause. These free databases often have more features than SQL Server, such as the LIMIT clause mentioned here and a concatenation aggregator, to name just a couple. One of the touted benefits of big O/RMs is the layer of abstraction that they offer. This allows you to change the database that you use. However, in practice, it is rare to change something as core as a database. As you can see from the simple paging example, syntax varies between databases for anything other than simple standard SQL. To get the best performance, you really need to understand the features and custom syntax of the database that you use.

Static site generators

The database is the logical place to perform any work to filter and sort data. Doing this in the application is a waste. However, for a simple blog that is updated infrequently, a database may be unnecessary in the first place. It may even become the bottleneck and slow the whole blog down. This is a typical problem of blog engines, such as WordPress. A better approach may be to use a static site generator. A static site generator prerenders all of the pages and saves the resulting HTML. This can easily be served by a simple web server and scales well. When a change is made and pages need updating, then this site is regenerated and a new version is deployed. This approach doesn't include dynamic features, such as comments, but third-party services are available to provide these added extras.


    A popular static site generator is Jekyll, which is written in Ruby. GitHub provides a free static site-hosting service called GitHub Pages, which supports Jekyll, and you can read more about it at pages.github.com. Another static site generator (written in Go) is Hugo, which you can read about at gohugo.io. These tools are basically a form of extreme caching. We'll cover caching in the next section and later on in this book. It's often worth taking a step back to see whether the problem that you're trying to solve is even a problem. You may well improve database performance by removing the database.

Pragmatic solutions with hardware

The best approach to take with a poorly performing application is usually to fix the software. However, it is good to be pragmatic and try to look at the bigger picture. Depending on the size and scale of an application, it can be cheaper to throw better hardware at it, at least as a short term measure. Hardware is much cheaper than developer time and is always getting better. Installing some new hardware can work as a quick fix and buy you some time. You can then address the root causes of any performance issues in software as part of the ongoing development. You can add a little time to the schedule to refactor and improve an area of the code base as you work on it. Once you discover that the cause of your performance problem is latency, you have two possible approaches:

Reduce the number of latency-sensitive operations
Reduce the latency itself using faster computers or by moving the computers closer together

With the rise of cloud computing, you may not need to buy or install new hardware. You can just pay more for a higher-performing instance class or you can move things around inside your cloud provider's infrastructure to reduce latency.

A desktop example

To borrow an example from native desktop applications, it is quite common to have poorly-performing Line of Business (LoB) applications on corporate desktops. The desktop will probably be old and underpowered. The networking back to the central database may be slow because the connection might be over a remote link to a regional office.


    With a badly-written application that is too chatty with the DB, it can be better, performance-wise, to run the application on a server close to (or on the same server as) the DB. Perhaps the application workspace and DB servers can be in the same server rack at the data center and connected by Gigabit (or ten Gigabit) Ethernet. The user can then use a remote desktop connection or Citrix session to interact with the application. This will reduce the latency to the DB and can speed things up, even taking into consideration the lag of the remote UI. This effectively turns the desktop PC into a thin client, similar to how old mainframes are used. For example, you can build a high-performance server with RAID SSDs and lots of RAM for much less than the cost of the developer time to fix a large application. Even badly-written software can perform well if you run the application and DB together on the same machine, especially if you run it on bare metal with no virtual machines in the way. This tactic would buy you time to fix things properly. These remote application and virtualization technologies are usually sold as tools to aid deployment and maintenance. However, the potential performance benefits are also worth considering. Due to the rise of web applications, thick client desktop applications are now less common. Architecture seems to oscillate between computation on the server and doing work on the client, as the relative progress of networking speed and processing power race each other.

Web applications

The same relocation approach does not typically work as well for web applications, but it depends on the architecture used. The good news is that, for web applications, you usually control the infrastructure. This is not normally the case for native application hardware. If you use a three-tier architecture, then you can move the application servers closer to the DB server. Whether this is effective or not depends on how chatty the web servers are with the application servers. If they issue too many web API requests, then this won't work well. A two-tier architecture (where the web servers talk directly to the database) is more common for typical web applications. There are solutions using clustered databases or read-only mirrors to place the data close to the web servers, but these add complexity and cost.

What can make a significant difference are proxy servers. A popular open source proxy server is Varnish, and you can also use the NGINX web server as a proxy. Proxy servers cache the output of web servers so that a page doesn't have to be regenerated for each user. This is useful for shared pages, but caching is hard; typically, you should not cache personalized pages. You don't want to accidentally serve someone's authenticated private information to another user.

Proxies such as Varnish can also route different parts of your website to different servers. If you have a small area of your site that performs badly due to DB chatter, then you could host that part from web servers on (or very close to, such as on the same VM host) the DB machines and route requests for it to there. The rest of the site could remain on the existing web server farm. This isn't a long-term solution, but it allows you to split off a poorly performing part of your program so that it doesn't impact the rest of your system. You're then free to fix it once it's been decoupled or isolated. You can even split off the data required to a separate DB and synchronize it with a background process.

There are also Content Delivery Networks (CDNs), such as CloudFlare, Amazon CloudFront, and Azure CDN, which are good for caching static assets. CDNs cache parts of your site in data centers close to your users, reducing the latency. CloudFlare can even add HTTPS to your site for free, including automatically issuing certificates. You can read more about the CDN offerings of CloudFlare (including HTTP/2 server push and WebSockets on the free plan) at www.cloudflare.com. We will cover caching in more detail in Chapter 7, Learning Caching and Message Queuing, so we won't go into more detail here. Caching is a challenging subject and needs to be understood well so that you can use it effectively.
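
For a proxy or CDN to cache your pages at all, your application needs to emit suitable caching headers. As a rough, hedged sketch (the details are covered properly in Chapter 7, Learning Caching and Message Queuing), ASP.NET Core MVC lets you declare this with the ResponseCache attribute on an action. The controller and duration here are only illustrative:

using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    // Emits a Cache-Control: public,max-age=600 header so that proxies
    // and CDNs may cache this shared, non-personalized page for ten minutes.
    [ResponseCache(Duration = 600, Location = ResponseCacheLocation.Any)]
    public IActionResult Index()
    {
        return View();
    }
}

Never apply a public cache policy like this to personalized or authenticated pages, for the reasons given previously.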

Oversized images

While we're on the subject of static assets, we should briefly mention image optimization. We'll cover this in much more detail in the next chapter, but it's worth highlighting some common problems here. As you have very little control over network conditions between your infrastructure and the user, low throughput may be a problem in addition to high latency.

Web applications heavily use images, especially on landing pages or home pages, where they might form a fullscreen background. It is regrettably common to see a raw photo from a camera simply dropped straight in. Images from cameras are typically many megabytes in size, far too big for a web page.

You can test whether there are problems on a web page using a tool such as Google's PageSpeed Insights. Visit developers.google.com/speed/pagespeed/insights, enter a URL, and click on ANALYZE to view the results. Google uses this information as part of its search engine ranking, so you would do well to take its advice, up to a point. Slow sites rank lower in search results.

You can also use the browser developer tools to view the size of images. Press F12 and look at the network tab after a page load to view how much data was transferred and how long it took. You can often miss these performance issues on a local machine or test server because the image will load quickly. After the first load, it will also be stored in the browser's cache, so make sure you do a full hard reload and empty or disable the cache. In Chrome (when the dev tools are open), you can right-click or long click on the reload button for these extra options. It's also a good idea to use the built-in throttling tools to see how a user will experience the page loading.

The most basic image optimization problems typically fall into two categories:

Images that are overly large for the area in which they are displayed
Images that use the wrong compression format for the subject matter

Image resolution

The most common issue is that an image has too high a resolution for the area that displays it. This forces the browser to resize the image to fit the screen or area. If the size of the file is unnecessarily large, then it will take longer to be transferred over an internet connection. The browser will then throw away most of the information. You should resize the image ahead of time, before adding it to the site. There are many image manipulation tools available to resize pictures. If you run Windows, then Paint.NET (www.getpaint.net) is an excellent free piece of software. This is much better than the Paint program that comes with Windows (although this will work if you have no other option).

    For other platforms, GIMP (www.gimp.org) is very good. If you prefer using the command line, then you may like ImageMagick (imagemagick.org), which can perform lots of image manipulation tasks programmatically or in batches. There are also cloud-hosted image management services, such as Cloudinary (cloudinary.com). You should shrink images to the actual size that they will be displayed on the user's screen. There can be complications when dealing with responsive images, which scale with the size of the user's screen or browser window. Also keep in mind high DPI or Retina displays, which may have more than one physical pixel to every logical pixel. Your images may have to be bigger to not look blurry, but the upper bound is still likely to be lower than the raw size. It is rare to need an image at more than twice the size of the displayed resolution. We will discuss responsive images in more detail later in this book, but it is worth keeping them in mind. The following image displays the resizing dialog from Paint.NET:

    When resizing an image, it is usually important to keep the aspect ratio of the image the same. This means changing the horizontal and vertical sizes in proportion to each other. For example, reducing the Height from 600 px to 300 px and reducing the Width from 800 px to 400 px (meaning both dimensions are reduced by 50%) keeps the image looking the same, only smaller. Most image-manipulation software will assist with this process. Keeping the aspect ratio the same will avoid images looking stretched. If images need to fit a different shape, then they should be cropped instead.

Image format

The next most common problem is using images in the wrong file format for the content. You should never use raw uncompressed images, such as bitmap (BMP), on the web.

For natural images such as photos, use the JPEG file format. JPEG is a lossy codec, which means that information is lost when using it. It is very good for pictures with a lot of gradients in them, such as images of natural scenes or people. JPEG looks poor if there is any text in the image because there will be compression artifacts around the edges of the letters. Most mid- and low-end cameras natively save images as JPEG, so you do not lose anything by staying with it. However, you should resize the images to make them smaller, as mentioned previously.

For artificial images such as diagrams or icons, use PNG. PNG is a lossless codec, which means that no information is discarded. This works best for images with large blocks of solid color, such as diagrams drawn in painting software or screenshots. PNG also supports transparency, so you can have images that don't appear rectangular or are translucent. You can also have animated PNGs, which are of superior quality to GIFs, but we won't go into the details of them in this chapter.

You can alter the format of images using the same tools that you use to resize them, as mentioned previously, by simply changing the file format when saving the image. As always, you should test for what works best in your specific use case. You can perform experiments by saving the same image in different formats (and different resolutions) and then observing the sizes of the files on disk. The following image displays the available image options in Paint.NET. Additional formats are available, and we will go into more detail in Chapter 4, Addressing Network Performance:

Even when you only choose between JPEG and PNG, you can still make a significant difference. The following screenshot displays the difference in file size for the same image at two resolutions and in two formats.

    The following test image is the one used in the experiment. Due to the hard edges it looks best as a PNG, but the gradient background makes it more difficult to compress:

    The test images used here are available for download with this book, so you can try the experiment for yourself.

In a web context, this particular image may be best served with a transparent background, using CSS for the gradient. However, simple shapes such as these can be better represented as Scalable Vector Graphics (SVG) or with an HTML5 canvas.

Summary

In this chapter, you learned about some common performance problems and how to fix them. We covered asynchronous operations, Select N+1 problems, pragmatic hardware choices, and overly-large images.

In the next chapter, we will expand on image optimization and extend this to other forms of compression for different types of resources. We'll look at the new process for the bundling and minification of static assets in ASP.NET Core using open source tools. Additionally, we will introduce networking topics, such as TCP/IP, HTTP, WebSockets, and encryption. We'll also cover caching, including another look at CDNs.

    4

Addressing Network Performance

This chapter builds on a subset of the problems that were discussed in the previous chapter, but in more detail. It deals with latency, or lag, which originates at the networking level between the user and the application. This is mostly applicable to web applications where the user interacts with the application via a web browser.

You will learn how to optimize your application to cater for bandwidth and latency that is unknown and outside of your control. You'll compress your payloads to be as small as possible, and then you will deliver them to the user as quickly as possible. You will learn about the tools and techniques that can be used to achieve a fast and responsive application. You'll also see the trade-offs involved and be able to calculate whether these methods should be applied to your software.

The topics that we will cover in this chapter include the following:

TCP/IP
HTTP and HTTP/2
HTTPS (TLS/SSL)
WebSockets and push notifications
Compression
Bundling and minification
Caching and CDNs

This chapter deals with how to speed up the experience for users of your application. The skills in this chapter are just as applicable to a static site or client-side web app as they are to a dynamic web application.

    These topics apply to any web application framework. However, we will focus on how they are implemented with ASP.NET—in particular with the new ASP.NET Core implementation, and how this differs from the existing full ASP.NET.

Internet protocols

It's important to know about how your HTML and other assets are delivered from the web server to your user's browser. Much of this is abstracted away and transparent to web development, but it's a good idea to have at least a basic understanding in order to achieve high performance.

TCP/IP

Transmission Control Protocol / Internet Protocol (TCP/IP) is the name for a pair of communication protocols that underpin the internet. IP is the lower-level protocol of the two, and this deals with routing packets to their correct destinations. IP can run on top of many different lower-level protocols (such as Ethernet), and this is where IP addresses come from. TCP is a layer above IP, and it is concerned with the reliable delivery of packets and flow control. TCP is where ports come from, such as port 80 for HTTP, and port 443 for HTTPS. There is also the User Datagram Protocol (UDP), which can be used instead of TCP, but it provides fewer features.

HTTP runs on top of TCP, and it is usually what you will deal with in a web application. You may occasionally need to directly use Simple Mail Transfer Protocol (SMTP) to send e-mails, the Domain Name System (DNS) to resolve hostnames to IP addresses, or File Transfer Protocol (FTP) to upload and download files. The basic unencrypted versions of these protocols run directly on TCP, but the secure encrypted versions (HTTPS, SMTPS, and FTPS) have a layer in between them. This layer is called Transport Layer Security (TLS), and this is the modern successor to the Secure Sockets Layer (SSL). SSL is insecure and deprecated, and it should no longer be used. However, the term SSL is still commonly and confusingly used to describe TLS. All browsers require the use of TLS encryption to support HTTP/2.

You may not often think about the lower-level protocols when you build web applications. Indeed, you may not need to consider even HTTP/HTTPS that much. However, the protocol stack below your application can have significant performance implications.

    The following diagram shows how the protocols are typically stacked:

Slow-start

TCP implements an algorithm called slow-start for congestion-control purposes. This means that the connection from a browser to a web server is initially slow and ramps up over time to discover the available bandwidth. You can alter the settings for this so that the ramp-up is more aggressive, and connections get quicker faster. If you increase the initial congestion window, then performance can improve, especially on connections with good bandwidth but high latency, such as mobile 4G or servers on other continents. As usual, you should test for your use case, perhaps using Wireshark, as described previously in Chapter 2, Measuring Performance Bottlenecks.

There are downsides to altering this window, and it should be considered carefully. Although this may speed up websites, it can cause buffers in networking equipment to fill, which can generate latency problems for VoIP applications and games if no Quality of Service (QoS) is in use end to end. You can change this value on Windows Server 2008 R2 with a hotfix (KB2472264) and higher. You can also easily adjust this on Linux, and ASP.NET Core enables you to run your .NET web app on Linux (and Mac OS X) in addition to Windows.

We won't provide detailed instructions here because this should be a cautiously considered decision, and you shouldn't apply advice blindly. You can easily find instructions online for the operating system that you use on your web server. TCP slow-start is just one example of why you can't ignore the lower levels of Internet technology on the shoulders of which web applications stand. Let's move up the stack a little to the application layer.

HTTP

As a web application developer who wants to deliver high performance, it's important to understand Hypertext Transfer Protocol. You should know what version of HTTP you use, how it works, and how this affects things, such as request pipelining and encryption. HTTP/1.1 is the version that you will probably be most familiar with today because it has been in use for some time. HTTP/2 is becoming more popular, and it changes the best way to do many things.

Headers

HTTP uses headers to provide metadata about a request along with the main payload in the body of the message, much like e-mails do. You won't see these headers when you view the source, but you can observe them using the browser developer tools. You can use headers for many things, such as cache control and authentication. Cookies are also sent and received as headers.

Browsers will only open a limited number of HTTP/1.1 connections at one time to a single host. If you require a lot of requests to retrieve all the assets for a page, then they are queued, which increases the total time taken to fully load it. When combined with the TCP slow-start mentioned previously, this effect can be amplified, degrading the performance. This is less of a problem with HTTP/2, which we will cover shortly. You can reduce the impact of this problem by allowing the browser to reuse connections. You can do this by ensuring that your web server doesn't send a Connection: close header with HTTP responses.
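
If you want to inspect or set headers from your own code rather than from the browser tools, ASP.NET Core exposes them on the request and response objects. The following is only a minimal sketch of middleware in Startup.Configure; the X-Example-Trace header name is a made-up example, not a real standard header:

app.Use(async (context, next) =>
{
    // Read an incoming request header (returns an empty value if absent)
    var userAgent = context.Request.Headers["User-Agent"];

    // Add a custom response header before the response starts
    context.Response.Headers["X-Example-Trace"] = "set-by-middleware";

    await next();
});

Connection management itself (keep-alive versus Connection: close) is normally handled by the web server, such as Kestrel or IIS, rather than by your application code.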

HTTP methods

There are multiple methods (or verbs) that HTTP uses. The most common are GET and POST, but there are many more. Typically, we use GET requests to retrieve data from a server, and we use POST to submit data and make changes. GET should not be used to alter data.

Other useful verbs are HEAD and OPTIONS. HEAD can check the headers for a GET request without the overhead of downloading the body. This is useful to check caching headers to see whether the resource has changed. OPTIONS is commonly used for Cross Origin Resource Sharing (CORS) to perform a preflight check to validate a domain.

Other often used verbs are PUT, DELETE, and PATCH. We mainly use these for Representational State Transfer (REST) APIs because they can mimic operations on resources or files. However, not all software (such as some proxy servers) understands them, so sometimes, we emulate them using POST. You may even have problems with OPTIONS being blocked by proxies and web servers.
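
In ASP.NET Core MVC, you map these verbs to action methods with attribute routing, as we saw earlier in this book. The following ProductsController is purely a hypothetical sketch to show the mapping; the model and storage code are omitted:

using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class ProductsController : Controller
{
    // GET api/products/5 - read-only, must not change state
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        return Ok(new { Id = id, Name = "Example" });
    }

    // POST api/products - create a new resource
    [HttpPost]
    public IActionResult Post([FromBody] string name)
    {
        return StatusCode(201);
    }

    // PUT api/products/5 - replace an existing resource
    [HttpPut("{id}")]
    public IActionResult Put(int id, [FromBody] string name)
    {
        return NoContent();
    }

    // DELETE api/products/5 - remove a resource
    [HttpDelete("{id}")]
    public IActionResult Delete(int id)
    {
        return NoContent();
    }
}

If you need to support clients or proxies that cannot send PUT or DELETE, you can fall back to POST routes that perform the same work.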

Status codes

HTTP uses numeric response codes to indicate a status. You are probably familiar with 200 (OK) and 404 (Not Found), but there are many others. For example, 451 indicates that the content has been blocked by a government-mandated censorship filter. The 451 status code is a reference to the book Fahrenheit 451 (whose title is the purported temperature at which paper burns). You can read the official document (RFC 7725) at tools.ietf.org/html/rfc7725. If this code is not used, then it can be tricky to discover if and why a site is unavailable. For example, you can find out whether the UK government is blocking your site at blocked.org.uk, but this is just a volunteer effort run by the Open Rights Group, the British version of the Electronic Frontier Foundation (EFF).

We commonly use 3xx codes for redirection (perhaps to HTTPS). There are various forms of redirection with different performance characteristics (and other effects). You can use a 302 to temporarily redirect a page, but then the browser has to request the original page every time to see whether the redirect has ended. It also has bad implications for Search Engine Optimization (SEO), but we won't discuss these here.

    A better approach is to use a 301 to indicate a permanent redirect. However, you need to be careful, as this can't be undone and clients won't look at the original URL again. If you use redirects to upgrade users from HTTP to HTTPS, then you should also consider using HTTP Strict Transport Security (HSTS) headers. Again, do this carefully.
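
As a hedged sketch of how the two kinds of redirect look in ASP.NET Core MVC (the controller and URLs here are placeholders), an action can return either status, and HSTS is just a response header that you can add yourself with a little middleware:

public class LegacyController : Controller
{
    // 301 Moved Permanently - cached aggressively; clients stop requesting the old URL
    public IActionResult OldProductPage()
    {
        return RedirectPermanent("/products");
    }

    // 302 Found - temporary; clients keep checking the original URL
    public IActionResult HolidaySale()
    {
        return Redirect("/offers/summer");
    }
}

// In Startup.Configure - only send HSTS over HTTPS, and start with a short
// max-age while you confirm that everything on the site works under TLS.
app.Use(async (context, next) =>
{
    if (context.Request.IsHttps)
    {
        context.Response.Headers["Strict-Transport-Security"] = "max-age=3600";
    }
    await next();
});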

Encryption

HTTP encryption is very important. It not only secures data in transit to prevent eavesdropping, but it also provides authentication. This ensures that users actually connect to the site that they think they are and that the page wasn't tampered with. Otherwise, unscrupulous internet connection providers can inject or replace adverts on your site, or they can block internet access until you have opted out of a parental filter. There can be worse things too, such as stealing your users' data, which you are usually required by law to protect.

There is really no good reason today to not use encryption everywhere. The overheads are tiny, although we will still consider them and show you how to avoid potential issues. Arguments against using HTTPS are usually hangovers from a time long ago when computation was expensive. Modern computing hardware is very capable and often has special acceleration for common encryption tasks. There are many studies that show that the processing overheads of encryption are negligible. However, there can be a small delay in initially setting up a secure connection for the first time. In order to understand this, it is useful to illustrate a simple model of how TLS works.

There are two parts to secure communication: the initial key exchange and the ongoing encryption of the channel. Session ciphers, such as the Advanced Encryption Standard (AES), can be very quick, and they can operate at close to line speed. However, these ciphers are symmetrical and both parties need to know the key. This key needs to be distributed securely so that only the two communicating parties possess it. This is called key exchange, and it uses asymmetric encryption. This usually also requires a third party to vouch for the server, so we have a system of certificates. This initial setup is the slow part, although we will show you an alternative for devices that lack AES acceleration later.

Key exchange

As mentioned previously, key exchange is the process of securely sharing an encryption key between two parties without being intercepted. There are various methods of doing this, which mostly rely on asymmetric encryption. Unlike symmetric encryption (that we exchange this key for), this can only be performed in one direction with a single key. In other words, the key that is used to encrypt cannot be used to decrypt, and a different key is required. This is not the case for the majority of the data once we have shared a key. The reason that we do this is that symmetric encryption is faster than asymmetric encryption. Therefore, it is not used for everything and is only needed to encrypt another key.

In addition to exchanging a key, the browser (or other HTTPS client) should check the certificate to ensure that the server belongs to the domain that it claims to. Some programmatic clients fail to do this by default, so this is worth checking out. You can also implement certificate pinning (even in the browser with HTTP Public Key Pinning) to improve security, but this is beyond the scope of this book. We will illustrate two variations of key exchange by analogy in simplified forms to show you how the process works. You can look up the technical details if you wish.

RSA

RSA is traditionally the most common key-exchange mechanism that is used for TLS. Until recently, we used it on most HTTPS connections. RSA stands for Rivest-Shamir-Adleman after the names of its creators, and it is probably the most popular form of public key cryptography. The British snooping agency, Government Communications Headquarters (GCHQ), supposedly conceived public key cryptography at around the same time, but as it was only made public in 1997, it's impossible to prove this. The invention credit goes to Whitfield Diffie and Martin Hellman, who recently picked up a Turing Award for it. We'll talk more about their work shortly.

The Turing Award is the Nobel Prize of computing. It's named after Alan Turing, the legendary computing pioneer who helped the allies win WWII while working for the nascent GCHQ, but who was later betrayed by the British government.

RSA uses large prime numbers to generate a public and private key pair. The public key can be used to encrypt information that can only be decrypted with the private key. In addition to this, the private key can be used to sign information (usually a hash of it), which can be verified with the public key. RSA is often used to sign TLS certificates even if another algorithm is used as the key exchange mechanism (this is negotiated during the initial TLS handshake). This hashing and signing of certificates is where you may have heard of SHA-1 certificates being deprecated by browsers. SHA-1 is no longer considered secure for hashing and, like MD5 before it, should not be used. Certificate chains must now use at least an SHA-2 hashing algorithm (such as SHA-256) to sign.

An analogy to help explain how RSA works is to think of sending a lock instead of sending a key. You can post an open padlock to someone, retaining the key to it. They can then use your lock to secure a case with a key of theirs inside and send it back to you. Now, only you can open the case to get the new key. In reality, this is more complicated. You can't be sure that someone didn't intercept your lock and then use their own lock to get the key and copy it before sending it on to you. Typically, we solve this with Public Key Infrastructure (PKI). A trusted third party will sign your certificate and verify that it is indeed your public key and that you own the lock. Browsers typically display a warning if a Certificate Authority (CA) does not countersign the certificate in this way.

Diffie-Hellman (D-H) key exchange is another method of gaining a shared key. Invented shortly before RSA, it has only recently become popular on the web. This is partly due to the reduced computational cost of the elliptic curve variant. However, another reason is that the ephemeral versions provide a quality called Perfect Forward Secrecy (PFS). Unlike RSA, the session key for the symmetric encryption never needs to be transmitted. Both parties can calculate the shared key without it needing to be sent on the wire (even in an encrypted state) or permanently stored. This means that an eavesdropped encrypted exchange cannot be decrypted in the future if the keys were recovered. With RSA key exchange, you can recover a recorded communication in plain text if you obtain the private keys later. PFS is a useful countermeasure against mass surveillance, where all communication is caught in a dragnet and permanently stored.
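
To make the idea concrete before the analogy that follows, here is a toy numeric sketch of the classic (non-elliptic-curve) exchange. The numbers are insecurely small and purely illustrative; real TLS uses large primes or elliptic curves via a vetted library, never hand-rolled code like this:

using System;

class ToyDiffieHellman
{
    // Modular exponentiation: (value ^ exponent) mod modulus
    static long ModPow(long value, long exponent, long modulus)
    {
        long result = 1;
        value %= modulus;
        while (exponent > 0)
        {
            if ((exponent & 1) == 1) result = (result * value) % modulus;
            value = (value * value) % modulus;
            exponent >>= 1;
        }
        return result;
    }

    static void Main()
    {
        long p = 23, g = 5;        // public values, agreed in the open
        long aliceSecret = 6;      // private values, never transmitted
        long bobSecret = 15;

        long aliceSends = ModPow(g, aliceSecret, p);   // 8, sent over the wire
        long bobSends = ModPow(g, bobSecret, p);       // 19, sent over the wire

        long aliceShared = ModPow(bobSends, aliceSecret, p); // 2
        long bobShared = ModPow(aliceSends, bobSecret, p);   // 2

        // Both sides now hold the same secret without it ever being sent
        Console.WriteLine($"{aliceShared} == {bobShared}");
    }
}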

D-H is better explained with a color mixing analogy, where paints represent numbers. The two parties choose a shared color, and each party chooses a secret color of their own. Both mix the shared color with their secret color and send the result to the other. Each party then mixes the color that they received with their secret color again. Both parties now have the same color without ever having to send this color anywhere where it could be observed.

As this is not a book about security, we won't go into any more detail on encryption algorithms. If you are interested, there is a huge amount of information available that we can't cover here. Encryption is a large subject, but security is an even broader concern.

TLS handshake

The point of briefly explaining how TLS key exchange works for various methods is to show that it is complex. Many messages need to be sent back and forth to establish a secure connection, and no matter how fast the connection, latency slows down every message. This all occurs in the TLS handshake, where the client (usually a web browser) and server negotiate common capabilities and agree on what ciphers they should use. There is also Server Name Indication (SNI) to consider, which is similar to the HTTP host header in that it allows multiple sites to use the same IP address. Some older clients don't support SNI.

We can observe the TLS handshake using Wireshark. We won't go into a huge amount of detail, but you can see that at least four messages are exchanged between the client and server. These are client hello, server hello (including the certificate and server key exchange), client key exchange, and cipher agreement. The browser may send more messages if things are not optimally configured, such as when it needs to request an intermediate certificate. It may also check a revocation list to see whether the certificate was revoked. The following screenshot shows a TLS handshake captured with Wireshark:

    All these network operations happen quickly. However, if the connection has a high latency, then these extra messages can have an amplified effect on performance. The computational delays are typically much smaller than the network delays, so we can discount these, unless you use very old hardware. Fortunately, there are some simple things you can do that will help speed things up and let you enjoy high performance while still being secure.

Delay diagnostics

There are various mechanisms built into TLS that you can use to speed it up. However, there are also things that will slow it down if you don't do them correctly. Some great free online tools to assess your TLS configuration are available from Qualys SSL Labs at ssllabs.com. The server test at ssllabs.com/ssltest is very useful. You enter a URL, and they give you a grade along with lots of other information.

For example, if we analyze the packtpub.com site, we can see that on the date of the test it got a B grade. This is due to supporting weak Diffie-Hellman parameters and the obsolete and insecure RC4 cipher. However, it is not always as simple as removing old ciphers. You can have a very secure site, but you might exclude some of your customers, who use older clients that don't support the latest standards. This will, of course, vary depending on the nature of your client base, and you should measure your traffic and consider your options carefully.

    The following screenshot shows some of the report from SSL Labs for packtpub.com.

    If we have a look at a site with a better configuration (emoncms.org), we can see that it gets an A grade. You can get an A+ grade using HSTS headers. Additionally, these headers avoid the overhead of a redirect. You may also be able to get your site embedded in a preloaded list shipped with browsers if you submit the domains to the vendors.

    The following screenshot shows some of the report from SSL Labs for emoncms.org:

The options chosen by modern browsers would typically be an Elliptic Curve Diffie-Hellman Ephemeral key exchange (ECDHE) with an RSA SHA-256 signature and AES session cipher. The ephemeral keys provide PFS because they are only held in memory for the session. You can see what connection has been negotiated by looking in your browser. In Firefox, you can do this by clicking on the lock icon in the URL bar and then clicking on the More Information button, as shown in the following image:

    In the Technical Details section, you will see the cipher suite used. The following image from Firefox shows ECDHE key exchange and RSA certificate signing:

    You can also view the certificate details by clicking on the View Certificate button. The domain is usually included as the Common Name (CN) in the Subject field. Alternative domains can also be included under the Certificate Subject Alt Name extension:

    In Chrome, you can look at the TLS connection information in the Security tab of the developer tools. For example, the following image displays the security details for huxley.unop.uk:

    The following screenshot displays the same window for emoncms.org:

    You may need to refresh the page to see TLS information if the site was already loaded when you opened the developer tools. You can access the same tab by clicking on the padlock in the Chrome URL bar and then clicking on the Details link. You can view the certificate (in the native operating system certificate store) by clicking on the Open full certificate details button. A link with the same function exists on the equivalent screen of Chrome for Android, although the certificate information is reduced.

Performance tweaks

We already discussed the most important performance tweak for TLS because it is not about TLS. You should ensure that your HTTP connections are reusable, because if this is not the case, then you will incur the added penalty of the TLS negotiation along with the TCP overhead. Caching is also very important, and we will talk more about this later.

TLS and HTTP both support compression, but these have security implications. Therefore, consider them carefully. They can leak information, and a determined adversary can use an analysis of them to recover encrypted data. TLS compression is deprecated, and it will be removed in TLS 1.3. Therefore, do not use it. We will discuss HTTP compression later on in this chapter.

In regard to specific advice for TLS, there are a few things which you can do to improve performance. The main technique is to ensure that you use session resumption. This is different to reusing HTTP connections; it means that clients can reuse an existing TLS connection without having to go through the whole key exchange. You can implement sessions with IDs on the server or with encrypted tickets (in a similar manner to ASP.NET cookies that are encrypted with the machine key). There was a bug in the Microsoft client implementation around ticket encryption key rotation, but the KB3109853 patch fixed it. Make sure that you install this update, especially if you see exceptions thrown when connecting to secure endpoints from your .NET code.

It is important to not overdo things, and bigger is not always better, especially when it comes to key size. It is a trade-off between performance and security, and this will depend on your specific situation. In general, a good balance is not using 256-bit AES keys when 128-bit will do. A 2048-bit RSA key is big enough; anything lower is insecure and anything larger is too slow. You can use the Elliptic Curve Digital Signature Algorithm (ECDSA) to sign instead of RSA, as it is much quicker. However, support is limited, so you would need to deploy RSA in parallel. If you use ECDSA, then a 256-bit key is sufficient. For ECDHE, 256-bit is also fine, and for the slower version without elliptic curves (DHE), 2048-bit is sufficient.

If you use ECDSA, then you will see this listed instead of the RSA signing in the connection details. For example, when visiting huxley.unop.uk, the details in the following screenshots are displayed in Firefox. This difference is also displayed in the previous Chrome screenshots:

Additionally, it is important to include the full certificate chain with your certificate. If you fail to include all intermediate certificates, then the browser will need to download them until it finds one in its trusted root store. You can also use a technique called Online Certificate Status Protocol (OCSP) stapling, by embedding revocation data so browsers don't need to check a certificate revocation list. Both of these certificate techniques may increase the size of payloads, which can be an issue if bandwidth is a concern. However, they will reduce the number of messages, which will increase performance if latency is the main problem, which is usually the case. Keeping key sizes small also helps a little with bandwidth. It is hard to recommend one generic approach. Therefore, as always, test for your unique situation.

There is also an alternative stream cipher called ChaCha/Poly, which is especially useful for mobile devices. This uses the ChaCha20 stream cipher and the Poly1305 Message Authentication Code (MAC) algorithm to create a secure alternative to RC4. AES is a block cipher and is fast with hardware support, but many mobile devices and some older computers don't have this acceleration. ChaCha/Poly is faster when using just software. Therefore, this is better for battery life. This is supported in Chrome, including Chrome for Android, and in Firefox (from version 47).

As all algorithms are different, you can't directly compare key sizes as a measure of how secure they are. For example, a 256-bit ECDHE key is equivalent to a 3072-bit RSA key. AES is very secure with relatively small keys, but you cannot use it for key exchange. ChaCha/Poly is more comparable to the security of AES 256 than AES 128.

In the following screenshot of Chrome on Android, you can see that when connecting to huxley.unop.uk, Chrome uses CHACHA20_POLY1305 as the stream cipher, ECDHE for the key exchange, and ECDSA for the signature:

    The new version of TLS (1.3) is still a draft, but it may be finalized soon. It looks like it will only allow Authenticated Encryption with Additional Data (AEAD) ciphers. AES-GCM and ChaCha/Poly are the only two ciphers that currently meet these criteria. It will also remove some other obsolete features, such as TLS compression. It may sometimes sound like using TLS is not always worth it, but it is an excellent idea to use HTTPS on your entire site, including any third-party resources that you load in. By doing this, you will be able to take advantage of the performance enhancing features of HTTP/2, which include techniques that mean that it is no longer crucial to serve resources (such as JavaScript libraries) from multiple domains. You can securely host everything yourself and avoid the DNS, TCP, and TLS overhead of additional requests. All of this can also be free because Let's Encrypt and CloudFlare provide certificates at zero cost. Let's look at HTTP/2 in detail now.

HTTP/2

As the name suggests, HTTP/2 is the new version of HTTP. It contains some significant performance improvements for the modern web. It was preceded by SPDY, which has since been deprecated in favor of HTTP/2.

As mentioned previously, the first step toward using HTTP/2 is to use HTTPS on your entire site. Although not technically required, most clients (all the major browsers) mandate the use of TLS to enable HTTP/2. This is mainly due to the Application-Layer Protocol Negotiation (ALPN) that TLS provides, which allows easy support for HTTP/2. It also stops proxy servers from messing up the data, which many ISPs use to reduce their costs and record what their customers do online.

HTTP/2 improves performance in a number of ways. It uses compression even for the headers, and multiplexing, which allows multiple requests to share the same connection. It also allows the server to push resources that it knows the client will need before the client has realized it needs them, although this requires some configuration to set the correct headers, and it can waste bandwidth if it is overused.

Multiplexing has implications for bundling and image concatenation (sprites), which we will talk about in the compression section later on in this chapter. This also means that you don't need to split assets over multiple domains (shards), where the extra overheads may even slow things down. However, you may still wish to use a cookie-free subdomain to serve static assets without cookies, even though the new header compression means that the bandwidth savings will be smaller. If you use a naked domain (without a www), then you may need a new domain name for cookie-less use.

You can identify what version of HTTP is used to deliver your assets using the browser developer tools. In Firefox, you can see this on the details panel of the network tab. You will see the version listed as HTTP/1.1 when the old protocol is in use. The following screenshot shows that packtpub.com uses HTTP/1.1:

    In Chrome, you can right-click on the column headers in the network inspector and add a Protocol column. You can also see more detailed network information by entering chrome://net-internals into the address bar. This displays things, such as sessions for HTTP/2 and Quick UDP Internet Connections (QUIC)—an experimental multiplexed stream transport. The following screenshot shows that emoncms.org also uses HTTP/1.1, even though TLS is configured differently. The encrypted transport layer is transparent to HTTP:

    When HTTP/2 is used, you will see the version listed as HTTP/2.0. The following screenshot displays this for huxley.unop.uk, and it also displays CORS, caching, and content compression headers:

WebSockets

WebSockets is a different protocol to HTTP. However, it is initiated by HTTP and uses the same ports, so we'll discuss it briefly here. This HTML5 feature is useful for push notifications and Real Time Communication (RTC) applications. WebSockets use the ws:// and wss:// protocol prefixes instead of http:// and https://. Once established by an existing HTTP connection, the protocol is full-duplex and binary, in contrast to HTTP/1.

Before WebSockets, if your web server wanted to notify a client of a change, then you would have to use a technique such as long polling. This is where a web request is held open by the server in case it wants to send something. When the request gets a response or it times out, it is re-established. Needless to say, polling is never very efficient. Push notifications can improve performance from a user's point of view because they receive updates as soon as they occur. They don't need to refresh anything or keep checking. You can immediately respond to the user when a long running process starts, run it asynchronously, and notify them immediately upon its completion.

Socket.IO is a popular WebSocket library for Node.js. To see it in action, you can look in the browser developer tools on a site that uses it. For example, if you open the dev tools and go to https://www.opentraintimes.com/maps/signalling/staines, you will see the connection being upgraded from HTTPS to WSS (or from HTTP to WS if you use the insecure version).

WebSockets predate HTTP/2, but they are still relevant despite the new server push technology. These two features appear similar, but they serve different purposes. WebSockets are for real-time and two-way data transfers, and server push is currently just for preloading. In addition to HTTP/2 server push preloading, there is a new browser feature that is currently supported in Android, Chrome, and Opera, which allows you to declare resource preloading in markup using rel="preload" on a link tag. You can read the spec at w3c.github.io/preload and check the current state of browser support at caniuse.com/#feat=link-rel-preload.

In Chrome, the protocol switch will look something like the following screenshot. You can't see the contents of a WebSocket connection, so you won't be able to view the data being transferred from within the dev tools:

    There is a Microsoft library for ASP.NET, which is called SignalR. This library allows you to perform push notifications with WebSockets. It also falls back to long polling if the client or server does not support them. You will need a fairly recent version of Windows Server (2012 or later) and IIS (8.0 and above) to use WebSockets. Unfortunately, the latest stable version (SignalR 2) does not support .NET Core. The new version (SignalR 3) is not planned for release until after the Release to manufacturing (RTM) of ASP.NET Core, but this is a top priority. You can try a beta version, but it may be stable by the time you read this. You may also wish to look at StackExchange.NetGain as a WebSocket server.
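
Until SignalR 3 arrives, you can still work with raw WebSockets in ASP.NET Core yourself. The following is only a rough sketch of an echo endpoint using the Microsoft.AspNetCore.WebSockets middleware; it belongs in Startup.Configure and needs using System.Net.WebSockets; and using System.Threading; at the top of the file:

app.UseWebSockets();

app.Use(async (context, next) =>
{
    if (!context.WebSockets.IsWebSocketRequest)
    {
        await next();
        return;
    }

    // Complete the upgrade handshake and echo frames back to the client
    var socket = await context.WebSockets.AcceptWebSocketAsync();
    var buffer = new byte[4096];
    var received = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);

    while (!received.CloseStatus.HasValue)
    {
        await socket.SendAsync(new ArraySegment<byte>(buffer, 0, received.Count),
            received.MessageType, received.EndOfMessage, CancellationToken.None);
        received = await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
    }

    await socket.CloseAsync(received.CloseStatus.Value, received.CloseStatusDescription,
        CancellationToken.None);
});

A real application would layer message framing, serialization, and reconnection logic on top of this, which is exactly what libraries such as SignalR provide.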

Compression

Data compression is a broad topic, and we can't hope to cover it all. Here, we will learn about lossless compression and how to use it in HTTP. We will also cover lossy image compression of pictures later in the chapter. Compression is important because if we can make files smaller, then we can shorten the time that it takes to transfer them over a network.

Lossless compression algorithms

You may have noticed that the HTTP headers in some of the previous screenshots were related to encoding. The most common compression algorithms for HTTP are gzip and DEFLATE, which are very similar. These are both related to the algorithm used in ZIP files. If you do not already use HTTP compression, then this is a quick win, and it will improve the performance of your site if you enable it.

There are many other more advanced compression algorithms, such as xz, which is similar to the 7-Zip (7z) format and uses the Lempel-Ziv-Markov chain Algorithm (LZMA/LZMA2). However, there are currently only two additional algorithms in common use in major browsers. These are Brotli and Shared Dictionary Compression for HTTP (SDCH). Both are from Google, but only Chrome supports SDCH, and it requires a large dictionary.

Brotli is more interesting. Opera, Chrome (currently behind a chrome://flags/#enable-brotli flag), and Firefox by default (version 44 or higher) support it. These browsers require the use of HTTPS to support Brotli (yet another good reason to use TLS), and the encoding token used in the headers is br. Brotli promises significant performance improvements, especially for mobile devices on slow connections.

If you access a site over HTTP, you will see the following request headers in the Chrome dev tools network inspector in the details of a request:

Accept-Encoding: gzip, deflate, sdch

However, if you use HTTPS, then you will see this instead (after enabling the flag):

Accept-Encoding: gzip, deflate, sdch, br

The server can then respond with Brotli-encoded content using this response header:

Content-Encoding: br

For example, if you visit https://www.bayden.com/test/brotliimg.aspx in a supported browser, then Brotli will deliver the content (an image of a star). Here is a subset (for clarity and brevity) of the request headers from Chrome:

GET /test/brotliimg.aspx HTTP/1.1
Host: www.bayden.com
Connection: keep-alive
Accept-Encoding: gzip, deflate, sdch, br

This is a subset of the corresponding response headers:

HTTP/1.1 200 OK
Content-Type: image/png
Content-Encoding: br
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
YourAcceptEncoding: gzip, deflate, sdch, br

Fiddler (the awesome HTTP debugging proxy by Eric Lawrence that we mentioned previously) also supports Brotli with a simple add-on (drop https://bayden.com/dl/Brotli.exe into fiddler2\tools and restart it). You can use this to easily test the impact on your site without deploying anything to your web servers.
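
If your application is served directly by Kestrel rather than behind IIS or NGINX (which can both compress responses for you), one way to enable gzip for dynamic content is the response compression middleware. This is only a hedged sketch, assuming the Microsoft.AspNetCore.ResponseCompression package is available for your version of ASP.NET Core; always compare the response sizes with and without it:

// In Startup.ConfigureServices
services.AddResponseCompression();

// In Startup.Configure, registered before the middleware that generates
// responses (for example, static files or MVC) so it can wrap their output
app.UseResponseCompression();
app.UseStaticFiles();
app.UseMvc(routes =>
{
    routes.MapRoute("default", "{controller=Home}/{action=Index}/{id?}");
});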

Bundling and minification

Bundling and minification are techniques that you may already be familiar with. They speed up the delivery of static assets. They are usually used for text files, such as JavaScript and CSS content.

Bundling

Bundling is the technique of combining or concatenating multiple files together so that they can be delivered as one. This is a good idea when using HTTP/1.1 because the number of concurrent connections is limited. However, bundling is less necessary with HTTP/2, and in fact, it can reduce performance. The new multiplexing in HTTP/2 means that there is no longer a large penalty when you request many files instead of one that contains all of the same content. You can take advantage of this by only delivering what is needed for a page rather than the entire client-side codebase for every page.

Even if you selectively bundle per page, this could be inefficient. For example, you may include a validation library for use with forms. However, because this is bundled, it will be sent to all pages, including the ones with no forms to validate. If you have a separate bundle for validated pages, then there may be duplication in the common core code that is also sent. By keeping things separated, the client can cache them individually and reuse components. This also means that if you change something, you only need to invalidate the cache of this one part. The client can keep using the other unmodified parts and not have to redownload them.

    As always, you should measure for your particular use case. You may find that bundling still reduces the total file size. The overheads for HTTP/2 are much lower but still not zero, and compression can work better on larger files. However, keep in mind the implications for caching and reusability.

Minification

Minification is the process of reducing the file size of a textual static asset. We do this by various means, including stripping out comments, removing whitespace, and shortening variable names. It can also be useful to obfuscate code to make it harder to reverse engineer. Minification is still useful when you use HTTP/2, but you should be careful when testing to compare pre-minified and post-minified file sizes after the lossless compression has also been applied. As discussed previously, you should use HTTP content compression with at least the gzip or DEFLATE algorithms. These are pretty efficient, so you may find that when compressed, your minified file is not much smaller than the compressed raw source file.

Changes in ASP.NET Core

In the full .NET Framework and previous versions of MVC, there was an integrated bundling and minification system. This has changed for ASP.NET Core, and there are new tools to perform this work. The new tools that were adopted include the task runner gulp, although you can use Grunt if you prefer to. Also, a generator tool called Yeoman is used for scaffolding. There are now package managers, such as Bower and npm, which are similar to NuGet, but for frontend libraries. For example, NuGet no longer delivers jQuery and Twitter Bootstrap, and they use Bower instead by default. Most of these tools are written in JavaScript, and they run on Node.js. The package manager for Node.js is npm. These tools are popular in other open source web frameworks, and they are well established. They're not new to the scene, only new to .NET.

Gulp packages come from npm and are responsible for the minification of your static assets. This is now done at build time, as opposed to request time, as was previously the case. It works much more like a static site generator than a dynamic web application. A gulp file (gulpfile.js) in the root of your project configures these tasks using JavaScript.

The new tooling is not restricted only to ASP.NET Core, and you can use these features with traditional ASP.NET applications in Visual Studio. This is a good example of the cross-pollination and benefits that the new frameworks can provide to the existing ones.

Image optimization

Digital media compression is much more complicated than the lossless file compression that we talked about previously, even if we just stick to images. We briefly mentioned when to use PNG and when to use JPEG in the previous chapter. Here, we'll go into much more detail and explore some other exotic options. We covered the rule of thumb, which says that PNG is the best image format for icons and JPEG is better for photos. These two formats are the most common for lossless and lossy image compression, respectively. We will talk more about other image formats later, but you are usually constrained to the popular formats by what browsers support. So, how can you get more out of the common choices?

PNG

Portable Network Graphics (PNG) is a lossless image compression format that internally works similarly to a ZIP file (using the DEFLATE algorithm). It's a good choice for images that contain solid blocks of color, and it has a better quality (with more colors) than the old Graphics Interchange Format (GIF).

PNG supports transparency in all modern browsers, so you should use it instead of GIF for static images. This is not a problem unless you need to support Internet Explorer 6, in which case this is probably the least of your troubles. PNG also supports animation with Animated PNG (APNG) files. These are like animated GIFs but of a much higher quality. Unfortunately, only Firefox and Safari support APNGs.

A great site to look up which browsers support a particular feature is caniuse.com. You can search for feature support, then check this against the user agent analytics of your site. For example, you could search for PNG-alpha, Brotli, or APNG.

    Some ZIP algorithm implementations are better than others, and they produce smaller files that can still be decoded by everyone. For example, 7-Zip is much more efficient than most other zip compression software on Windows, even when using the ZIP format, not its native 7z format. Likewise, you can compress a PNG more compactly without losing any data and still have it work in all browsers. This usually comes with a higher upfront computational cost. However, if you compress static assets, which rarely change, then it can be well worth the effort. You may already use the PNGOUT tool to losslessly reduce the size of your PNG images. If you're not, then you probably should. You can read more about it and download it at advsys.net/ken/utils.htm. However, there is a new algorithm called Zopfli that offers better compression, but it is very slow to compress. Decompression is just as quick, so it's only a single optimization cost for precompiled resources. Zopfli is a precursor to Brotli, but it's compatible with DEFLATE and gzip, as it's not a new format. You can get Zopfli from github.com/google/zopfli, but you should always test with your images and verify that there is indeed a file size reduction. You will find that these tools can help you deliver your assets quicker and achieve higher performance. You may also use the practice of combining many sprites into one image. As with bundling, this is less necessary when using HTTP/2. However, the same caveats apply as with compression, and you should always test for your set of images.

JPEG

JPEG is a lossy image compression format, which means that it usually discards data from the picture to make it smaller. It is best suited to natural gradients and continuous tones, such as those found in photographs. JPEG does not support transparency like PNG does, so if you want to use one on different backgrounds, then you will need to prerender them. It's a good space-saving idea to remove the Exchangeable image file format (Exif) metadata from your JPEG files for the web. This contains information about the camera used and geographic data of where the photo was taken.

JPEG has a quality setting, which affects the image file size and the level of detail. The lower the quality, the smaller the file, but the worse it will look. You can perform tests on your images to see what settings provide an acceptable trade-off. Crucially, the best value for this quality setting will vary per image, depending on the content. There are tools that allow you to automatically detect the optimal quality level, such as Google's butteraugli.

    There is an interesting project from Mozilla (the makers of the Firefox browser) called mozjpeg. This aims to better compress JPEG images and is similar to what PNGOUT and Zopfli do for PNG images. You can use mozjpeg to compress your JPEG images to a smaller size than normal, without affecting decompression or quality. It is available at github.com/mozilla/mozjpeg, but you will need to compile it yourself. As always, results may vary, so test it for the photos on your site. JPEG Archive (github.com/danielgtaylor/jpeg-archive) is a handy tool that uses mozjpeg to compress JPEG images, using various comparison metrics. Another similar tool is imgmin (github.com/rflynn/imgmin), which is slightly older.

Other image formats

Many other image formats are available, but you are usually limited on the web by what browsers support. As discussed in the previous chapter, you shouldn't scale images in the browser, or you will get poor performance. This usually means having to save multiple separate copies of smaller images, for example, when displaying thumbnails. Clearly this results in duplication, which is inefficient. Some of these new image formats have clever solutions to the problem of responsive and scalable images.

BPG is an image format by the talented Fabrice Bellard, and you can read more about it at bellard.org/bpg. It has a JavaScript polyfill to support browsers before native support is added to any of them. WebP is an image format from Google, and only Chrome, Android, and Opera support it. It has impressive space savings over JPEG, and it will be a good choice if it becomes more widely supported, so check caniuse.com for the latest adoption stats.

JPEG2000 is an improved version of JPEG, although it may be encumbered by software patents, so it hasn't seen widespread adoption outside of medical imaging. Only Safari supports JPEG2000, and there is also JPEG XR, which is only supported in IE. Whereas JPEG uses a Discrete Cosine Transform (DCT), JPEG2000 is based on a Wavelet Transform. One of the properties this provides is a progressive download. This means that the image is stored in such a way that if you download a small part from the beginning of the file, then you have a smaller and lower quality version of the full image. This has obvious applications for responsive and scalable images. The browser would only need to download enough of the image to fill the area it is rendering to, and the file need only be stored once. No resizing and no duplication for thumbnails would be required. This technique is also used in the Free Lossless Image Format (FLIF).


FLIF is one of the more exciting upcoming image formats, as it is progressive and responsive, but free and not patented. FLIF is still in development, but it promises to be very useful if browsers support it, and you can read more about it at flif.info. JPEG and PNG can support progressive download, but this isn't normally useful for responsive images. Progressive JPEG subjectively loads more gracefully and can even make files smaller, but interlaced PNG usually makes files bigger. The problem is that most of these progressive image formats are not yet ready for the mainstream, because not all of the major browsers support them. It's a good idea to keep an eye on the future, but for now, we need to resize images for high performance.

Resizing images

Until new image formats gain widespread adoption, resizing is still required, and you may need to do this dynamically for different devices. Perhaps you also have user-submitted image content, although you need to be very careful with this from a security point of view. Some image libraries are not safe, and a specially-crafted image can exploit your system. In fact, many image-processing libraries have issues when they are used in a web context. If you are not extremely diligent and careful, then you can easily end up with memory leaks, which can take down your web server. It is always a good idea to separate and sandbox a process that deals with large media files.

Coming from a .NET standpoint, it can be tempting to use WinForms System.Drawing or its WPF successor (System.Windows.Media). However, these were designed for desktop software, and Microsoft strongly recommends against using them in a service or web application. Microsoft recommends the Windows Imaging Component (WIC), but this is a Component Object Model (COM) API that is meant for use from C or C++ apps. In addition to this, none of these imaging libraries are cross-platform, so they are not suitable for use in .NET Core.

If you use Windows, then you could try using ImageResizer by Imazen (imazen.io), from imageresizing.net. While it still uses the GDI+ System.Drawing, it is pretty battle hardened, so most of the bugs should have been worked out. There's also DynamicImage, which wraps the newer WPF image functions and uses shaders. You can read more about it at dynamicimage.apphb.com, although it hasn't been updated in a while, and it doesn't support .NET Core.


A popular option in open source circles is ImageMagick, which we've mentioned previously, and a fork called GraphicsMagick, which claims to be more efficient. Another popular image library is LibGD, and it's suitable for server use. You can read more at libgd.github.io. Although it's written in C, there are wrappers for other programming languages, for example, DotnetGD targeting .NET Core.

One thing that .NET Core currently lacks is a compelling option for image processing. ImageResizer 5 may help with this when released, so it is worth keeping an eye on it. Native code support is now much better in .NET Core (it was a pain in classic .NET), which may help with integrating native cross-platform imaging libraries. There is also a new cross-platform version of the open source ImageProcessor libraries (imageprocessor.org), called ImageProcessorCore, which shows promise. However, this is still a work in progress, and it is not yet stable. If you want to try it out, then you can get the nightly packages from MyGet or build it from source. Platform support and compatibility changes rapidly, so check ANCLAFS.com for the latest information. Feel free to contribute to this list or to the projects.

For now, it may be easier to install an open source service, such as Thumbor, or use a cloud-based imaging service, such as ImageEngine (WURFL.io) or Cloudinary, which we've already mentioned. Image manipulation is a common task, and it is effectively a solved problem. It may be better to use an existing solution and not reinvent the wheel, unless it's part of your core business or you have very unusual requirements.

Once you have your resized images, you can load them responsively with the picture and source tags using the srcset and sizes attributes. You can also use this technique to provide newer image formats (such as WebP), with a fallback for browsers that don't yet support them. Or you can use Client Hints (refer to httpwg.org/http-extensions/clienthints.html and caniuse.com/#feat=client-hints-dpr-widthviewport).

Caching

It is often said (originally by Phil Karlton) that caching is one of the hardest problems in computer science, along with naming things. This may well be an exaggeration, but caching is certainly difficult. It can also be very frustrating to debug if you are not methodical and precise in your approach.


    Caching can apply at various different levels from the browser to the server using many diverse technologies. You rarely use just a single cache even if you don't realize it. Multiple caches don't always work well together, and it's vexing if you can't clear one. We briefly touched upon caching in the previous chapter, and we'll go into much more detail in Chapter 7, Learning Caching and Message Queuing. However, as caching has an impact on network performance, we'll cover it here as well.

Browser

A lot of caching happens in the web browser, which is inconvenient because you don't have direct control over it (unless it's your browser). Asking users to clear their cache is unsatisfactory and confusing to many. Yet, you can exert influence on how browsers cache resources by carefully controlling the HTTP headers that you set and the URLs that you use.

If you fail to declare what resources are cacheable and for how long, then many browsers will just guess this. The heuristics for this can be wildly different between implementations. Therefore, this will result in suboptimal performance. You should be explicit and always declare cache information, even (and especially) for assets that shouldn't be cached, by marking them as noncacheable. You need to be vigilant with what you advertise as cacheable because if you are careless, then you can get yourself into a situation where you're unable to update a resource. You should have a cache-busting strategy in place, and tested, before using caching.

There are various technologies used to cache in browsers. Many different HTTP headers can be set, such as Age, Cache-Control, ETag (Entity Tag), Expires, and LastModified. These come from a few different standards, and the interactions can be complex, or they vary between browsers. We will explain these in more detail in Chapter 7, Learning Caching and Message Queuing. Another technique is to use a unique URL for content. If a URL changes, then a browser will treat it as a different resource, but if it is the same, then it may load it from its local cache. Some frameworks calculate a hash of the file contents, and then they use this as a query string parameter. This way, when the contents of the file changes, so does the URL (a small sketch of this idea appears at the end of this section).

There are other and more modern features that you can use to cache, such as the HTML5 Application Cache (or AppCache). This was designed for offline web applications and wasn't very flexible. Busting the cache was complicated, to put it mildly. AppCache is already deprecated, and you should use Service Workers instead. These provide much more flexibility, although support is pretty recent.
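The hash-based URL technique just mentioned can be illustrated with a short sketch. This is a minimal example rather than any framework's actual implementation; the helper name and the assumption that assets live under a wwwroot-style folder are mine for illustration. ASP.NET Core's script, link, and image Tag Helpers offer a built-in version of the same idea via the asp-append-version attribute.

using System;
using System.IO;
using System.Security.Cryptography;

public static class CacheBuster
{
    // Returns a URL with a version query string derived from the file contents,
    // for example /css/site.css?v=3g0K...; when the file changes, the URL changes,
    // so browsers fetch the new version instead of reusing a stale cached copy.
    public static string VersionedUrl(string webRootPath, string relativeUrl)
    {
        var filePath = Path.Combine(webRootPath, relativeUrl.TrimStart('/'));
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(filePath))
        {
            var hash = sha256.ComputeHash(stream);
            // Make the Base64 hash URL-safe before using it as a query string value
            var version = Convert.ToBase64String(hash)
                .Replace('+', '-').Replace('/', '_').TrimEnd('=');
            return relativeUrl + "?v=" + version;
        }
    }
}

In a real application, you would cache the computed hash rather than hashing the file on every request.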


There are many improvements coming in the latest browsers that give you more control, and we'll also show you how to use them in Chapter 7, Learning Caching and Message Queuing.

Server

The web server is a great place to cache because it is usually under your complete control. However, it's not really part of network performance, apart from generating the correct headers. There can be other great performance benefits with server-side caching in terms of improving the speed to generate pages, but we will cover these in later chapters.

If you use the traditional .NET Framework on Microsoft's Internet Information Services (IIS) web server, then you can use output caching from within your application. This will take care of setting the correct headers and sending 304 (Not Modified) responses to browser requests. It will also cache the output on the server in memory, on disk, or using Memcached/Redis. You can add attributes to your controller action methods to control the caching options, but other ways of doing this are available, for example, in the configuration files.

OutputCache is not available in ASP.NET Core, but you can use ResponseCache to set the correct headers. The output is not cached on the server, but you can install a caching proxy in front of it. Again, we will cover this more and demonstrate server-side caching in Chapter 7, Learning Caching and Message Queuing.

If you want to disable caching on an ASP.NET Core page, then add this annotation to your controller action:

[ResponseCache(NoStore = true, Duration = 0)]

This will set the following header on the HTTP response and ensure that it is not cached:

Cache-Control: no-store

To cache a page for an hour, add the following instead; Duration is in seconds:

[ResponseCache(Duration = 3600, VaryByHeader = "Accept")]

The cache control header will then look like the following:

Cache-Control: public,max-age=3600

    There's plenty more to say about other caching configuration options and profiles. Therefore, if you're interested, then read the later chapters. It's a complex topic, and we've only scratched the surface here.
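As one small taste of those options, ASP.NET Core MVC lets you define cache profiles centrally and reference them by name from your actions, so durations aren't scattered through your code. The following is a minimal sketch; the profile name "Default" and the one-hour duration are arbitrary choices for illustration.

using Microsoft.AspNetCore.Mvc;

// In Startup.ConfigureServices, define a named cache profile once
services.AddMvc(options =>
{
    options.CacheProfiles.Add("Default", new CacheProfile
    {
        Duration = 3600,
        VaryByHeader = "Accept"
    });
});

// Then reference the profile from any controller action
[ResponseCache(CacheProfileName = "Default")]
public IActionResult Index()
{
    return View();
}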


    You can read documentation about response caching in ASP.NET Core at docs.asp.net/en/latest/performance/caching/response.html.

    These caching directives not only instruct the browser, but they also instruct any proxies on the way. Some of these may be in your infrastructure if you have a caching proxy, such as Squid, Varnish, or HAProxy. Or perhaps, you have a TLS-terminating load balancer (such as Azure Application Gateway) to reduce the load on your web servers that also caches. You can forcibly flush the caches of servers that you control, but there may be other caches in between you and your users where you can't do this.

Proxy servers between you and your users

There can be many proxy servers between you and your users over which you have no direct control. They may ignore your caching requests, block parts of your site, or even modify your content. The way to solve these problems is to use TLS, as we have already discussed. TLS creates a secure tunnel so that the connection between your infrastructure and the browser can't easily be tampered with.

Corporate proxies commonly perform a Man in the Middle (MitM) attack on your connection to the user so that they can spy on what employees are doing online. This involves installing a custom trusted root certificate on users' workstations so that your certificate can be faked. Unfortunately, there isn't much you can do about this, apart from educating users. Certificate pinning is effective in native apps, but it's not so useful for web applications. HTTP Public Key Pinning (HPKP) is available but, as it is a Trust on First Use (TOFU) technique, the initial connection could be intercepted. Client certificates are another option, but they can be difficult to distribute, and they aren't commonly used.

MitM can be useful if you trust the third party and remain in control. This is used by some Content Delivery Networks (CDNs) to speed up your site.

CDNs

CDNs can improve the performance of your site by storing copies of your content at locations closer to your users. Services, such as the ones provided by CloudFlare, perform a MitM on your connection and save copies at data centers around the world. The difference from an unauthorized proxy is that you control the configuration, and you can purge the cache whenever you like.


You should be careful because if you don't use the caching features, then this can reduce the responsiveness of your site due to the extra hops involved. Make sure that you monitor the response times with and without a CDN, and have a fallback plan in case it goes down.

Another common use case for CDNs is to distribute popular libraries, for example, the jQuery JavaScript library. There are free CDNs from jQuery (MaxCDN), Google, Microsoft, and cdnjs (CloudFlare) that do this. The hypothesis is that a user may already have the library from one of these in their cache. However, you should be extremely careful that you trust the provider and connection. When you load a third-party script into your site, you are effectively giving them full control over it, or at least relying on them to always be available.

If you choose to use a CDN, then ensure that it uses HTTPS to avoid tampering with scripts. You should use explicit https:// URLs on your secure pages, or at least protocol-agnostic URLs (//), and never http://. Otherwise, you will get mixed content warnings, which some browsers display as totally unencrypted or even block. You will need a fallback that is hosted on your own servers anyway, in case the CDN goes down. If you use HTTP/2, then you may find that there is no advantage to using a CDN. Obviously, always test for your situation. There are some useful new features in ASP.NET Core views to easily enable local fallback for CDN resources. We'll show you how to use them and other features in later chapters.

Summary

In this chapter, you learned how to improve performance at the network level, between the edge of your infrastructure and your users. You now know more about the internet protocols under your application and how to optimize your use of them for best effect.

You learned how to take advantage of compression to shrink text and image files. This will reduce bandwidth and speed up delivery of assets. We also highlighted caching, and you should now see how important it is. We'll cover caching more in Chapter 7, Learning Caching and Message Queuing.

In the next chapter, you will learn how to optimize the performance inside your infrastructure. You will see how to deal with I/O latency, and how to write well-performing SQL.


5
Optimizing I/O Performance

This chapter addresses issues that often occur when you take your functionally tested application and split it up into parts for deployment. Your web servers host the frontend code, your database is somewhere else in the data center, you may have a Storage Area Network (SAN) for centralized files, an app server for APIs, and the virtual disks are all on different machines as well. These changes add significant latency to many common operations, and your application now becomes super slow, probably because it's too chatty over the network. In this chapter, you will learn how to fix these issues by batching queries together, and performing work on the best server for the job. Even if everything runs on one machine, the skills that you'll learn here will help to improve performance by increasing efficiency.

The topics covered in this chapter include the following:

The operations that can be slow
Select N+1 problems in detail
Returning only what you need
Writing high-performance SQL

You will learn about the operations that you shouldn't use synchronously, and how to query for only the data that you need in an efficient manner. You'll also see how to tame your O/RM, and learn to write high-performance SQL with Dapper. We briefly covered some of these topics in Chapter 3, Fixing Common Performance Problems, but here we'll dive into greater detail. The first half of this chapter will focus on background knowledge and using diagnostic tools, while the second half will show you solutions to issues you may come across. You'll also learn about some more unusual problems, and how to fix or alleviate them.


    We'll initially focus on understanding the issues, because if you don't appreciate the root cause of a problem, then it can be difficult to fix. You shouldn't blindly apply advice that you read, and expect it to work successfully. Diagnosing a problem is normally the hard part, and once this is achieved, it is usually easy to fix it.

Input/output

I/O is a general name for any operation in which your code interacts with the outside world. There are many things that count as I/O, and there can be plenty of I/O that is internal to your software, especially if your application has a distributed architecture.

The recent rise in popularity of the .io Top Level Domain (TLD) can be partly attributed to standing for I/O, but that is not its real meaning. As is the case for some other TLDs, it is actually a country code. Other examples include .ly for Libya and .tv for Tuvalu (which, like the neighboring Kiribati, may soon be submerged beneath the Pacific Ocean due to climate change). The TLD .io is intended for the British Indian Ocean Territory (BIOT), a collection of tiny but strategic islands with a shameful history. The .io TLD is therefore controlled by a UK-based registry. BIOT is nothing more than a military base, and also happens to be a hop on the proposed AWE fiber optic cable between Australia and Djibouti.

In this chapter, we will focus on improving the speed of I/O, not on avoiding it. Therefore, we won't cover caching here. Both I/O optimizing and caching are powerful techniques on their own, and when they're combined, you can achieve impressive performance. See Chapter 7, Learning Caching and Message Queuing for more on caching.

Categories of I/O

The first challenge is to identify the operations that trigger I/O. A general rule of thumb in .NET is that if a method has an asynchronous API (MethodAsync() variants), then it is declaring that it can be slow, and may be doing I/O work. Let's take a closer look at some of the different kinds of I/O.


Disks

The first type of I/O we will cover is reading from, and writing to, persistent storage. This will usually be some sort of a disk drive, such as a spinning platter Hard Disk Drive (HDD), or as is more common these days, a flash memory-based Solid State Drive (SSD).

HDDs are slower than SSDs for random reads and writes, but are competitive for large block transfers. The reason for this is that the arm on the HDD has to physically move the head to the correct location on the magnetic platter before it can begin the read or write operations. If the disk is powered down, then it can take even longer, as the platters will have to Spin-up from a stationary position to the correct revolutions per minute (rpm) beforehand.

You may have heard the term "Spin-up" in reference to provisioning a generic resource. This historically comes from the time taken to spin the platters on a rotating disk up to the operational speed. The term is still commonly used, even though these days there may not be any mechanical components present. Terminology like this often has a historical explanation. As another example, a floppy disk icon is normally used to represent the save function. Yet floppy disks are no longer in use, and many younger users may never have encountered one.

Knowing what type of drive your code is running on is important. HDDs perform badly if you make lots of small reads and writes. They prefer to have batched operations, so writing one large file is better than many smaller ones. The performance of disks is similar to that of a network, in that there is both latency and throughput, often called bandwidth in networking parlance. The latency of an HDD is high, as it takes a relatively long time to get started, but once started, the throughput can be respectable. You can read data rapidly if it's all in one place on the disk, but it will be slower if it is spread all over, even if the total data is less. For example, copying a single large file disk-to-disk is quick, but trying to launch many programs simultaneously is slow.

SSDs experience fewer of these problems as they have lower latency, but it is still beneficial to keep random writes to a minimum. SSDs are based on flash memory (similar to the chips used in memory cards for phones and cameras), and they can only be written to a fixed number of times. The controller on the SSD manages this for you, but the SSD's performance degrades over time. Aggressive writing will accelerate this degradation.


    Multiple disks can be combined to improve their performance and reliability characteristics. This is commonly done using a technology called Redundant Array of Independent Disks (RAID). Data is split across multiple disks to make it quicker to access, and more tolerant to hardware failures. RAID is common in server hardware, but can increase the startup time, as Spin-up is sometimes staggered to reduce the peak power draw. HDDs offer much larger capacity than SSDs, and so are a good choice for storage of infrequently used files. You can get hybrid drives, which combine an HDD with an SSD. These claim to offer the best of both worlds, and are cheaper than SSDs of an equivalent size. However, if you can afford it, and if you can fit all of your data on an SSD, then you should use one. You will also decrease your power and cooling requirements, and you can always add an additional HDD for mass storage or backups.

Virtual file systems

File access can be slow at the best of times due to the physical nature of the disks storing the data, as mentioned previously. This problem can be compounded in a virtualized environment, such as a cloud-hosted infrastructure. The storage disks are usually not on the same host server as the virtual machine, and will generally be implemented as network shares, even if they appear to be mounted locally. In any case, there is always an additional problem, which is present whether the disk is on the VM host or somewhere else on the network, and that is contention.

On a virtualized infrastructure, such as those provided by AWS and Azure, you share the hardware with other users, but a physical disk can only service a single request at once. If multiple tenants want to access the same disk simultaneously, then their operations will need to be queued and timeshared. Unfortunately, this abstraction has much the same detrimental effect on performance as reading lots of random files. Users are likely to have their data on disk stored in locations different from other customers. This will cause the arm on the drive to frequently move to different sectors, reducing throughput and increasing latency for everyone on the system.

All this means that on shared virtual hosting, using an SSD can have a bigger positive performance impact than normal. Even better is to have a local SSD, which is directly attached to the VM host, and not to another machine on the network. If disks must be networked, then the storage machine should be as close as possible to the VM using it. You can pay extra for a dedicated VM host where you are the only tenant. However, you may as well then be running on bare metal, and reaping the benefits of reduced costs and higher performance. If you don't require the easy provisioning and maintenance of VMs, then a bare metal dedicated server may be a good option.


    Many cloud hosting providers now offer SSDs, but most only offer ephemeral local disks. This means that the local disk only exists while your VM is running, and vanishes when it is shutdown, making it unsuitable for storing the OS if you want to bring a VM back up in the same state. You have to write your application in a different way to take advantage of an ephemeral local drive, as it could disappear at any time, and so can only be used for temporary working storage. This is known as an immutable server, which means it doesn't change and is disposable. This normally works better when the OS is Linux, as it can be tricky to bootstrap new instances when running Windows.

Databases

Databases can be slow because they rely on disks for the storage of their data, but there are other overheads as well. However, DBs are usually a better way of storing significant data than flat files on disk. Arbitrary data can be retrieved quickly if it is indexed, much quicker than scanning a file by brute force.

Relational databases are a mature and very impressive technology. However, they only shine when used correctly, and how you go about querying them makes a massive difference to performance. DBs are so convenient that they're often overused, and are typically the bottleneck for a web application. An unfortunately common anti-pattern is requiring a database call in order to render the homepage of a website. An example is when you try to visit a website mentioned on live TV, only to discover that it has crashed due to the MySQL DB being overloaded. This sort of a website would be better architected as a static site with the client-side code hitting cached and queued web APIs.

The pathological case for a slow DB is where the web server is in one data center, the database server is in another, and the disks for the DB are in a third. Also, all the servers may be shared with other users. Obviously, it's best not to end up in this situation, and to architect your infrastructure in a sane way, but you will always have some latency. There are application programming techniques that allow you to keep your network and DB chatter to a minimum. These help you to improve the performance and responsiveness of your software, especially if it is hosted in a high-latency virtualized environment. We will demonstrate some of these skills later on in this chapter.


APIs

Modern web application programming generally involves using third-party services and their associated APIs. It's beneficial to know where these APIs are located, and what the latency is. Are they in the same data center, or are they on the other side of the planet? Unless you've discovered some exciting new physics, light only travels so fast. Today, almost all intercontinental data travels by fiber optic cables. Satellites are rarely used anymore, as the latency is high, especially for geostationary orbits. Many of these cables are under the oceans, and are hard to fix. If you rely on an API on a different continent, not only can it slow you down, but it also exposes you to additional risk. You probably shouldn't build an important workflow that can be disrupted by a fisherman trawling in the wrong place. You also need to further secure your data, as some countries (such as the UK) are known to tap cables and store the communications, if they cross their borders.

One of the issues with APIs is that latency can compound. You may need to call many APIs, or maybe an API calls another API internally. These situations are not normally designed this way, but can grow organically as new features are added, especially if no refactoring is performed periodically to tidy up any mess.

One common form of latency is startup time. Websites can go to sleep if not used, especially if using the default Internet Information Services (IIS) settings. If a website takes a non-negligible amount of time to wake up, and all the required APIs also need to wake up, then the delays can quickly add up to a significant lag for the first request. It may even time out.

There are a couple of solutions to this initial lag problem. If you use IIS, then you can configure the application pool to not go to sleep. The defaults in IIS are optimized for shared hosting, so they will need tweaking for a dedicated server. The second option is to keep the site alive by regularly polling it with a health check or uptime monitoring tool (a simple poller is sketched at the end of this section). You should be doing this anyway so that you know when your site goes down, but you should also ensure that you are exercising all the required dependencies (such as APIs and DBs). If you are simply retrieving a static page or just checking for a 200 status code, then services may go down without you realizing.

Similarly, scaling can have a lag. If you need to scale up, then you should preheat your load balancers and web servers. This is especially important if using an AWS Elastic Load Balancer (ELB). If you're expecting a big peak in traffic, then you can ask AWS to have your ELBs prewarmed. An alternative would be using Azure Load Balancer, Azure Application Gateway, or running HAProxy yourself so that you have more control. You should also be running load tests, which we'll cover in Chapter 9, Monitoring Performance Regressions.
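Such a keep-alive poller doesn't need to be sophisticated. The following console sketch simply requests a health check URL at a fixed interval and logs the result; the URL and interval are placeholder assumptions, and a real monitor should also alert you and hit an endpoint that exercises your APIs and DBs rather than a static page.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class KeepAlivePoller
{
    private static readonly HttpClient client = new HttpClient();

    public static void Main()
    {
        PollAsync().GetAwaiter().GetResult();
    }

    private static async Task PollAsync()
    {
        // Hypothetical health check endpoint; it should touch your dependencies
        var healthUrl = "https://www.example.com/healthcheck";
        while (true)
        {
            try
            {
                var response = await client.GetAsync(healthUrl);
                Console.WriteLine($"{DateTime.UtcNow:u} {(int)response.StatusCode}");
            }
            catch (Exception ex)
            {
                Console.WriteLine($"{DateTime.UtcNow:u} DOWN: {ex.Message}");
            }
            // Poll every five minutes; this also keeps an idle IIS application pool awake
            await Task.Delay(TimeSpan.FromMinutes(5));
        }
    }
}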


Network diagnostics tools

As we discovered earlier, practically all I/O operations in a virtualized or cloud-hosting infrastructure are now network operations. Disks and databases are rarely local, as this would prevent scaling out horizontally. There are various command-line tools that can help you discover where the API, DB, or any other server you're using is located, and how much latency is present on the connection.

While all of these commands can be run from your workstation, they are most useful when run from a server via a Secure Shell (SSH) or Remote Desktop Protocol (RDP) connection. This way, you can check where your databases, APIs, and storage servers are, in relation to your web servers. Unfortunately, it is common for hosting providers to geographically separate your servers, and put them in different data centers. For example, if using AWS, then you would want to configure your servers to be in at least the same region, and preferably in the same Availability Zone (AZ), which usually means the same data center. You can replicate (cluster) your DB or file server across AZs (or even across regions) so that your web servers are always talking to a server on their local network. This also adds redundancy, so in addition to increasing performance, it will make your application more resilient to hardware faults or power supply failures.

Ping

Ping is a simple networking diagnostics tool, available on almost all operating systems. It operates at the IP level and sends an Internet Control Message Protocol (ICMP) echo message to the host specified. Not all machines will respond to pings, or requests may be blocked by firewalls. However, it's good netiquette to allow servers to respond for debugging purposes, and most will oblige. For example, open a command prompt or terminal, and type the following:

ping ec2.eu-central-1.amazonaws.com

    This will ping an Amazon Web Services (AWS) data center in Germany. In the response, you will see the time in milliseconds. From the UK, this round-trip time (RTT) may be something like 33ms, but your results will vary. On Windows, by default, ping performs four attempts, then exits. On a Unix-like OS (such as Mac OS X, BSD, or Linux), by default, it continues indefinitely. Press Ctrl+C to stop and quit.


Try this command next, which will do the same, but for an AWS data center in Australia:

ping ec2.ap-southeast-2.amazonaws.com

From the UK, the latency now goes up, by almost an order of magnitude, to around 300 ms. AWS doesn't have any data centers in the UK, and neither do Microsoft and Google (read into that what you will). So to ping a UK hosting provider, enter the following:

ping bytemark.co.uk

    The latency now decreases to an average of 23ms, as our connection has (probably) not left the country. Obviously, your results will vary depending on where you are. Next we'll see how to discover what route our data is taking, as it's not always only distance that's important. The number of hops can likewise be significant. The following image shows the output of the three ping operations that we have just performed to Germany, the UK, and Australia. Note the difference in the timings; however, your results will be different, so try this for yourself.


    IPv4 addresses starting with 54 (the ones in the form 54.x.x.x) are a clue that the server may be running on an AWS Elastic Compute Cloud (EC2) virtual server. Perform a reverse DNS lookup with nslookup or ping (covered later in this chapter) to confirm if this is the case. AWS provides IP address ranges at the following link: docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html

Tracert

Tracert (or traceroute on a Unix-like OS) is a tool which, as the name suggests, traces the route to a destination host. Enter the following command:

tracert www.google.com

You should be able to see the connection leaving the network of your Internet Service Provider (ISP), and entering the domain 1e100.net, which is Google's domain. 1.0 x 10^100 is a googol, which is their namesake. The following image shows the output that you might see for this trace:

Next, let's trace a route to Australia by running the following command with the same AWS host name as our earlier example:

tracert ec2.ap-southeast-2.amazonaws.com


    This may take some time to run, especially if some hosts don't respond to pings and traceroute has to timeout. If you get asterisks (* * *), then this could indicate the presence of a firewall. Your results may look something like the following image:

In the preceding example, we can see the connection leaving the British Telecom (BT) network, and entering the Nippon Telegraph and Telecom (NTT) Global IP Network. We can even see the route taken from London to Sydney, via Amsterdam, Ashburn (east US, in Virginia), and Los Angeles. The hostnames suggest that the connection has gone via the Faraday telephone exchange building, near St. Paul's Cathedral in London (named after electrical pioneer Michael Faraday), and entered Amazon's network in LA.

This isn't the whole story, as it only shows the IP level. At the physical level, the fiber likely comes back to the UK from the Netherlands (possibly via Porthcurno, Goonhilly Satellite Earth Station, or more likely, Bude, where GCHQ conveniently have a base). Between LA and Australia, there will also probably be a stopover in Hawaii (where the NSA base that Edward Snowden worked at is located). There are maps of the connections available at submarinecablemap.com and www.us.ntt.net/about/network-map.cfm. It's a good idea to have at least a basic understanding of how the internet is physically structured, in order to achieve high performance.


    If we now trace the routes to the AWS data centers in Korea and Japan, we can see that, initially, they both take the same route as each other. They go from London to New York and then to Seattle, before reaching Osaka in Japan. The Korean trace then carries on for another eleven hops, but the Japanese trace is done in six, which makes logical sense. The following image shows the typical results of a trace to Korea first, then the results of a second trace to Japan.

    You can use the difference in time between hops to work out the approximate geographic distance. However, there are occasionally anomalies if some systems respond quicker than others.


    If you're on a flight with free Wi-Fi, then a traceroute is an interesting exercise to perform. The internet connection is likely going via satellite, and you'll be able to tell the orbit altitude from the latency. For example, a geostationary orbit will have a large latency of around 1,000 ms, but a Low Earth Orbit (LEO) will be much smaller. You should also be able to work out where the ground station is located.

Nslookup

Nslookup is a tool for directly querying a DNS server. Dig is another similar tool, but we won't cover it here. Both ping and traceroute have performed DNS lookups, but you can do this directly with nslookup, which can be very useful. You can call nslookup with command-line parameters, but you can also use it interactively. To do this, simply type the name of the tool into a console or a command prompt, as follows:

nslookup

You will now get a slightly different command prompt from within the program. By default, the DNS name servers of the computer you're on are used, and it bypasses any entries in your local hosts file.

A hosts file can be very useful for testing changes prior to adding them to DNS. It can also be used as a crude blocker for adverts and trackers by setting their addresses to 0.0.0.0. There are local DNS servers you can run to do this for your whole network. One such project is pi-hole.net, which is based on dnsmasq, but which simplifies setting it up and updating the hosts on Raspberry Pi.

Enter the hostname of a server to resolve its IP address; for example, type the following:

polling.bbc.co.uk

The results show that this hostname is a CNAME (an alias for the real Canonical Name) of polling.bbc.co.uk.edgekey.net, which resolves to e3891.g.akamaiedge.net, and this currently has an IP address of 23.67.87.132. We can perform a reverse DNS lookup on the IP address by entering it:

23.67.87.132

    We then get the hostname of that machine, conveniently containing the IP address, which is a23-67-87-132.deploy.static.akamaitechnologies.com. The domain name is owned by Akamai, which is a Content Delivery Network (CDN) used to distribute load.


If you are using a DNS server on your local network, possibly with your router running dnsmasq, then it may cache results, and give you stale data. You can see more up-to-date information by changing the server to a core one that propagates changes quicker. For example, to use one of Google's public DNS servers, enter the following (but be aware that Google will log all the internet sites you visit if you use this normally):

server 8.8.8.8

    Then run the same query again. Notice how the hostname now resolves to a different IP address. This is a common behavior for CDNs, and the record will change over time, even on the same DNS server. It is often used as a technique to balance network load. Changing DNS servers can also sometimes be used to get around naïve content filters or location restrictions. To exit the nslookup interactive mode, type exit and press return. The following image shows the output for the previous commands:


IPv4 is the version of IP that you will probably be most familiar with. It uses 32-bit addresses that are usually represented as four dotted decimal octets, such as 192.168.0.1. However, we have run out of IPv4 addresses, and the world is (slowly) moving to the new version of IP, called IPv6. The address length has increased fourfold, to 128-bit, and is usually represented in hexadecimal, such as 2001:0db8:0000:0000:0000:ee00:0032:154d. Leading zeros can be omitted and a single run of zero groups can be collapsed to ::, like 2001:db8::ee00:32:154d, to make addresses easier to write. Localhost loopback (127.0.0.1) is now simply ::1 in IPv6.

On Windows, you can use ping -a x.x.x.x to do a reverse DNS lookup, and resolve IP addresses to hostnames. On Linux (and on other Unix-like systems such as OS X), this feature is not available, and the -a flag serves a different purpose. You will have to use nslookup or dig for reverse DNS on these OSs.

Build your own

You can build your own tools using C#, and the functions you need are provided by .NET. These are built into the full .NET Framework, but for .NET Core, you will need to add the NuGet package System.Net.Ping for ping, although the DNS name resolution is built-in. The underlying implementations are platform-specific, but the framework will take care of calling the native code on the OS you're using.

You won't normally need to programmatically resolve a hostname, as most networking commands will do this automatically. However, it can occasionally be useful, especially if you want to perform a reverse DNS lookup, and find the hostname for an IP address. The .NET Dns class differs from nslookup, as it includes entries from the local hosts file rather than just querying a DNS server. It is, therefore, more representative of the IP addresses that other processes are resolving.

To programmatically perform a DNS lookup, follow these steps:

1. Add the System.Net namespace.

using System.Net;


2. To resolve a hostname to an IP address, use the following static method. This will return an array of IP addresses, although there will usually only be one.

var results = await Dns.GetHostAddressesAsync(host);

3. To resolve an IP address to a hostname, use the following method instead:

var revDns = await Dns.GetHostEntryAsync(result);

4. If successful, then the hostname will be available as revDns.HostName.

To programmatically ping a host, follow these steps:

1. Add the System.Net.NetworkInformation namespace.

using System.Net.NetworkInformation;

2. You can then instantiate a new Ping object.

var ping = new Ping();

3. With this object, you can now ping an IP address or hostname (that will perform a DNS lookup internally) by using the following method:

var result = await ping.SendPingAsync(host);

4. You can get the status of the ping with result.Status, and if successful, then you can get the RTT in milliseconds with result.RoundtripTime.

The source code for our console application, illustrating how to use the .NET Core Dns and Ping classes, is available for download along with this book.
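That downloadable sample isn't reproduced here, but a minimal sketch along the same lines might look like the following. The hostname is just an example, and the reverse lookup will throw if the address has no PTR record, so a more robust version would handle that case.

using System;
using System.Net;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

public class NetworkDiagnostics
{
    public static void Main(string[] args)
    {
        var host = args.Length > 0 ? args[0] : "www.example.com";
        RunAsync(host).GetAwaiter().GetResult();
    }

    private static async Task RunAsync(string host)
    {
        // Forward DNS lookup: hostname to IP address(es)
        var addresses = await Dns.GetHostAddressesAsync(host);
        foreach (var address in addresses)
        {
            Console.WriteLine($"{host} resolves to {address}");

            // Reverse DNS lookup: IP address back to a hostname
            var entry = await Dns.GetHostEntryAsync(address);
            Console.WriteLine($"{address} reverses to {entry.HostName}");

            // Ping the address and report the round-trip time
            var ping = new Ping();
            var reply = await ping.SendPingAsync(address);
            Console.WriteLine($"Ping {reply.Status} in {reply.RoundtripTime} ms");
        }
    }
}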


    A reverse DNS lookup can usually reveal the hosting company being used by a website. Usually, there is only one IP address per hostname, as shown in the following image of our .NET Core console app output:

    In the preceding screenshot, we can see that emoncms.org is using redstation.com as a host. The low latency suggests that the server is located in the same country as our computer. DNS is often used for load balancing. In this case, you will see many IP addresses returned for a single domain name, as shown in the following screenshot:


    We can see in the preceding screenshot that the privacy-focused search engine DuckDuckGo (which doesn't track its users like Google does) is using AWS. DNS is being used to balance the load across various instances—in this case, they're all in the Dublin data center, because that's the closest one. Notice how the ping times are now slightly higher than the UK-based host in the previous example. It's likely that they're using the AWS DNS service Route 53 (so named because DNS uses port 53). This can balance load across regions, whereas an ELB (which DuckDuckGo doesn't appear to be using) can only balance inside a region (but both inside and across AZs). Azure offers a similar service called Traffic Manager for DNS load balancing.


Solutions

Now that you understand a bit more about the causes of latency-based problems, and how to analyze them, we can demonstrate some potential solutions. The measurements that you have taken using the previously illustrated tools will help you quantify the scale of the problems, and choose the appropriate fixes to be applied.

Batching API requests

Rendering a typical web page may require calls to many different APIs (or DB tables) to gather the data required for it. Due to the style of object-oriented programming encouraged by C# (and many other languages), these API calls are often performed in series. However, if the result of one call does not affect another, then this is suboptimal, and the calls could be performed in parallel. We'll cover DB tables later in this chapter, as there are better approaches for them.

Concurrent calls can be more pertinent if you implement a microservices architecture (as opposed to the traditional monolith, or big ball of mud), and have lots of different distributed endpoints. Message queues are sometimes a better choice than HTTP APIs, perhaps using a publish and subscribe pattern. However, maybe you're not responsible for the API, and are instead integrating with a third party. Indeed, if the API is yours, then you could alter it to provide all the data that you need in one shot anyway.


    Consider the example of calling two isolated APIs that have no dependencies on each other. The sequence diagram for this may look something like the following:


    This linear process of calling A, and then calling B when A is done, is simple, as it requires no complex orchestration to wrangle the results of the API calls, but it is slower than necessary. By calling both APIs synchronously and in sequence, we waste time waiting for them to return. A better way of doing this, if we don't require the result of the first API for the request to the second, may be to call them together asynchronously. The sequence diagram for this new flow may look something like the following:


There are two changes in this new sequence, one obvious and the other subtle. We call both APIs asynchronously and in parallel so that they are simultaneously in flight. This first change will have the greatest impact, but there is a smaller tweak that can also help, which is calling the slowest API first.

In the original sequence diagram, we call API A and then API B, but B is slower. Calling A and B at the same time will have a big impact, but calling B and then A is slightly better. B dominates the timeline, and we will be killing a (relatively) large amount of time waiting for it. We can use this downtime to call A, as there will be some small fixed overhead in any API method call. Both of these changes combined mean that we are now only waiting for B, and effectively get the call to A for free, in terms of time.

We can illustrate this principle with a simple console application. There are two methods that simulate the APIs, and both are similar. API A has the following code:

private static async Task CallApiA()
{
    Thread.Sleep(10);
    await Task.Delay(100);
}

Thread.Sleep simulates the fixed overhead, and Task.Delay simulates waiting for the API call to return. API B takes twice as long to return as API A, but the fixed overhead is the same, and it has the following code:

private static async Task CallApiB()
{
    Thread.Sleep(10);
    await Task.Delay(200);
}

Now, if we synchronously call the methods in sequence, we discover that all of the delays add up, as expected.

CallApiA().Wait();
CallApiB().Wait();

These operations take a total of around 332 ms on average, as there is about 10 ms of additional intrinsic overhead in the method invocation. If we call both methods simultaneously, the time reduces significantly.

Task.WaitAll(CallApiA(), CallApiB());


The operations now take an average total of 233 ms. This is good, but we can do better if we swap the order of the methods.

Task.WaitAll(CallApiB(), CallApiA());

This now takes, on average, a total of 218 ms, because we have swallowed the fixed overheads of API A into the time we are waiting for API B. The full console application, which benchmarks these three variants, is available for download with this book. It uses the Stopwatch class to time the operations, and averages the results over many runs. The results of these three different approaches are shown in the following image:
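For reference, a timing harness along those lines can be very small. The following is only a sketch, not the downloadable sample itself; it reuses the CallApiA and CallApiB methods shown above, and the iteration count is an arbitrary choice.

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class ApiBenchmark
{
    public static void Main()
    {
        Benchmark("Sequential", () => { CallApiA().Wait(); CallApiB().Wait(); });
        Benchmark("Parallel, A first", () => Task.WaitAll(CallApiA(), CallApiB()));
        Benchmark("Parallel, B first", () => Task.WaitAll(CallApiB(), CallApiA()));
    }

    private static void Benchmark(string name, Action action, int iterations = 50)
    {
        // Run once first so JIT compilation doesn't skew the measurements
        action();

        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            action();
        }
        stopwatch.Stop();
        Console.WriteLine($"{name}: {stopwatch.ElapsedMilliseconds / (double)iterations:F0} ms average");
    }

    // Simulated APIs, as defined earlier in this section
    private static async Task CallApiA() { Thread.Sleep(10); await Task.Delay(100); }
    private static async Task CallApiB() { Thread.Sleep(10); await Task.Delay(200); }
}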

    As is always the case, measure your results to make sure there really is an improvement. You may find that if you are making many calls to one API, or reusing the same API client, then your requests are processed sequentially, regardless. This parallelizing of tasks not only works for I/O, but can also be used for computation, as we will discover in the next chapter. For now, we will move on, and take a deeper look at database query optimization.

Efficient DB operations

Although this isn't a book aimed at Database Administrators (DBAs), it's advantageous for you to appreciate the underlying technology that you're using. As is the case with networking, if you understand the strengths and weaknesses, then you can achieve high performance more easily.

This is additionally relevant for the recent trend of developers doing more work outside of their specialization, particularly in small organizations or startups. You may have heard the buzzwords full-stack developer or DevOps. These refer to roles that blur the traditional boundaries, and merge development (both frontend and backend), quality assurance, networking, infrastructure, operations, and database administration into one job. Being a developer today is no longer a case of just programming or shipping code. It helps to know something about the other areas too.


    Previously, in Chapter 3, Fixing Common Performance Problems, we covered using the micro O/RM Dapper, and how to fix Select N+1 problems. Now, we'll build on that knowledge, and highlight a few more ways that you can consolidate queries and improve the performance of your DB operations. As detailed in Chapter 2, Measuring Performance Bottlenecks, the first step is to profile your application and discover where the issues lie. Only then should you move on to applying solutions to the problem areas.

Database tuning

We're going to focus more on improving poorly performing queries, so we won't go into DB tuning too much. This is a complex topic, and there is a lot you can learn about indexes and how data is stored. However, if you haven't run the Database Engine Tuning Advisor (DETA), then this is an easy step to identify if you have any missing indexes. Yet, you should be careful when applying the recommended changes, as there are always downsides and trade-offs to consider. For example, indexes make retrieving data quicker, but also make it slower to add and update records. They also take up additional space, so it is best not to overdo it. It will depend on your particular business use case whether you wish to pay the cost upfront when writing, or take the hit later on retrieval.

The first step is to capture a trace with SQL Server Profiler. See Chapter 2, Measuring Performance Bottlenecks for how to do this, and save the resulting file. This is one of the reasons that the tuning profile is a good choice. You should make sure that the trace is representative of genuine DB use to get good results. You can launch the DETA from the same menu as the profiler in SSMS.


    You can then load in your trace as a workload, and after processing, it will give you some recommendations. We won't cover the details of how to do this here for space reasons, and it is pretty self-explanatory. There are many good guides available online, if required.

Reporting

A common use case, which can often bring a database to its knees, is reporting. Reports usually run on large amounts of data, and can take considerable time if poorly written. If you already have a method for retrieving a single item from a DB, then it can be tempting to reuse this for reporting purposes and for calculating metrics inside of the application. However, this is best avoided; reports should generally have their own dedicated queries.

Even better is to have another database dedicated to reporting, which is populated from the main DB or from backups. This way, intensive reports won't affect the performance of the main application, at the expense of the reports not including real-time data. This is sometimes known as a data warehouse or data mart, and the data is occasionally denormalized for simplicity or performance reasons. Populating another DB can also be a convenient way to test your backups, as database backups should always be tested to ensure that they can be restored correctly and contain all of the required data.

In the ensuing examples, we will focus on Microsoft SQL Server (MS SQL) and the Transact-SQL (T-SQL) dialect of SQL. Other databases, such as PostgreSQL, are available as well. PostgreSQL has its own dialect called Procedural Language / PostgreSQL (PL/pgSQL), which is similar to Oracle PL/SQL. These dialects all support basic SQL commands, but use different syntax for some operations, and contain different or additional features. The latest version of MS SQL Server is SQL Server 2016, and it's the first edition to support running on Linux in addition to Windows. It is based on the SQL Azure service code, so there should now be fewer differences between a self-hosted, on-premise DB and Microsoft's cloud offering.

Aggregates

Aggregate functions are an incredibly useful feature of an RDBMS. You can compute values that summarize many records in the database, and only return the result, keeping the source data rows in the DB.


You will be familiar with the COUNT aggregate function from earlier in the book, if not before. This gives you the total number of records returned by your query, but without returning them. You may have read advice that COUNT(1) performs better than COUNT(*), but this is no longer the case, as SQL Server now optimizes the latter to perform the same as the former.

By default, SQL Server will return a message detailing the count of the records affected, along with the result of every query. You can turn this off by prefixing your query with the SET NOCOUNT ON command, which will save a few bytes in the Tabular Data Stream (TDS) connection and increase performance slightly. The saving is only significant if a cursor is being used, which is bad practice anyway, for locking reasons. It's good practice to re-enable the row count after the query, even though it will be reset outside of the local scope anyway.

In Chapter 3, Fixing Common Performance Problems, we solved our Select N+1 problem by joining tables in the DB and using COUNT, instead of performing these calculations in our application code. There are many other aggregate functions available that can be useful in improving performance, especially for reporting purposes.

Going back to our earlier example, suppose we now want to find out the total number of blog posts. We also want the total number of comments, the average comment count, the lowest number of comments for a single post, and the highest number of comments for a single post. The first part is easy (just apply COUNT to the posts instead of the comments), but the second part is harder. As we already have our list of posts with a comment count for each, it may be tempting to reuse that and simply work everything out in our C# code. This would be a bad idea; it would perform poorly, especially if the number of posts is high. A better solution is to use a query such as the following:

;WITH PostCommentCount AS (
    SELECT bp.BlogPostId,
           COUNT(bpc.BlogPostCommentId) 'CommentCount'
    FROM BlogPost bp
    LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
    GROUP BY bp.BlogPostId
)
SELECT COUNT(BlogPostId) 'TotalPosts',
       SUM(CommentCount) 'TotalComments',
       AVG(CommentCount) 'AverageComments',
       MIN(CommentCount) 'MinimumComments',
       MAX(CommentCount) 'MaximumComments'
FROM PostCommentCount

    This query uses a Common Table Expression (CTE), but you could also use a nested SELECT to embed the first query into the FROM clause of the second. It illustrates a selection of the aggregate functions available, and the results on the test database from before look like the following:

The semicolon at the start is simply good practice to avoid errors and remove ambiguity. It ensures that any previous command has been terminated, as WITH can be used in other contexts. It isn't required for the preceding example, but it might be if this were part of a larger query. CTEs can be very useful tools, especially if you require recursion. However, they are evaluated every time they are referenced, so you may find that temporary tables perform much better for you, especially if you query the same one repeatedly or in a nested SELECT.

Now suppose that we wish to perform the same query, but only include posts that have at least one comment. In this case, we could use a query like the following:

;WITH PostCommentCount AS (
    SELECT bp.BlogPostId,
           COUNT(bpc.BlogPostCommentId) 'CommentCount'
    FROM BlogPost bp
    LEFT JOIN BlogPostComment bpc ON bpc.BlogPostId = bp.BlogPostId
    GROUP BY bp.BlogPostId
    HAVING COUNT(bpc.BlogPostCommentId) > 0
)
SELECT COUNT(BlogPostId) 'TotalPosts',
       SUM(CommentCount) 'TotalComments',
       AVG(CommentCount) 'AverageComments',
       MIN(CommentCount) 'MinimumComments',
       MAX(CommentCount) 'MaximumComments'
FROM PostCommentCount

We have added a HAVING clause to ensure that we only count posts with more than zero comments. This is similar to a WHERE clause, but for use with a GROUP BY. The query results now look something like the following:

Sampling

Sometimes you don't need to use all of the data and can sample it instead. This technique is particularly applicable to any time-series information that you may wish to graph. In SQL Server, the traditional way to perform random sampling is to order by the NEWID() function, but this can be slow. For example, consider the following query:

SELECT TOP 1 PERCENT *
FROM [dbo].[TrainStatus]
ORDER BY NEWID()

This query returns exactly 1% of the rows, with a random distribution. When run against a table with 1,205,855 entries, it returned 12,059 results in about four seconds, which is slow. A better way may be to use TABLESAMPLE, which is available in any reasonably recent version of SQL Server (2005 onwards), as follows:

SELECT *
FROM [dbo].[TrainStatus]
TABLESAMPLE (1 PERCENT)

The preceding query is much quicker and, when run against the same data as the previous example, it completes almost instantly. The downside is that it's cruder than the earlier method, and it won't return exactly 1% of the results. It will return roughly 1%, but the value will change every time it is run. For example, running against the same test database, it returned 11,504, 13,441, and 11,427 rows across three consecutive executions.

Inserting data

Querying databases may be the most common use case, but you will usually need to put some data in there in the first place. One of the most commonly used features when inserting records into a relational DB is the identity column. This is an auto-incrementing ID that is generated by the database and doesn't need to be supplied when adding data. For example, in our BlogPost table from earlier, the BlogPostId column is created as INT IDENTITY(1,1). This means that it's an integer value, starting at one and increasing by one for every new row. You INSERT into this table, without specifying BlogPostId, like so:

INSERT INTO BlogPost (Title, Content)
VALUES ('My Awesome Post', 'Write something witty here...')

Identities can be very useful, but you will usually want to know the ID of your newly created record. It is typical to want to store a row and then immediately retrieve it, so that the ID can be used for future editing operations. You can do this in one shot with the OUTPUT clause, like so:

INSERT INTO BlogPost (Title, Content)
OUTPUT INSERTED.BlogPostId
VALUES ('My Awesome Post', 'Write something witty here...')

In addition to inserting the row, the ID will be returned. You may see SCOPE_IDENTITY() (or even @@IDENTITY) advocated as ways of retrieving the identity, but these are outdated. The recommended way of doing this, on modern versions of SQL Server, is to use OUTPUT. OUTPUT works on the INSERT, UPDATE, DELETE, and MERGE commands. It even works when operating on multiple rows at a time, as in this example of bulk inserting two blog posts:

INSERT INTO BlogPost (Title, Content)
OUTPUT INSERTED.BlogPostId
VALUES ('My Awesome Post', 'Write something witty here...'),
       ('My Second Awesome Post', 'Try harder this time...')

The preceding query will return a result set of two identities, for example, 3003 and 3004. In order to execute these inserts with Dapper, you can use the following method for a single record. First, let us create a blog post object, which we'll hard code here, but which would normally come from user input:

var post = new BlogPost
{
    Title = "My Awesome Post",
    Content = "Write something witty here..."
};


To insert this one post and set the ID, you can use the following code. This will execute the insert statement and return a single integer value:

post.BlogPostId = await connection.ExecuteScalarAsync<int>(@"
    INSERT INTO BlogPost (Title, Content)
    OUTPUT INSERTED.BlogPostId
    VALUES (@Title, @Content)", post);

You can then assign the returned ID to the post and return that object to the user for editing. There is no need to select the record again, assuming the insert succeeds and no exceptions are thrown.

You can insert multiple records at once with the same SQL by using the execute method. However, the SQL will be executed multiple times, which may be inefficient, and you only get back the number of inserted rows, not their IDs. The following code supplies an array of posts to the same SQL used in the previous example:

var numberOfRows = await connection.ExecuteAsync(@"
    INSERT INTO BlogPost (Title, Content)
    OUTPUT INSERTED.BlogPostId
    VALUES (@Title, @Content)", new[] { post, post, post });

If you want multiple IDs returned, then you will need to use the query method to return a collection of values, as we have done previously. However, it is difficult to make this work for a variable number of records without dynamically building SQL. The following code performs a bulk insert, using multiple separate parameters for the query:

var ids = await connection.QueryAsync<int>(@"
    INSERT INTO BlogPost (Title, Content)
    OUTPUT INSERTED.BlogPostId
    VALUES (@Title1, @Content1),
           (@Title2, @Content2)",
    new
    {
        Title1 = post.Title,
        Content1 = post.Content,
        Title2 = post.Title,
        Content2 = post.Content
    });

    This will perform all the work in a single command, and return an enumerable collection of the identities. However, this code is not very scalable (or even elegant).


There are many helper methods in the Dapper.Contrib package that can assist you with inserting records and performing other operations. However, they suffer from the same limitations as the examples here: you can only get back a single identity, or the number of rows inserted.
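As a rough illustration, a minimal Dapper.Contrib sketch might look like the following. The [Table] and [Key] attributes come from the Dapper.Contrib.Extensions namespace, and an open connection is assumed to exist, as in the earlier examples:

using Dapper.Contrib.Extensions;

[Table("BlogPost")]
public class BlogPost
{
    [Key] // maps to the INT IDENTITY column
    public int BlogPostId { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
}

// For a single entity, the identity of the new row is returned;
// for a collection, only the number of inserted rows comes back.
var id = await connection.InsertAsync(new BlogPost
{
    Title = "My Awesome Post",
    Content = "Write something witty here..."
});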

GUIDs

Integer identities can be useful to a database internally, but perhaps you shouldn't use an identity as the key you expose externally, especially if you have multiple web servers or show the IDs to users. An alternative is to use a Globally Unique Identifier (GUID), referred to as UNIQUEIDENTIFIER in SQL Server. We have already touched on these, as they are generated by the NEWID() function used in the suboptimal sampling example. GUIDs are used ubiquitously, and are 16 bytes long, four times bigger than an integer. The size of GUIDs means that you are unlikely to get a unique constraint conflict when inserting a random GUID into an already populated table. People sometimes worry about GUID collisions, but the numbers involved are so staggeringly huge that collisions are incredibly unlikely. The GUIDs generated by .NET (using a COM function on Windows) are, specifically, version 4 Universally Unique Identifiers (UUIDs). These have 122 bits of randomness, so you would need to generate nearly three quintillion (roughly 2.9 x 10^18) GUIDs before having even half a chance of a collision. At one new record per second, that would take over 90 billion years. Even at two million GUIDs per second (what an average computer today could produce), it would still take over 45 thousand years (more time than human civilization has been around). If you think that this will be an issue for your software, then you should probably be future-proofing it by using five-digit years, such as 02016, to avoid the Y10K (or YAK in hexadecimal) problem.

This uniqueness property allows you to generate a GUID in your application code and store it in a distributed database, without worrying about merging data later. It also allows you to expose IDs to users without caring whether someone will enumerate them or try to guess a value. Integer IDs often expose how many records there are in a table, which could be sensitive information, depending on your use case.


One thing you have to watch out for with random GUIDs is using them as keys, as opposed to IDs. The difference between these two terms is subtle, and they are often one and the same. The key (primary or clustered) is what the DB uses to address a record, whereas the ID is what you would use to retrieve one. These can be the same value, but they don't have to be. Using a random GUID as a key can cause performance issues with indexing; the randomness may cause fragmentation, which will slow down queries. You can use sequential GUIDs (for example, NEWSEQUENTIALID() can be the DEFAULT value for a column), but then you lose most of the beneficial qualities of GUIDs, which mainly come from the randomness. You now effectively just have a really big integer and, if you're only concerned with the maximum number of rows in a table and require more than two billion, then a big int (twice an int, but half a GUID) should suffice.

A good compromise is to have an int identity as the primary key, but to use a GUID as the ID (and enforce this with a unique constraint on the column). It's best if you generate the GUID in application code with Guid.NewGuid(), as using a default column value in the DB means you have to retrieve the ID after an insert, as shown previously. A table using this strategy may partially look something like the following image, which is a screen capture of part of a table from the MembershipReboot user management and authentication library:
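Whatever the exact schema, the application-side half of this compromise is straightforward. The following is a minimal sketch; the BlogPostGuid column is a made-up name for illustration, and the database is assumed to keep its int identity as the primary key:

var post = new BlogPost
{
    // Generate the externally visible ID in application code.
    BlogPostGuid = Guid.NewGuid(),
    Title = "My Awesome Post",
    Content = "Write something witty here..."
};

// No need to read anything back; we already know the GUID we will expose.
await connection.ExecuteAsync(@"
    INSERT INTO BlogPost (BlogPostGuid, Title, Content)
    VALUES (@BlogPostGuid, @Title, @Content)", post);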

Advanced DB topics

There are many other database performance-enhancing techniques that we could cover here, but we don't have space for them all. We'll briefly mention some additional topics, in case you want to look into them further.

A common requirement when saving data is to insert a record if it doesn't yet exist, or to update the existing record if there is already a row containing that ID. This is sometimes referred to as an upsert, a portmanteau combining update and insert.


You can use the MERGE command to do this in one operation, rather than choosing between UPDATE and INSERT (wrapped in a transaction inside a stored procedure). However, MERGE does not perform as well as a plain UPDATE or INSERT, so always test the effects for your use case, and use it sparingly; there is a brief sketch of this at the end of the section.

Stored procedures, transactions, and locking are all big subjects in their own right. They are important to understand, and not only from a performance perspective. We can't fit all of them in here, but we will touch upon maintaining stored procedures later in the book.

Some other techniques you could look into are cross-database joins, which can save you from querying multiple DBs separately, and denormalizing your data, which involves flattening your relational records ahead of time to provide a single row, rather than needing to join across many tables for a query. We briefly mentioned this in relation to data warehouses earlier. You can find lots of useful SQL documentation and information on MSDN (msdn.microsoft.com) and TechNet (technet.microsoft.com). Additionally, there are many good books on advanced SQL techniques.

Finally, a note on housekeeping. It's important to have a plan for managing the size of a database. You can't simply keep inserting data into a table and hope that it will continue to perform well forever. At some point, it will get so large that even deleting records from it will cause your DB to grind to a crawl. You should know ahead of time how you will remove or archive rows, and it is better to do this regularly, in small batches.
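For reference, here is a minimal upsert sketch using MERGE from Dapper. It reuses the blog example tables, but the SQL itself is an illustrative assumption rather than a recommended pattern; in practice, you would wrap it in a transaction and compare it against a plain UPDATE/INSERT pair:

// Updates the row matching post.BlogPostId if it exists, otherwise inserts a new one.
await connection.ExecuteAsync(@"
    MERGE BlogPost AS target
    USING (SELECT @BlogPostId AS BlogPostId) AS source
        ON target.BlogPostId = source.BlogPostId
    WHEN MATCHED THEN
        UPDATE SET Title = @Title, Content = @Content
    WHEN NOT MATCHED THEN
        INSERT (Title, Content) VALUES (@Title, @Content);", post);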

Simulation and testing

To wrap up this chapter, let's reiterate the importance of being able to test your application on realistic infrastructure. Your test environments should be as live-like as possible. If you don't test on equivalent DBs and networks, then you may get a nasty surprise come deployment time. When using a cloud hosting provider (and if you automate your server builds), this is easy: you can simply spin up a staging system that matches production. You don't have to provision it at exactly the same scale, as long as all the parts are there and in the same places. To reduce costs further, you only need to keep it around for as long as your tests run.


Alternatively, you could create a new live environment, deploy to it and test it, then switch over and destroy (or reuse) the old live environment. This swapping technique is known as blue-green deployment. Another option is to deploy new code behind a feature switch, which allows you to toggle the feature at runtime, and only for some users. Not only does this allow you to functionally verify features with test users, but you can also gradually roll out a new feature and monitor the performance impact as you do so. We will cover both of these techniques in Chapter 9, Monitoring Performance Regressions.

Summary

We've covered a lot in this chapter, and it should hopefully be clear that being a high-performance developer requires more than simply being a capable programmer. There are many externalities to consider around your code. You now understand the basics of I/O, and the physical causes of the intrinsic limitations present. You can analyze and diagnose how a network logically fits together. We have covered how to make better use of the network to reduce overall latency, which is important as more I/O operations now happen over a network. You have also seen how to make better use of a database, and how to do more with fewer operations.

In the next chapter, we will dig into code execution in detail, and show you how to speed up your C#. We'll see how .NET Core and ASP.NET Core perform so much better than the previous versions, and how you can take advantage of these improvements.


    6

Understanding Code Execution and Asynchronous Operations

This chapter covers solving performance problems in the execution of your code, which is not normally the first place that speed issues occur, but can be a good additional optimization. We'll discuss the areas where performance is needed, and where it is okay (or even required) to be slow. The merits of various data structures will be compared, from the standard built-in generic collections to the more exotic. We will demonstrate how to compute operations in parallel, and how to take advantage of extra instruction sets that your CPU may provide. We'll dive into the internals of ASP.NET Core and .NET Core to highlight the improvements that you should be aware of.

The topics covered in this chapter include the following:

.NET Core and the native tool chain
Common Language Runtime (CLR) services such as GC, JIT, and NGen
ASP.NET Core and the Kestrel web server
Generic collections and Bloom filters
Serialization and data representation formats
Relative performance of hashing functions
Parallelization (SIMD, TPL, and PLINQ)
Poorly performing practices to avoid

You will learn how to compute results in parallel and combine the outputs at the end. This includes how to avoid incorrect ways of doing this, which can make things worse. You'll also learn how to reduce server utilization, and how to choose the most appropriate data structures in order to process information efficiently for your specific situation.


Getting started with the core projects

There are many benefits of using .NET Core and ASP.NET Core over the old, full versions of the frameworks. The main enhancements are open source development and cross-platform support, but there are also significant performance improvements. Open development is important: not only is the source code available, the development work happens in the open on GitHub. The community is encouraged to make contributions, and these may be merged upstream if they pass a code review; the flow isn't just one way. This has led to increased performance and additional platform support coming from outside of Microsoft. If you find a bug in a framework, you can now fix it, rather than working around the problem and hoping for a patch.

The multiple projects that make up the frameworks are split across two organizations on GitHub. One of the driving principles has been to split the frameworks into modules, so you can take just what you need, rather than the whole monolithic installation. The lower-level framework, .NET Core, can be found, along with other projects, under github.com/dotnet. The higher-level web application framework, ASP.NET Core, can be found under github.com/aspnet. The reason for this split is that .NET Core is managed by the .NET Foundation, which is an independent organization, although most of its members are Microsoft staff. ASP.NET Core is managed directly by Microsoft. Let's have a quick look at some of the various .NET Core repositories and how they fit together.

.NET Core

There are a couple of projects that form the main parts of .NET Core: CoreCLR and CoreFX. CoreCLR contains the .NET Core CLR and the base core library, mscorlib. The CLR is the virtual machine that runs the .NET code; it includes a just-in-time (JIT) compiler and a Garbage Collector (GC), alongside the base types and classes in mscorlib. CoreFX includes the foundational libraries and sits on top of CoreCLR. This includes most built-in components that aren't simple types.

You may be familiar with the <gcServer> configuration element used to set a GC mode more suitable for server use, with shorter pauses. You can read more about the original version at msdn.microsoft.com/en-us/library/ms229357. In .NET Core, you can set the COMplus_gcServer environment variable to 1, or set System.GC.Server to true as a runtime option in a JSON runtime configuration file.
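For example, a runtime configuration file along the following lines should switch on server GC. Treat this as a hedged sketch; the exact file name and shape depend on the version of the tooling you are using:

{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  }
}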


It's worth highlighting another project called CoreRT, which is a runtime that allows ahead-of-time (AOT) compilation, instead of the JIT compilation of CoreCLR. This means that you can compile your C# code to native machine code and run it without any dependencies. You end up with a single (statically linked) binary and don't need .NET installed, which is very similar to how Go operates. With this tool chain, you no longer always need to compile to Common Intermediate Language (CIL) bytecode with the Roslyn compiler and then have the CLR JIT compile it to native instructions at runtime with RyuJIT. This native tool chain allows for performance improvements, as the compilation does not have to happen quickly, in real time, and can be optimized further. For example, a build can be tuned for execution speed at the expense of compilation speed. It's conceptually similar to the Native Image Generator (NGen) tool, which has been in the full .NET Framework since its inception.

ASP.NET Core

ASP.NET Core runs on top of .NET Core, although it can also run on the full .NET Framework. We will only cover running on .NET Core, as this performs better and ensures wider platform support. There are many projects that make up ASP.NET Core, and it's worth briefly mentioning some of them. It's useful not to include any framework references to .NET 4.5/4.6 in your project, so that you can be sure you're only using .NET Core and don't accidentally reference any dependencies that are not yet supported.

ASP.NET Core includes Model View Controller (MVC), Web API, and Web Pages (a way to make simple pages with Razor, similar to PHP, and the spiritual successor to classic ASP). These features are all merged together, so you don't need to think of MVC and Web API as separate frameworks anymore. There are many other projects and repositories, including EF Core, but the one we will highlight here is the Kestrel HTTP server.

Kestrel

Kestrel is a new web server for ASP.NET Core, and it performs brilliantly. It's based on libuv, the asynchronous I/O abstraction and support library that also sits underneath Node.js. Kestrel is blazingly fast, and the benchmarks are very impressive. However, it is pretty basic and, for production hosting, you should put it behind a reverse proxy, so that you can use caching and other features to serve a greater number of users. You can use many HTTP or proxy servers for this purpose, such as IIS, NGINX, Apache, or HAProxy.


    You should be careful with your configuration if using NGINX as a reverse proxy, as by default, it will retry POST requests if the response from your web server times out. This could result in duplicate operations being performed, especially if used as a load balancer across multiple backend HTTP servers.

Data structures

Data structures are the objects that you use to store the information you're working on. Choosing the best implementation for how your data is used can have dramatic effects on the speed of execution. Here, we'll discuss some common, and some more interesting, data structures that you might like to make use of.

As this is a book about web application programming, and not one on complex algorithm implementation or micro-optimization, we won't be going into a huge amount of detail on data structures and algorithms. As briefly mentioned in the introduction, other factors can dwarf execution speed in a web context, and we assume you're not writing a game engine. However, good algorithms can help speed up applications so, if you are working in an area where execution performance is important, you should read more about them.

We're more interested in the performance of the system as a whole than in the raw speed the code runs at. It is often more important to consider what your code expresses (and how this affects other systems) than how it executes. Nevertheless, it is still important to choose the right tool for the job, and you don't want to have unnecessarily slow code. Just be careful of over-optimizing when it already executes fast enough and, above all, try to keep it readable.

Lists

The .NET List<T> is a staple of programming in C#. It's type safe, so you don't have to bother with casting or boxing. You specify your generic type, and you can be sure that only objects of that class (or primitive) can be in your list. Lists implement the standard list and enumerable interfaces (IList<T> and IEnumerable<T>), although you get more methods on the concrete list implementation, for example, adding a range. You can also use Language-Integrated Query (LINQ) expressions to easily query them, which is trivial when using fluent lambda functions. However, although LINQ is often an elegant solution to a problem, it can lead to performance issues, as it is not always optimized.


A list is really just an enhanced, one-dimensional array; in fact, it uses an array internally to store the data. In some cases, it may be better to use an array directly for performance. Arrays can occasionally be a better choice if you know exactly how much data there will be and you need to iterate through it inside a tight loop. They can also be useful if you need more dimensions, or plan to consume all of the data in the array. You should be careful with arrays, though; use them sparingly, and only if benchmarking shows a performance problem when iterating over a list. Although they can be faster, they also make parallel programming and distributed architecture more difficult, which is where significant performance improvements can be made. Modern high performance means scaling out to many cores and multiple machines. This is easier with immutable state that doesn't change, which is easier to enforce with higher-level abstractions than with arrays. For example, you can help enforce a read-only list with an interface.

If you are inserting or removing a lot of items in the middle of a large list, then it may be better to use the LinkedList<T> class. This has different performance characteristics to a list, as it isn't really a list; it's more of a chain. It may perform better than a list for some specialized cases but, in most common cases, it will be slower. For example, access by index is quick with a list (as it is array-backed), but slow with a linked list (as you will have to walk the chain).

It is normally best to initially focus on the what and why of your code, rather than the how. LINQ is very good for this, as you simply declare your intentions and don't need to worry about the implementation details and loops. It is a bad idea to optimize prematurely, so do this only if testing shows a performance problem. In most common cases, a list is the correct choice, unless you need to select only a few values from a large set, in which case dictionaries can perform better.

Dictionaries

Dictionaries are similar to lists, but they excel at quickly retrieving a specific value with a key. Internally, they are implemented with a hash table. There's the legacy Hashtable class (in the System.Collections.NonGeneric package), but this is not type safe, whereas Dictionary<TKey, TValue> is a generic type, so you probably shouldn't use Hashtable unless you're porting old code to .NET Core. The same applies to ArrayList, which is the legacy, non-generic version of List<T>.


    A dictionary can look up a value with a key very quickly, whereas a list would need to go through all the values until the key was found. You can, however, still enumerate a dictionary as it is ordered, although this isn't really how it is intended to be used. If you don't need ordering, then you can use a HashSet. There are sorted versions of all of these data structures, and you can again use read-only interfaces to make them difficult to mutate.
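As a small illustration of the read-only interface point, a class can keep its mutable collection private while only exposing a read-only view; the type and member names here are just examples:

using System.Collections.Generic;

public class BlogPostTitleCache
{
    // The class mutates this private dictionary internally.
    private readonly Dictionary<int, string> titlesById = new Dictionary<int, string>();

    // Callers only see a read-only view, which discourages accidental mutation.
    public IReadOnlyDictionary<int, string> TitlesById
    {
        get { return titlesById; }
    }

    public void Add(int id, string title)
    {
        titlesById[id] = title;
    }
}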

Collection benchmarks

Accurate benchmarking is hard, and there are lots of things that can skew your results. Compilers are very clever and will optimize, which can make trivial benchmarks less valuable; the compiler may output very similar code for different implementations. What you put in your data structures will also have a large effect on their performance. It's usually best to test or profile your actual application, and not optimize prematurely. Readability of code is very valuable and shouldn't be sacrificed for runtime efficiency, unless there is a significant performance problem (or it's already unreadable).

There are benchmarking frameworks that you can use to help with your testing, such as BenchmarkDotNet, which is available at github.com/PerfDotNet/BenchmarkDotNet. However, these can be overkill, and are sometimes tricky to set up, especially on .NET Core. Other options include Simple Speed Tester (which you can read more about at theburningmonk.github.io/SimpleSpeedTester) and MiniBench (available from github.com/minibench).

We'll perform some simple benchmarks to show how you might go about this. However, don't assume that the conclusions drawn here will always hold true; test for your situation. First, we will define a simple function to run our tests:

private static long RunTest(Func<double> func, int runs = 1000)
{
    var s = Stopwatch.StartNew();
    for (int j = 0; j < runs; j++)
    {
        func();
    }
    s.Stop();
    return s.ElapsedMilliseconds;
}


We use Stopwatch here, as using DateTime can cause problems (even when using UTC), because the time could change during the test and the resolution isn't high enough. We also need to perform many runs to get an accurate result. We then define our data structures to test, and prepopulate them with random data:

var rng = new Random();
var elements = 100000;
var randomDic = new Dictionary<int, double>();
for (int i = 0; i < elements; i++)
{
    randomDic.Add(i, rng.NextDouble());
}
var randomList = randomDic.ToList();
var randomArray = randomList.ToArray();

We now have an array, a list, and a dictionary containing the same set of 100,000 key/value pairs. Next, we can perform some tests on them to see which structure performs best in various situations:

var afems = RunTest(() =>
{
    var sum = 0d;
    foreach (var element in randomArray)
    {
        sum += element.Value;
    }
    return sum;
}, runs);

The preceding code times how long it takes to iterate over an array in a foreach loop, summing up the double-precision floating-point values. We can then compare this to other structures and the different ways of accessing them. For example, iterating a dictionary in a for loop is done as follows:

var dfms = RunTest(() =>
{
    var sum = 0d;
    for (int i = 0; i < randomDic.Count; i++)
    {
        sum += randomDic[i];
    }
    return sum;
}, runs);


This performs very badly, as it is not how dictionaries are supposed to be used. Dictionaries excel at extracting a record by its key very quickly. So quickly, in fact, that you will need to run the test many more times to get a realistic value:

var lastKey = randomList.Last().Key;
var dsms = RunTest(() =>
{
    double result;
    if (randomDic.TryGetValue(lastKey, out result))
    {
        return result;
    }
    return 0d;
}, runs * 10000);
Console.WriteLine($"Dict select {(double)dsms / 10000} ms");

Getting a value from a dictionary with TryGetValue is extremely quick. You need to pass the variable to be set into the method as an out parameter; you can see whether this was successful, and whether the item was in the dictionary, by testing the Boolean value returned by the method. Conversely, adding items to a dictionary one by one can be slow, so it all depends on what you're optimizing for. The following image shows the output of a very simple console application that tests various combinations of data structures and ways of using them:


These results, shown in the preceding image, are informative, but you should be skeptical, as many things can skew the output, for example, the ordering of the tests. If the gap is small, then there is probably not much to choose between two variants, but you can clearly see the big differences. To get more realistic results, be sure to compile in release mode and run without debugging. The absolute results will depend on your machine and architecture, but the relative measures should be useful for comparisons. You may get higher performance if compiling to a single executable binary with the .NET native tool chain, or with a different release of the .NET Core framework.

The main lesson here is to measure for your specific application, and choose the most appropriate data structure for the work that you are performing. The collections in the standard library should serve you well for most purposes, and there are others not covered here that can sometimes be useful, such as a Queue or Stack. You can find more information about the built-in collections and data structures on MSDN (msdn.microsoft.com/en-us/library/system.collections.generic). You can also read about them on the .NET Core documentation site on the GitHub pages (dotnet.github.io/docs/essentials/collections/Collections-and-DataStructures.html).

However, there are some rarer data structures, not in the standard collection, that you may occasionally wish to use. We'll show an example of one of these now.

Bloom filters

Bloom filters are an interesting data structure that can increase performance for certain use cases. They use multiple overlaid hashing functions, and can quickly tell you whether an item definitely does not exist in a set. However, they can't say with certainty that an item exists, only that it is likely to. They are useful as a pre-filter, so that you can avoid performing a lookup, because you know for sure that the item won't be there.


    The following image shows how a Bloom filter works. A, B, and C have been hashed and inserted into the filter. D is hashed to check if it is in the filter but, as it maps to a zero bit, we know that it is not in there.

Bloom filters are much smaller than holding all the data, or even a list of hashes for each item in the set. They can also be much quicker, as the lookup time is constant for any size of set. This constant time can be lower than the time to look up an item in a large list or dictionary, especially if it is on the file system or in a remote database.

An example application for a Bloom filter could be a local DNS server, which has a list of domains to override, but forwards most requests to an upstream server. If the list of custom domains is large, then it may be advantageous to create a Bloom filter from the entries and hold this in memory. When a request comes in, it is checked against the filter and, if it doesn't exist, the request is forwarded to the upstream server. If the entry does exist in the filter, then the local hosts file is consulted; if the entry is there, its value is used. There is a small chance that the entry will not be in the list, even if the filter thinks it is. In this case, when it isn't found, the request is forwarded, but this approach still avoids the need to consult the list for every request.

Another example of using a Bloom filter is in a caching node, perhaps as part of a proxy or CDN. You wouldn't want to cache resources that are only requested once, but how can you tell when the second request occurs if you haven't cached it? If you add the request to a Bloom filter, you can easily tell when the second request occurs, and then cache the resource.


They are also used in some databases to avoid expensive disk operations, and in Cache Digests, which allow an agent to declare the contents of its cache. HTTP/2 may support Cache Digests in the future, but this will probably use Golomb-coded sets (GCS), which are similar to Bloom filters but smaller, at the expense of slower queries. There is an open source implementation of a Bloom filter in .NET available at bloomfilter.codeplex.com, among others. You should test the performance for yourself to make sure that they offer an improvement.
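To make the mechanics concrete, the following is a minimal, illustrative Bloom filter sketch using double hashing over a bit array. It is an assumption-laden demonstration rather than production code: it is not tuned or thread safe, and the FNV-style hashes and sizing are arbitrary choices:

using System;
using System.Collections;
using System.Collections.Generic;

public class SimpleBloomFilter
{
    private readonly BitArray bits;
    private readonly int hashCount;

    public SimpleBloomFilter(int sizeInBits, int hashCount)
    {
        bits = new BitArray(sizeInBits);
        this.hashCount = hashCount;
    }

    public void Add(string item)
    {
        foreach (var index in Indexes(item))
        {
            bits[index] = true;
        }
    }

    // False means definitely not present; true means probably present.
    public bool MightContain(string item)
    {
        foreach (var index in Indexes(item))
        {
            if (!bits[index])
            {
                return false;
            }
        }
        return true;
    }

    // Derive hashCount bit positions from two simple hashes (double hashing).
    private IEnumerable<int> Indexes(string item)
    {
        var h1 = Fnv1a(item, 2166136261);
        var h2 = Fnv1a(item, 5381);
        if (h2 == 0) { h2 = 1; } // avoid mapping every round to the same bit
        for (var i = 0; i < hashCount; i++)
        {
            yield return (int)(((long)h1 + (long)i * h2) % bits.Length);
        }
    }

    private static uint Fnv1a(string s, uint seed)
    {
        unchecked
        {
            var hash = seed;
            foreach (var c in s)
            {
                hash ^= c;
                hash *= 16777619;
            }
            return hash;
        }
    }
}

Usage would be along the lines of filter.Add("example.org") followed by filter.MightContain("example.org"); a false result is definitive, whereas a true result only means that the item is probably present.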

Hashing and checksums

Hashing is an important concept that is often used to ensure data integrity or to look up values quickly, so it is optimized to be fast. This is why general hashing functions should not be used on their own to securely store passwords: if the algorithm is quick, then a password can be guessed in a reasonably short amount of time. Hashing algorithms vary in their complexity, speed of execution, output length, and collision rate.

A very basic error detection algorithm is called a parity check. This adds a single bit to a block of data, and it is rarely used directly in programming. It is, however, extensively used at the hardware level, as it is very quick; yet it may miss many errors, such as those where an even number of bits have been corrupted. A Cyclic Redundancy Check (CRC) is a slightly more complex error-detecting algorithm. The CRC-32 (also written CRC32) version is commonly used in software, particularly in compression formats, as a checksum.

You may be familiar with the built-in support for hash codes in .NET (with the GetHashCode method on all objects), but you should be very careful with this. The only function of this method is to assist with picking buckets in data structures that use hash tables internally, such as a dictionary, and also in some LINQ operations. It is not suitable as a checksum or key, because it is not cryptographically secure, and it varies across frameworks, processes, and time.

You may have used the Message Digest 5 (MD5) algorithm in the past, but today its use is strongly discouraged. The security of MD5 is heavily compromised, and collisions can be produced easily. Since it is very quick, it may have some non-secure uses, such as non-malicious error checking, but there are better algorithms that are fast enough.


If you need a strong but quick hashing function, then the Secure Hash Algorithm (SHA) family is a good choice. However, SHA-1 is not considered future proof, so for new code, SHA-256 is generally a better choice. When signing messages, you should use a dedicated Message Authentication Code (MAC), such as a Hash-based MAC (HMAC), which avoids vulnerabilities in a single pass of the hashing function. A good option is the HMACSHA256 class built into .NET. Various APIs, such as some of the AWS REST APIs, use HMAC-SHA256 to authenticate requests. This ensures that, even if the request is performed over an unencrypted HTTP channel, the API key can't be intercepted and recovered.

As briefly mentioned in Chapter 1, Why Performance Is a Feature, password hashing is a special case, and general purpose hashing algorithms are not suitable for it; they are too fast. A good choice is Password-Based Key Derivation Function 2 (PBKDF2), which we used as an example in Chapter 2, Measuring Performance Bottlenecks. PBKDF2 is a particularly popular choice for .NET, as it is built into the framework, so the implementation is more likely to be correct. It has been built against an RFC and reviewed by Microsoft, which you can't say for any random piece of code found online. For example, you could download an implementation of bcrypt for .NET, but you would have to trust that it was coded correctly, or verify it yourself.
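As a small illustration of the HMAC approach mentioned above, the following sketch signs a message with the built-in HMACSHA256 class; the key and message are made-up placeholders rather than anything from a real API:

var apiKey = Encoding.UTF8.GetBytes("not-a-real-secret"); // placeholder key
var message = Encoding.UTF8.GetBytes("GET /orders/42");   // placeholder request data

using (var hmac = new HMACSHA256(apiKey))
{
    // Both sides compute this and compare; only holders of the key can produce it.
    var signature = Convert.ToBase64String(hmac.ComputeHash(message));
}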

Hashing benchmarks

Let's do some simple benchmarking of various hash functions to see how they compare performance-wise. In the following code snippet, we define a method for running our tests, similar to the one for the previous collection benchmarks, but taking an Action parameter rather than a Func, as we don't want to return a value:

private static long RunTest(Action func, int runs = 1000)
{
    var s = Stopwatch.StartNew();
    for (int j = 0; j < runs; j++)
    {
        func();
    }
    s.Stop();
    return s.ElapsedMilliseconds;
}

We include the following using statement:

using System.Security.Cryptography;


Next, we define a short, private, constant string (hashingData) in the class to hash, and get the bytes for it in the 8-bit Unicode (UTF-8) format:

var smallBytes = Encoding.UTF8.GetBytes(hashingData);

We also want a larger block of bytes to hash, to see how it compares performance-wise. For this, we use a cryptographically secure random number generator:

var largeBytes = new byte[smallBytes.Length * 100];
var rng = RandomNumberGenerator.Create();
rng.GetBytes(largeBytes);

We need a key for some of our functions, so we use the same technique to generate this:

var key = new byte[256];
var rng = RandomNumberGenerator.Create();
rng.GetBytes(key);

Next, we create a sorted list of the algorithms to test, and execute the tests for each one:

var algos = new SortedList<string, HashAlgorithm>
{
    {"1. MD5", MD5.Create()},
    {"2. SHA-1", SHA1.Create()},
    {"3. SHA-256", SHA256.Create()},
    {"4. HMAC SHA-1", new HMACSHA1(key)},
    {"5. HMAC SHA-256", new HMACSHA256(key)},
};
foreach (var algo in algos)
{
    HashAlgorithmTest(algo);
}

Our test method runs the following tests on each algorithm. They all inherit from HashAlgorithm, so we can run the ComputeHash method on each of them, for both the small and large byte arrays:

var smallTimeMs = RunTest(() =>
{
    algo.Value.ComputeHash(smallBytes);
}, runs);
var largeTimeMs = RunTest(() =>
{
    algo.Value.ComputeHash(largeBytes);
}, runs);


We then calculate the average (mean) time for both sizes. We cast the long integers to double-precision floating-point numbers so that we can represent small values between one and zero:

var avgSmallTimeMs = (double)smallTimeMs / runs;
var avgLargeTimeMs = (double)largeTimeMs / runs;

The preceding method then outputs these mean times to the console. We need to test PBKDF2 separately, as it doesn't inherit from HashAlgorithm:

var slowRuns = 10;
var pbkdf2 = new Rfc2898DeriveBytes(hashingData, key, 10000);
var pbkdf2Ms = RunTest(() =>
{
    pbkdf2.GetBytes(256);
}, slowRuns);

PBKDF2 is so slow that it would take a considerable amount of time to perform 100,000 runs (which is the point of using it). Internally, this RFC 2898 implementation of the key-stretching algorithm runs HMAC SHA-1 10,000 times. The default is 1,000, but due to the computing power available today, it is recommended to set this to at least an order of magnitude higher. For example, Wi-Fi Protected Access II (WPA2) uses 4,096 rounds to produce a 256-bit key, with the Service Set Identifier (SSID) as the salt. The output will look something like the following:

From the preceding output, you can see that the time taken for one hash varies from about 720 nanoseconds for a small MD5, to 32 microseconds for a large HMAC SHA-256, and 125 milliseconds for a small PBKDF2 with typical parameters. Benchmarking results can vary dramatically, so you shouldn't read too much into absolute values. For example, the output from the BenchmarkDotNet tool comparing MD5 and SHA-256 on the same machine looks like this:

You can see in the last image that the results are different from our homemade benchmark. However, this uses the full .NET Framework, calculates the median rather than the mean for the average time, and runs in debug mode (which it helpfully warns us about), among other things. A faster machine will have a higher throughput (as can be seen in the BenchmarkDotNet README.md on GitHub). Dedicated hardware, such as Graphics Processing Units (GPUs), Field-Programmable Gate Arrays (FPGAs), and Application-Specific Integrated Circuits (ASICs), can be much faster. These tend to be used in the mining of Bitcoin (and other cryptocurrencies), as these are based on hashing as a proof-of-work. Bitcoin uses SHA-256, but other currencies use different hashing algorithms.


    The same algorithms form the basis of TLS, so faster hardware can handle a greater number of secure connections. As another example, Google built a custom ASIC called a Tensor Processing Unit (TPU) to accelerate their machine learning cloud services. Other benchmarking samples are available in BenchmarkDotNet, and when you first run it, you will be presented with the following menu:

The previous benchmark was the second option in that menu (number #1, Algo_Md5VsSha256). Benchmarking is hard, so it's a good idea to use a library such as BenchmarkDotNet if you can. The only conclusion we can draw from our benchmarks is that SHA-256 is slower than MD5. Nevertheless, SHA-256 should be fast enough for most applications, and it's more secure for integrity checking, although it is still not suitable for password storage. SHA-256 can be used to provide signatures for verifying downloaded files (which must be retrieved over HTTPS to be safe) and for signing certificates. When used as part of an HMAC, it can also be used to securely authenticate messages, API requests, for example. You will only connect successfully if you know the correct API key to hash with.
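If you do decide to use BenchmarkDotNet, a minimal benchmark along the lines of its Md5VsSha256 sample might look like the following sketch; the data size and random seed are arbitrary choices of ours:

using System;
using System.Security.Cryptography;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class Md5VsSha256
{
    private readonly byte[] data = new byte[1024];
    private readonly MD5 md5 = MD5.Create();
    private readonly SHA256 sha256 = SHA256.Create();

    public Md5VsSha256()
    {
        new Random(42).NextBytes(data); // deterministic test data
    }

    [Benchmark]
    public byte[] Md5() => md5.ComputeHash(data);

    [Benchmark]
    public byte[] Sha256() => sha256.ComputeHash(data);
}

public class Program
{
    public static void Main(string[] args)
    {
        BenchmarkRunner.Run<Md5VsSha256>();
    }
}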


Serialization

Serialization is the process of turning objects into data suitable for transmission over a network or for storage. We also include deserialization, which is the reverse process, under this umbrella. Serialization can have significant performance implications, not only on the network transmission speed, but also on computation, as it can make up much of the expensive processing on a web server. You can read more about serialization on MSDN (msdn.microsoft.com/en-us/library/mt656716).

Serialization formats can be text-based or binary. Some popular text-based formats are Extensible Markup Language (XML) and JavaScript Object Notation (JSON). A popular binary format is Protocol Buffers, which was developed at Google. There's another binary serialization format (BinaryFormatter) built into the full .NET Framework, but this is not in .NET Core.

XML has fallen out of fashion with developers, and JSON is now generally preferred. This is partly due to the smaller size of equivalent JSON payloads, but it may also be due to the use of XML in the originally named Simple Object Access Protocol (SOAP). This is used in Windows Communication Foundation (WCF), but SOAP is no longer an acronym, as developers discovered it is far from simple. JSON is popular due to being compact, human-readable, and easily consumed by JavaScript, particularly in web browsers.

There are many different JSON serializers for .NET, with different performance characteristics. However, because JSON is not as rigidly defined as XML, there can be differences between implementations that make them incompatible, especially when dealing with complex types such as dates. For example, the very popular Json.NET represents dates in the International Organization for Standardization (ISO) format, whereas the JSON serializer used in old versions of ASP.NET MVC represented dates as the number of milliseconds since the Unix epoch, wrapped in a JavaScript date constructor. The .NET developer community has converged on Json.NET, and compatibility is always preferable to performance. ASP.NET Web API has used Json.NET as the default for a while now, and ASP.NET Core also uses Json.NET. There is a serializer that's part of the ServiceStack framework, called ServiceStack.Text, which claims to be faster, but you should probably value compatibility and documentation over speed. The same applies to other JSON libraries, such as Jil (github.com/kevin-montrose/Jil) and NetJSON (github.com/rpgmaker/NetJSON), which can be even faster than ServiceStack in benchmarks.
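As a simple illustration of the common case, the following sketch round-trips an object with Json.NET; the BlogPost type is just the illustrative class used earlier in the book:

using Newtonsoft.Json;

var post = new BlogPost
{
    Title = "My Awesome Post",
    Content = "Write something witty here..."
};

// Serialize to a JSON string for storage or transmission...
var json = JsonConvert.SerializeObject(post);

// ...and deserialize it back into a strongly typed object.
var roundTripped = JsonConvert.DeserializeObject<BlogPost>(json);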


If you are after pure performance, and you control all of the endpoints, then you would probably want to use a binary protocol. However, this may limit future interoperability with third-party endpoints that you don't control, so it's best to only use these internally. It would be a bad idea to build your own custom message protocol on top of UDP; if you want to use binary serialization, you should look at something like protobuf-net, which is a Protocol Buffers implementation for .NET. You may also wish to consider Microsoft's Bond framework (github.com/Microsoft/bond) or Amazon's Ion (amznlabs.github.io/ion-docs). You may need to tune these tools for best performance, for example, by changing the default buffer size.

SIMD CPU instructions

Single Instruction Multiple Data (SIMD) is a technique that is available on many modern processors, and it can speed up execution by parallelizing calculations, even in a single thread on one core. SIMD takes advantage of additional instructions available on CPUs to operate on sets of values (vectors) rather than just single values (scalars).

The most common instruction set for this is called Streaming SIMD Extensions 2 (SSE2), and it has been around for over 15 years, since its debut with the Pentium 4. A newer instruction set called Advanced Vector Extensions (AVX) offers superior performance over SSE2 and has been around for over five years. So, if you're using a reasonably recent x86-64 CPU, then you should have access to these extra instructions. Some ARM CPUs (such as those in the Raspberry Pi 2 and 3) contain a similar technology called NEON, officially known as Advanced SIMD. This is not currently supported in .NET, but may be in the future; an official open source library project in C is hosted at projectne10.org.

You can use the following Boolean property to test whether SIMD is supported:

Vector.IsHardwareAccelerated

This property is a JIT intrinsic, and the value is set by RyuJIT at runtime. You can instantiate a generically typed Vector<T>, or use one of the two/three/four-dimensional convenience classes. For example, to create a single-precision floating-point vector, you could use the following generic code:

var vectorf = new Vector<float>(11f);


To create a single-precision floating-point 3D vector instead, you could use this code:

var vector3d = new Vector3(0f, 1f, 2f);

A two-dimensional, double-precision floating-point vector can be a good substitute for a Complex structure, and it will give higher performance on hardware-accelerated systems. Vector2 only supports single-precision floating-point numbers, but you can use the generic type to hold the real and imaginary components of the complex number as doubles. Complex only supports double-precision floating-point numbers but, if you don't need high precision, you could still use the Vector2 convenience class. Unfortunately, this means that it's not simply a drop-in replacement, but the math is different anyway.

You can now use standard vector mathematics, but modifying your algorithms to take advantage of vectors can be complex, and it isn't something you should typically be doing in a web application. It can be useful for desktop applications but, if a process takes a long time in a web request, it's often best to run it in the background, and then it doesn't need to be as quick. We will cover this distributed architecture approach in the next chapter. For this reason, we won't be going into any more detail on SIMD, but you can read more on it if you wish, now that you have a taste of it. You can read some background information at wikipedia.org/wiki/SIMD, and you can find the documentation for the .NET implementation on MSDN at msdn.microsoft.com/en-us/library/dn858218. You could also take a look at the example console application, which is available for download along with this book, as a simple starter for ten.
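Before moving on, here is a minimal sketch of what vectorized code can look like, adding two equal-length float arrays with Vector<T>. The method and array names are illustrative; on hardware without SIMD support, the code still works, just without acceleration:

using System.Numerics;

public static void AddArrays(float[] a, float[] b, float[] result)
{
    var width = Vector<float>.Count; // how many floats fit in one vector register
    var i = 0;
    for (; i <= a.Length - width; i += width)
    {
        var va = new Vector<float>(a, i);
        var vb = new Vector<float>(b, i);
        (va + vb).CopyTo(result, i); // one vector add handles 'width' elements
    }
    for (; i < a.Length; i++) // scalar loop for any remaining elements
    {
        result[i] = a[i] + b[i];
    }
}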

Parallel programming

While SIMD is good at increasing the performance of a single thread running on one core, it doesn't work across multiple cores or processors, and its applications are limited. Modern scaling means adding more CPUs, not simply making a single thread faster. We don't just want to parallelize our data, as SIMD does; we should focus more on parallelizing our processing, as this can scale better. There are various .NET technologies available to help with parallel processing, so that you don't have to write your own threading code. Two such parallel extensions are Parallel LINQ (PLINQ), which extends the LINQ operations you're familiar with, and the Task Parallel Library (TPL).


Task Parallel Library

One of the main features of the TPL is extending loops to run in parallel. However, you need to be careful with parallel processing, as it can actually make your software slower when doing simple tasks. The overheads involved in marshalling the multiple tasks can dwarf the execution of the workload for trivial operations. For example, take the following simple for loop:

for (int i = 0; i < array.Length; i++)
{
    sum += array[i];
}

The array in the preceding for loop contains 100,000 integers, but adding integers is one of the easiest things a CPU can do, and using a for loop on an array is a very quick way of enumerating. This accumulation will complete in under a tenth of a millisecond on a reasonably modern machine. You may think that you would be able to speed this up by parallelizing the operation. Perhaps you could split the array, run the summation on two cores in parallel, and add the results. You might use the following code to attempt this:

Parallel.For(0, array.Length, i =>
{
    Interlocked.Add(ref sum, array[i]);
});

You must use an interlocked add, or you will get an incorrect summation result; if you don't, the threads will interfere with each other, corrupting the data when writing to the same location in memory. However, this code actually runs over 42 times slower than the first example. The extra overhead, the complexity of running many threads, and locking the variable so that only one thread can write to it at a time are just not worth it in this case. Parallelization can be useful for more complex processes, especially if the body of the loop performs some slow operation, such as accessing the file system. However, blocking I/O operations may be better dealt with by using asynchronous access. Parallel access can cause contention, because the access may eventually have to be performed in series at some point, for example, at the hardware level.
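If you do need a parallel aggregation, the TPL has an overload of Parallel.For that accumulates into thread-local state and only synchronizes once per thread. The following is a sketch rather than a recommendation, reusing the array from the example above; for a trivial sum like this, the simple sequential loop is still likely to win:

long sum = 0;
Parallel.For(0, array.Length,
    () => 0L,                                  // initial thread-local value
    (i, loopState, local) => local + array[i], // accumulate without locking
    local => Interlocked.Add(ref sum, local)); // combine each thread's result once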


If we want to perform a more processor-intensive operation, such as hashing multiple passwords, then running the tasks in parallel can be beneficial. The following code performs a PBKDF2 hash on each password in a list and then calculates the Base64 representation of the result:

    foreach (var password in passwords)
    {
        var pbkdf2 = new Rfc2898DeriveBytes(password, 256, 10000);
        Convert.ToBase64String(pbkdf2.GetBytes(256));
    }

We're not using the output in this example, but you may be doing this to upgrade the security of the passwords in your database by migrating them to a more resilient key-stretching algorithm. The input may be plaintext passwords or the output of a legacy one-way hash function, for example, MD5 or an unsalted SHA. We can improve the speed of this on a multicore system by using the Parallel.ForEach loop, with code such as the following:

    Parallel.ForEach(passwords, password =>
    {
        var pbkdf2 = new Rfc2898DeriveBytes(password, 256, 10000);
        Convert.ToBase64String(pbkdf2.GetBytes(256));
    });

This will speed up the process, but by how much depends on many factors, such as the number of passwords in the list, the number of logical processors, and the number of CPU cores. For example, on a Core i5 CPU with two cores, but four logical processors, having only two passwords in the list does not result in a massive improvement (only 1.5 times quicker). With four passwords (or more) in the list, the improvement is better (about 1.9 times quicker). There is still some overhead, so you can't get double the speed with twice the CPU cores. We can see the reason for this difference by looking at the CPU utilization in the task manager during benchmarking. With only two passwords to hash, the CPU graph looks like the following:


    In the preceding graph, we can see that initially, when hashing in series, the CPU is running at about 25%, fully using one logical CPU. When hashing two passwords in parallel, it uses 50%, running on two logical processors. This doesn't translate into a twofold increase in speed due to the intrinsic overheads and the nature of hyper-threading. Hyper-threading is a technology that typically exposes two logical processors to the OS for each physical core. However, the two logical CPUs still share the execution resources of their single core. Although there are two cores on the CPU die, hyper-threading exposes four logical CPUs to the OS. As we only have two threads, because we are hashing two passwords, we can only use two processors. If the threads are executed on different cores, then the speed increase can be good. But if they are executed on processors sharing the same core, then performance won't be as impressive. It is still better than single-threaded hashing due to scheduling improvements, which is what hyper-threading is designed for.


    When we hash four passwords at the same time, the CPU graph looks like the following:

    We can see that now the initial 25% usage jumps to almost full utilization and we are making use of most of the processor. This translates to just under a doubling of the performance as compared to hashing in sequence. There are still significant overheads involved, but, as the main operation is now so much quicker, the tradeoff is worth it.

Parallel LINQ

There are other ways to take advantage of parallel programming, such as LINQ expressions. We could rewrite the previous example as a LINQ expression and it may look something like the following:

    passwords.AsParallel().ForAll(p =>
    {
        var pbkdf2 = new Rfc2898DeriveBytes(p, 256, 10000);
        Convert.ToBase64String(pbkdf2.GetBytes(256));
    });


You can enable these features with the AsParallel() method. The ForAll() method has the same purpose as the loops in previous examples and is useful if the order is unimportant. If ordering is important, then there is an AsOrdered() method, which can help solve this. However, this can reduce the performance gains due to the extra processing involved. This example performs similarly to the previous one that used a parallel loop, which is unsurprising. We can also limit the number of operations that can occur in parallel, using the WithDegreeOfParallelism() method as follows:

    passwords.AsParallel().WithDegreeOfParallelism(2).ForAll(p =>
    {
        var pbkdf2 = new Rfc2898DeriveBytes(p, 256, 10000);
        Convert.ToBase64String(pbkdf2.GetBytes(256));
    });

The preceding example limits the hashes to two at a time and performs similarly to when we only had two passwords in the list, which is to be expected. This can be useful if you don't want to max out the CPU, because there are other important processes running on it. You can achieve the same effect with the TPL by setting the MaxDegreeOfParallelism property on an instance of the ParallelOptions class. This object can then be passed into overloaded versions of the loop methods as a parameter, along with the main body. It's important, when you're using parallel LINQ to query datasets, that you don't lose sight of the best place for the query to be performed. You may speed up a query in the application with parallelization, but the best place for the query to occur may still be in the database, which can be even faster. To read more on this topic, refer back to Chapter 5, Optimizing I/O Performance, and Chapter 3, Fixing Common Performance Problems.
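As the TPL equivalent isn't shown in the text, the following minimal sketch illustrates how MaxDegreeOfParallelism might be applied to the earlier Parallel.ForEach password-hashing example; the passwords collection is assumed to exist as before:

    using System.Security.Cryptography;
    using System.Threading.Tasks;

    var options = new ParallelOptions { MaxDegreeOfParallelism = 2 };
    Parallel.ForEach(passwords, options, password =>
    {
        // Only two iterations will run concurrently, mirroring
        // WithDegreeOfParallelism(2) in the PLINQ version.
        var pbkdf2 = new Rfc2898DeriveBytes(password, 256, 10000);
        Convert.ToBase64String(pbkdf2.GetBytes(256));
    });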

Parallel benchmarking

Let's have a look at the output of a simple .NET Core console application, which benchmarks the techniques that we have just discussed. It shows one situation where parallelization doesn't help and actually makes things worse. It then shows another situation where it does help.


When calculating a sum by accumulating 100,000 random integers between zero and ten, the quickest way is to use a simple foreach loop over an array. Using parallelization here makes the process much slower and, if used naïvely without locking, will give an incorrect result, which is much worse. When performing a more computationally intensive workload, such as a PBKDF2 hashing function on multiple passwords, parallelization can help significantly. The time is almost halved, as it is running across two cores. The final operation, which limits the number of threads, can take a varying amount of time on different runs. This is likely due to the threads sometimes sharing a core and sometimes running on different cores. It can be almost as quick as using all logical cores, depending on the work.


    The CPU graph for the benchmark looks like the following:

The initial parallel summations max out the CPU and are very inefficient. Next, the single-threaded hashing uses only one logical processor (25%), but the other, later hashes make almost full use of both the cores. The final hashing, limited to two passwords at a time, only makes use of half the CPU power.

Parallel programming limitations

Performance problems with web applications don't typically mean increasing speed for a single user on the system in isolation. It's easy to make a web app perform when there's only one user, but the challenge lies in maintaining that single user performance as the number of users scales up and you have wall-to-wall requests. The parallel programming techniques discussed in this section have limited use in scaling web applications. You already have a parallelized system, as each user request will be allocated its own resources. You can simply add more instances, agents, or machines to meet demand. The problem then is in distributing work efficiently between them and avoiding bottlenecks for shared resources. We'll cover this more in the next chapter, but first let's look at some things that you should be avoiding.


Practices to avoid

We've shown some ways of speeding up software, but it's often better to illustrate what not to do and how things can go wrong. Web applications generally perform well if no bad practices have been followed and here we'll highlight a few things you should watch out for.

Reflection

Reflection is the process of programmatically inspecting your code with other code, and digging into its internals at runtime. For example, you could inspect an assembly when it is loaded to see what classes and interfaces it implements so that you can call them. It is generally discouraged and should be avoided if possible. There are usually other ways to achieve the same result that don't require reflection, although it is occasionally useful. Reflection is often bad for performance, and this is well-documented, but, as usual, it depends on what you're using it for. What is new is that there are significant changes to reflection for .NET Core. The API has changed and it is now optional. So, if you don't use reflection, you don't have to pay the performance penalty. This allows the native tool chain to optimize compilation better, as reflection adds restrictions to what can be done with static linking. There is an extra method on the reflection API now, so, whereas previously you would have called something like myObject.GetType().GetMembers(), you now need to call it as myObject.GetType().GetTypeInfo().GetMembers() by inserting the new GetTypeInfo() method, which is in the System.Reflection namespace. If you must use reflection, then it is best not to perform it repeatedly or in a tight loop. However, it would be fine to use it once during the startup of your application. Yet, if you can avoid using it entirely, you can benefit from some of the new improvements in .NET Core, for example, native compilation and the performance boost that it brings.
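As a hedged sketch of the advice to reflect once rather than in a tight loop, the following example caches a PropertyInfo in a static field when the type is first used; the Widget type and Name property are hypothetical and only illustrate the pattern:

    using System.Reflection;

    public class Widget
    {
        public string Name { get; set; }
    }

    public static class WidgetReflection
    {
        // The reflection lookup happens once, when this type is initialized,
        // not on every call.
        private static readonly PropertyInfo NameProperty =
            typeof(Widget).GetTypeInfo().GetDeclaredProperty("Name");

        public static string ReadName(Widget widget)
        {
            return (string)NameProperty.GetValue(widget);
        }
    }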

Regular expressions

A regular expression (regex) can be very useful, but can perform badly and is typically misused in situations where another solution would be better. For example, a regex is often used for e-mail validation when there are much more reliable ways to do this.


If reusing a regex repeatedly, you may be better off compiling it for performance by specifying the RegexOptions.Compiled option in the constructor. This only helps if you're using the regex a lot and involves an initial performance penalty. So, ensure that you check that there is actually an improvement and that it isn't now slower. The RegexOptions.IgnoreCase option can also affect performance, but it may in fact slow things down, so always test for your inputs. Compiling has an effect on this too and you may want to use RegexOptions.CultureInvariant in addition, to avoid comparison issues. Be wary of trusting user input to a regex. It is possible to get them to perform a large amount of backtracking and use excessive resources. You shouldn't allow unconstrained input to a regex, as they can be made to run for hours. Regexes are often used for e-mail address validation, but this is usually a bad idea. The only way to fully validate an e-mail address is to send an e-mail to it. You can then have the user click a link in the e-mail to indicate that they have access to that mailbox and have received it. E-mail addresses can vary a lot from the common ones that people are regularly exposed to, and this is even truer with the new top-level domains being added. Many of the regexes for e-mail address validation found online will reject perfectly valid e-mail addresses. If you want to assist the user, and perform some basic e-mail address validation on a form, then all you can sensibly do is check that there is an @ symbol in it (and a . after that) so that the e-mail is in the form user@domain.tld. You can do this with a simple string test and avoid the performance penalty and security risk of a regular expression.
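A minimal sketch of such a string test might look like the following; it is only a sanity check of the form just described, not real validation:

    public static bool LooksLikeEmailAddress(string address)
    {
        if (string.IsNullOrWhiteSpace(address))
        {
            return false;
        }
        // Require an @ symbol that is not the first character,
        // followed later by at least one . character.
        var at = address.IndexOf('@');
        return at > 0 && address.IndexOf('.', at) > at + 1;
    }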

String concatenation in tight loops

As strings are immutable and can't change, when you concatenate a string, a new object is created. This can cause performance problems and issues with memory use if you do it a lot inside a tight loop. You may find it better to use a string builder or another approach. However, don't fret too much about this, as it only applies at a large scale. Always test to see if it is genuinely a problem and don't micro-optimize where you don't need to. It's good general advice to work out where your code is spending most of its time, and focus your optimization there. It's obviously much better to optimize code executed millions of times inside a loop than code that only runs occasionally.
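For illustration, a sketch of the StringBuilder alternative is shown below; as the text says, only reach for this after measuring that concatenation is genuinely a problem:

    using System.Text;

    var builder = new StringBuilder();
    for (int i = 0; i < 1000000; i++)
    {
        // Appends reuse the builder's internal buffer rather than
        // allocating a new string on every iteration.
        builder.Append(i).Append(',');
    }
    var result = builder.ToString();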


Dynamic typing

C# is a statically typed language and variable types are checked at compile time, but it does have some dynamic features. You can use the dynamic type and objects such as ExpandoObject to get some of the features of a dynamically typed language. The var type is not in fact dynamic, and is simply inferred at compile time. Dynamic typing has a performance and safety penalty, so it is best avoided if you can find another way to solve your problem. For example, the ViewBag in ASP.NET MVC is dynamic, so it is best not to use ViewBag, and use a well-defined view model instead. This has many other benefits apart from performance, such as safety and convenience.
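The following hedged sketch contrasts the two approaches in a controller; the WidgetsController and WidgetViewModel names are hypothetical:

    using Microsoft.AspNetCore.Mvc;

    public class WidgetViewModel
    {
        public string Title { get; set; }
    }

    public class WidgetsController : Controller
    {
        // Dynamic ViewBag: the Title member is not checked at compile time
        // and access goes through the dynamic runtime binder.
        public IActionResult DetailsDynamic()
        {
            ViewBag.Title = "Widgets";
            return View();
        }

        // Strongly typed view model: checked by the compiler and faster.
        public IActionResult Details()
        {
            return View(new WidgetViewModel { Title = "Widgets" });
        }
    }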

    Synchronous operations Synchronous methods block execution, and should be avoided if possible, especially if they are slow or access I/O. We've covered asynchronous (async for short) operations in previous chapters. Understanding how to use async is important for modern high performance programming, and new language features make it more accessible than ever. If an async method is available, then it should generally be used in preference to the synchronous blocking version. The async and await keywords make asynchronous programming much easier than it used to be, but, as covered in Chapter 3, Fixing Common Performance Problems, the effects on web applications are not always visible for a lone user. These convenient features allow you to serve more users simultaneously by returning threads to the pool during downtime, while waiting for operations to complete. The threads can then be used to service other users' requests, which allows you to handle more users with less servers than otherwise. Async methods can be useful, but the big gains come not from writing asynchronous code, but from having an asynchronous architecture. We will cover distributed architecture in the next chapter, when we discuss message queuing.
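As a small illustrative sketch (the controller name and URL are hypothetical), an async action that awaits an I/O call returns its thread to the pool while the request is in flight:

    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    public class StatusController : Controller
    {
        private static readonly HttpClient Client = new HttpClient();

        public async Task<IActionResult> Index()
        {
            // The thread is released back to the pool here instead of
            // blocking until the remote service responds.
            var body = await Client.GetStringAsync("https://example.com/status");
            return Content(body);
        }
    }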

Exceptions

Exceptions should, as the name suggests, be reserved for exceptional circumstances. Exceptions are slow and expensive and shouldn't be used as flow control in business logic if you know that an event is likely to occur.


    This isn't to say that you shouldn't use exception handling—you should. However, it should be reserved for events that are genuinely unexpected and rare. If you can predict ahead of time that a condition may occur then you should handle it explicitly. For example, the disk becoming full and your code not being able to write a file because there is no space is an exceptional situation. You would not expect this to normally happen, and you can just try the file operation and catch any exceptions. However, if you are trying to parse a date string or access a dictionary, then you should probably use the special TryParse() and TryGetValue() methods and check for null values rather than just relying on exception handling.
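For example, a minimal sketch of this Try pattern might look like the following (the input values are placeholders):

    using System;
    using System.Collections.Generic;

    string userInput = "not a date";
    DateTime parsed;
    if (!DateTime.TryParse(userInput, out parsed))
    {
        // Handle the expected failure explicitly instead of
        // catching a FormatException.
    }

    var prices = new Dictionary<string, decimal>();
    decimal price;
    if (!prices.TryGetValue("widget", out price))
    {
        // Handle the missing key explicitly instead of
        // catching a KeyNotFoundException.
    }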

Summary

In this chapter, we discussed some techniques that can improve the performance of code execution and dug into the projects that make up .NET Core and ASP.NET Core. We explored data structures, serialization, hashing, and parallel programming, and how to benchmark for measuring relative performance. Linear performance characteristics are easier to scale, and code that does not exhibit this behavior can be slow when the load increases. Code that has an exponential performance characteristic or has erratic outliers (which are rare but very slow when they occur) can cause performance headaches. It is often better to aim for code that, while being slightly slower in normal cases, is more predictable and performs consistently over a large range of loads. The main lesson here is to not blindly apply parallel programming and other potentially performance-enhancing techniques. Always test to make sure that they make a positive impact, as they can easily make things worse. We aim for the situation where everything is awesome, but, if we're not careful, we can make everything awful by mistake. In the next chapter, you'll learn about caching and message queuing, which are two advanced techniques that can significantly improve the performance of a system.


    7

Learning Caching and Message Queuing

Caching is incredibly useful and can be applied to almost all layers of an application stack. However, it's hard to always get caching working correctly, so, in this chapter, we will cover caching at the web, application, and database levels. We will show you how to use a reverse proxy server to store the results of your rendered web pages and other assets. We'll also cover caching at lower levels, using an in-memory data store to speed up access. You will learn how to ensure that you can always flush (or bust) your cache if you need to force the propagation of updates. This chapter also covers asynchronous architecture design using message queuing and abstractions that encapsulate various messaging patterns. You will learn how to perform a long running operation (such as video encoding) in the background, while keeping the user informed of its progress. You will learn how to apply caching and message queuing software design patterns to slow operations so that they don't have to be performed in real time. You'll also learn about the complexity that these patterns can add, and understand the tradeoffs involved. We'll see how to combat these complexities and mitigate the downsides, in Chapter 8, The Downsides of Performance-Enhancing Tools. The topics covered in this chapter include the following:

Web caching background
JavaScript service workers
Varnish proxy and IIS web servers
Redis and Memcached in-memory application caching


Message Queuing and messaging patterns
RabbitMQ and its various client libraries

Why caching is hard

Caching is hard, but it's not hard because it's difficult to cache something. Caching indefinitely is easy; the hard part is invalidating the cache when you want to make an update. There's a well-used quote from the late Phil Karlton of Netscape that goes: "There are only two hard things in Computer Science: cache invalidation and naming things." There are also many humorous variants on it, as used previously throughout this book. This sentiment may be a slight exaggeration, but it highlights how complex removing your "done-computer-stuff™" from your "quick-things-box 2.0™" is perceived to be. Naming things is genuinely very hard though. Caching is the process of storing a temporary snapshot of some data. This temporary cache can then be used instead of regenerating the original data (or retrieving it from the canonical source) every time it is required. Doing this has obvious performance benefits, but it makes your system more complicated and harder to conceptualize. When you have many caches interacting, the results can appear almost random unless you are disciplined in your approach. When you are reasoning about caching (and message queuing), it is helpful to dispel the idea that data only exists in a single consistent state. It is easier if you embrace the concept that data has a freshness, and is always stale by some amount. This is in fact always the case, but the short timeframes involved in a small system mean that you can typically ignore it in order to simplify your thinking. However, when it comes to caching, the timescales are longer and so freshness is more important. A system at scale can only be eventually consistent, and various parts of it will have a different temporal view of the data. You need to accept that data can be in motion, otherwise you're just not thinking four-dimensionally! As a trivial example, consider a traditional static website. A visitor loads a page in their browser, but this page is now instantly out-of-date. The page on the server could have been updated just after the visitor retrieved it, but they will not know, as the old version will remain in their browser until they refresh the page.


    If we extend this example to a database-backed web application, such as an ASP.NET or WordPress website, then the same principle applies. A user retrieves a web page generated from data in the database, but it could be out-of-date as soon as it is loaded. The underlying data could have changed, but the page containing the old data remains in the browser. By default, web apps typically regenerate HTML from the DB for every page load, but this is incredibly inefficient if the data has not changed. It is only done like this so that when a change is made, it shows up immediately as soon as the page is refreshed. However, a user may have an old page in their browser, and you have limited control over this. So you may as well cache this page on the server as well and only remove it when the underlying data in the database changes. Caching the rendered HTML like this is often essential for maintaining performance at a scale beyond simply a small number of users.

Web caching

The first category of caching that we'll discuss is at the web level. This involves storing the final output of your web stack as it would be sent to users so that, when requested again, it's ready to go and doesn't need to be regenerated. Caching at this stage removes the need for expensive database lookups and CPU-intensive rendering at the application layer. This reduces latency and decreases the workload on your servers, allowing you to handle more users and serve each user rapidly. Web caching typically occurs on your web servers or on reverse proxy servers, which you have put in front of your web servers to shield them from excessive load. You might also choose to hand this task over to a third party, such as a CDN. Here we will cover two pieces of web server and proxy server software, IIS and Varnish. However, many more web caching and load balancing technologies are available, for example, NGINX or HAProxy. Caching at the web layer works best for static assets and resources such as JavaScript, CSS, and images. Yet it can also work for anonymous HTML that is rarely updated but regularly accessed, such as a homepage or landing page, which is unauthenticated and not customized for a user. We touched upon proxy servers in Chapter 3, Fixing Common Performance Problems, and covered web layer caching a little in Chapter 4, Addressing Network Performance. However, in this chapter, we'll go into more detail on web caching.


Caching background

Before we delve into the implementation details, it helps to understand a little about how caching works on the web. If you take the time to study the mechanisms at work, then caching will be less confusing and less frustrating than if you just dived straight in. It is helpful to read and understand the relevant HTTP specifications. However, don't assume that software always strictly adheres to these web standards even if it claims to. First, let's look at a typical network setup which you may be traversing with your HTTP traffic. The following diagram illustrates an example of a common configuration for a web application:

    As seen in the preceding diagram, the laptop and tablet users are connecting through a caching forward proxy server (which may be on a corporate network or at an ISP). The mobile user is connecting directly over the internet. However, all users are going through a CDN before reaching your infrastructure. After your firewall (not shown), there is an appliance which terminates the TLS connections, balances the load between web servers, and acts as a caching reverse proxy. These functions are often performed by separate devices, but we've kept things simple here.


    Copies of your resources will be kept on your web servers, your reverse proxy, your CDN, any forward proxies, and in the browsers on all user devices. The simplest way to control the caching behavior of these resources is to use in-band signaling and add HTTP headers to your content, declaring cache control metadata only in a single place. It's good practice to apply the same standard HTTP caching techniques to your own web servers and proxies even though you could customize them and flush their caches at will. This not only cuts down on the amount of configuration that you have to do, and avoids duplicated work, but it also ensures that any caches that you don't control should behave correctly too. Even when using HTTPS, the browser will still perform caching and there may also be transparent corporate proxies or meddling ISP captive portals in the way.

HTTP headers

HTTP caching involves setting cache control headers in your responses. There are many of these headers, which have been added over the years from different standards and various versions of the protocol. You should know how these are used, but you should also understand how the uniqueness of a cacheable resource is determined—for example, by varying the URL or by altering only a part of it, such as query string parameters. Many of these headers can be categorized by function and the version of HTTP that they were introduced with. Some headers have multiple functions and some are non-standard, yet are almost universally used. We won't cover all of these headers, but we will pick out some of the most important ones. There are broadly two types of caching header categories. The first defines an absolute time during which the cache can be reused, without checking with the server. The second defines rules which the client can use to test with the server if the cache is still valid. Most instructional headers (those that issue caching commands) fit into one of these two header categories. In addition to these, there are many purely informational headers, which provide details about the original connection and client, which may otherwise be obscured by a cache (for example, the original client IP address). Some headers, such as Cache-Control, are part of the latest standard, but others, such as Expires, are typically used only for backwards compatibility, in case there is an ancient browser or an old proxy server in the way. However, this practice is becoming increasingly unnecessary as infrastructure and software is upgraded.


The latest caching standard in this case is HTTP/1.1 as HTTP/2 uses the same caching directives (RFC 7234). Some headers date from HTTP/1.0, which is considered a legacy protocol. Very old software may only support HTTP/1.0. Standards may not be implemented correctly in all applications. It is a sensible idea to test that any observed behavior is as expected. The Age header is used to indicate how long (in seconds) a resource has been in a cache. On the other hand, the ETag header is used to specify an identifier for an individual object or a particular unique version of that object. The Cache-Control header tells caches if the resource may be cached. It can have many values including a max-age (in seconds) or no-cache and no-store directives. The confusing, yet subtle, difference between no-cache and no-store is that no-cache indicates that the client should check with the server before using the resource, whereas no-store indicates that the resource shouldn't be cached at all. To prevent caching, you should generally use no-store. The ASP.NET Core ResponseCache action attribute sets the Cache-Control header and is covered in Chapter 4, Addressing Network Performance (a short example also follows the header list below). However, this header may be ignored by some older caches. Pragma and Expires are older headers used for backward compatibility and they perform some of the same functions that the Cache-Control header now handles. The X-Forwarded-* headers are used to provide more information about the original connection to the proxy or load balancer. These are non-standard, but widely-used, and are standardized as the combined Forwarded header (RFC 7239). The Via header also provides some proxy information, and Front-End-Https is a non-standard Microsoft header, which is similar to X-Forwarded-Proto. These protocol headers are useful for telling you if the original connection used HTTPS when this is stripped at the load balancer. If you are terminating the TLS connections at a load balancer or proxy server, and are also redirecting users to HTTPS at the application level, then it is important to check the Forwarded headers. You can get stuck in an infinite redirection loop if your web servers desire HTTPS but only receive HTTP from the load balancer. Ideally, you should check all varieties of the headers, but, if you control the proxy, you can decide what headers to use. There are lots of different HTTP headers that are involved in caching. The following list includes some of the ones that we haven't covered here. The large quantity of headers should give you an idea of just how complicated caching can be.


If-Match
If-Modified-Since
If-None-Match
If-Range
If-Unmodified-Since
Last-Modified
Max-Forwards
Proxy-Authorization
Vary
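Returning to the ResponseCache attribute mentioned above, a hedged sketch of how it might be applied in a controller looks like the following; the action names and durations are illustrative:

    using Microsoft.AspNetCore.Mvc;

    public class HomeController : Controller
    {
        // Emits Cache-Control: public, max-age=3600 so that browsers and
        // proxies may reuse the response for an hour.
        [ResponseCache(Duration = 3600, Location = ResponseCacheLocation.Any)]
        public IActionResult Index()
        {
            return View();
        }

        // Emits a no-store directive for responses that must never be cached.
        [ResponseCache(NoStore = true, Location = ResponseCacheLocation.None)]
        public IActionResult Account()
        {
            return View();
        }
    }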

Cache busting

Cache busting (also known as cache bursting, cache flushing, or cache invalidation) is the hard part of caching. It is easy to put an item in a cache, but, if you don't have a strategy ahead of time to manage the inevitable change, then you may come unstuck. Getting cache busting correct is usually more important with web-level caching. This is because, with server side caching (which we'll discuss later in this chapter), you are in full control and can reset if you get it wrong. A mistake on the web can persist and be difficult to remedy. In addition to setting the correct headers, it is helpful to vary the URL of resources when their content changes. This can be done by adding a timestamp, but often a convenient solution is to use a hash of the resource content and append this as a parameter. Many frameworks, including ASP.NET Core, use this approach. For example, consider the following JavaScript tag in a web page:

    <script src="js/site.js"></script>

If you make a change to site.js, then the browser (or proxy) won't know that it has altered and may use a previous version. However, it will re-request it if the output is changed to something like the following:

    <script src="js/site.js?v=[base64url-encoded SHA-256 hash of site.js]"></script>

    Here the v (version) parameter is the Base64 URL encoded, SHA-256 hashed content of the site.js file. Making a small change to the file will radically alter the hash due to the avalanche effect.
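The framework computes this for you, but a rough sketch of the idea in C# might look like the following (this is an illustration of the approach, not ASP.NET Core's actual implementation):

    using System;
    using System.IO;
    using System.Security.Cryptography;

    public static string ComputeVersion(string path)
    {
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            var hash = sha256.ComputeHash(stream);
            // Convert to the URL-safe Base64 alphabet described below.
            return Convert.ToBase64String(hash)
                .Replace('+', '-')
                .Replace('/', '_')
                .TrimEnd('=');
        }
    }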


Base64 URL encoding is a variant on standard Base64 encoding. It uses different non-alphanumeric characters (+ becomes - while / changes to _) and percent encodes the = character (which is also made optional). Using this safe alphabet (from RFC 4648) makes the output suitable for use in URLs and filenames. In ASP.NET Core, you can easily use this feature by adding the asp-append-version attribute with a value of true in your Razor views like so:

    <script src="~/js/site.js" asp-append-version="true"></script>

Service workers

If you are writing a client-side web app, rather than a simple dynamic website, then you may wish to exert more control over caching using new browser features. You can do this by writing your cache control instructions in JavaScript (technically ECMAScript 6 (ES6)). This gives you many more options when it comes to a visitor using your web app offline. A service worker gives you greater control than the previous AppCache API. It also opens the door to features such as mobile web app install banners (which prompt a user to add your web app to their home screen). However, it is still a relatively new technology. Service workers are a new experimental technology, and as such, are currently only supported in some recent browsers (partially in Chrome, Firefox, and Opera). You may prefer to use the previous deprecated AppCache method (which is almost universally supported) until adoption is more widespread. Information on current browser support is available at caniuse.com/#feat=serviceworkers and caniuse.com/#feat=offlineapps (for AppCache). A more detailed service worker breakdown is available at jakearchibald.github.io/isserviceworkerready. A service worker can do many useful things (such as background synchronization and push notifications), but the interesting parts, from our point of view, are the scriptable caches, which enable offline use. It effectively acts as an in-browser proxy server and can be used to improve the performance of a web application in addition to allowing interaction without an internet connection (after initial installation, of course). There are other types of web workers apart from service workers (for example, audio workers, dedicated workers, and shared workers), but we won't go into these here. All web workers allow you to offload work to a background task so that you don't make the browser unresponsive (by blocking the main UI thread with your work).


    Service workers are asynchronous and rely heavily on JavaScript promises, which we'll assume you are familiar with. If you're not, then you should read up on them, as they're useful in many other contexts involving asynchronous and parallel scripting. Service workers require the use of HTTPS (yet another good reason to use TLS on your entire site). However, there is an exception for localhost, so you can still develop locally.

Service worker example

To install a service worker, first create a file for it (which is served over HTTPS). In the following example, this file is called service-worker.js. Then, inside a script block on your page, you check for browser support and register the worker by calling navigator.serviceWorker.register().


The preceding code snippet not only renders the standard script tag referencing the CDN copy of the library, but also a small inline test that loads a local fallback copy if the CDN version fails to load.

    Previously, you would have to do this manually, usually in a hurry when your CDN went down. This new helper also works for other scripts and CSS files. For more examples, take a look at _Layout.cshtml in the default template. It's important to use a secure HTTPS connection to CDN resources in order to avoid mixed content warnings or script loading errors in browsers; most popular CDNs now support HTTPS. For additional information on CDNs, see Chapter 4, Addressing Network Performance.

When not to cache

There are certain situations when you shouldn't cache pages or at least you need to be very careful in how you go about it. As a general rule of thumb, caching the rendered output of an authorized page is a bad idea. In other words, if a user has logged into your site, and you are serving them customized content (which could easily be sensitive), then you need to consider caching very carefully. If you accidentally serve one user's cached content to another, then at best, it will be annoying, as the personalization will be incorrect. At worst, you could expose private information to the wrong person and potentially get into legal trouble. This is similar to the general rule of not universally enabling CORS if you serve authenticated content. It can be done successfully, but you need to understand the mechanisms in order to configure it to work safely. For caching, you would need a unique identifier in the URL that can't be guessed. Some dynamic cache control systems, used by network appliances and CDNs, can make use of cookies for this, but it's beyond normal HTTP-based cache control. It is similar to how you might need sticky sessions on a load balancer, because your application was not designed to be stateless. For authenticated caching, it may be better to not cache at the web level and instead cache at the application level and below. This allows you to cache smaller discrete chunks rather than a whole page, which can enhance reusability.


Application layer caching

Application level (or layer) caching means storing reusable data temporarily inside your infrastructure, but not in the main database. This can be in the memory of your application or web servers, but with multiple servers, this tends to be in a distributed in-memory store such as Memcached or Redis. You can, of course, use both in-memory stores on your web servers and a centralized cache. However, if you have multiple web servers, then you will need a way to synchronize the caches. You can use publish-subscribe (pub/sub) messaging for this, which we will cover later in this chapter. The following diagram shows a simple centralized caching setup. In a real situation, you would probably have multiple clustered cache servers.

    The web servers can now ask the cache if the data they need is in there, before going to the database. This reduces load on the DB and is often quicker, as the data will be in memory if present. If there is a cache miss and the database does need to be queried, then the result can be written to the cache for other servers to use.
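A hedged sketch of this cache-aside pattern, using ASP.NET Core's IDistributedCache abstraction (which can be backed by Redis or another store), might look like the following; the ProductService type and key naming are hypothetical:

    using System;
    using System.Threading.Tasks;
    using Microsoft.Extensions.Caching.Distributed;

    public class ProductService
    {
        private readonly IDistributedCache _cache;

        public ProductService(IDistributedCache cache)
        {
            _cache = cache;
        }

        public async Task<string> GetDescriptionAsync(int id)
        {
            var key = "product:description:" + id;

            // Ask the cache first.
            var cached = await _cache.GetStringAsync(key);
            if (cached != null)
            {
                return cached;
            }

            // Cache miss: query the database and write the result back
            // with an expiry so that stale data eventually falls out.
            var fresh = await LoadDescriptionFromDatabaseAsync(id);
            await _cache.SetStringAsync(key, fresh, new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            });
            return fresh;
        }

        private Task<string> LoadDescriptionFromDatabaseAsync(int id)
        {
            // Placeholder for the real database query.
            return Task.FromResult("Description for product " + id);
        }
    }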


Redis

Redis is a popular in-memory store that can also persist data to disk for permanent storage. It runs best on Linux, but a version is available on Windows for development purposes. Redis additionally supports pub/sub messaging, which can be useful for cache invalidation. You can read more about Redis at redis.io. You may wish to use the Windows version of Redis for local development work, but still deploy to the supported version on Linux. You can get the Windows version at github.com/MSOpenTech/redis or you could run the Linux version in a virtual machine, perhaps using Docker or Vagrant. Redis cache is provided as a service on both Azure and AWS (ElastiCache offers both Memcached and Redis). You don't need to manage your own server, but because the technology is not cloud-specific, you won't get locked in if you want to migrate in the future. As Redis keeps the entire dataset in memory, but is also able to persist to disk, it can be suitable as a primary data store, unlike Memcached. However, it is more commonly used only as a cache, especially if a cloud service is used and it's paired with a cloud database such as Azure SQL Database or AWS Relational Database Service (RDS). There are two recommended .NET C# clients for Redis: ServiceStack.Redis and StackExchange.Redis. The Stack Exchange client is used heavily on sites such as Stack Overflow and is easier to use correctly than the ServiceStack one. You can read more about it at github.com/StackExchange/StackExchange.Redis and install it via NuGet. If using caching at the application layer, then you will probably need to write a significant amount of custom code. You will also need to work out what format to serialize your data into for storage in the cache. If serving directly to browsers, then JSON could be useful. But if it is to be used internally, then you may prefer a binary format such as MS Bond or Protocol Buffers. See Chapter 6, Understanding Code Execution and Asynchronous Operations, for more on serialization formats and libraries.
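The basics of the StackExchange.Redis client mentioned above look something like the following hedged sketch; the connection string and key names are assumptions for local development:

    using System;
    using StackExchange.Redis;

    // The multiplexer is expensive to create and is designed to be shared,
    // so connect once and reuse it.
    var redis = ConnectionMultiplexer.Connect("localhost:6379");
    var db = redis.GetDatabase();

    // Cache a value with a five-minute expiry.
    db.StringSet("greeting", "hello", TimeSpan.FromMinutes(5));

    // Read it back later; this returns null if the key has expired.
    string greeting = db.StringGet("greeting");
    Console.WriteLine(greeting);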


Database result set caching

Caching at the database level is similar to application level caching, and it uses similar infrastructure, but requires less custom code. You can use the caching features built into an O/RM, which may make it easier to retrofit. When we talk about database caching here, we are not referring to caching within the database engine itself. DBs use extensive performance enhancing techniques, such as query caching, and hold lots of their data in memory. However, this is abstracted away from the developer, and the caching we mention here refers to storing the output of a query in an application cache. This is similar to, but subtly different from, the previous section, where you would be storing custom objects. In the context of O/RMs (such as NHibernate and Entity Framework), this is known as second-level caching. First-level caching generally already happens per session, by default, and is used to help avoid things like Select N+1 problems. Second-level caching operates at a level higher than individual transactions and allows you to share cached data across multiple database sessions over your entire application.

Message queuing

A message queue (MQ) is an asynchronous and reliable way of moving data around your system. It is useful for offloading work from your web application to a background service, but can also be used to update multiple parts of your system concurrently. For example, distributing cache invalidation data to all of your web servers. MQs add complexity and we will cover managing this in Chapter 8, The Downsides of Performance-Enhancing Tools. However, they can also assist in implementing a microservices architecture where you break up your monolith into smaller parts, interfaced against contracts. This can make things easier to reason about within large organizations, where different teams manage the various parts of the application. We will discuss this in more detail in the next chapter, as queues aren't the only way of implementing this style of architecture. For example, HTTP APIs can also be used to do this.


Coffee shop analogy

If using MQs, then you may need to implement extra reconciliation logic for errors occurring in the background. This is best explained with a coffee shop analogy. If you purchase a takeaway coffee, perhaps in a branch of a popular multinational chain of franchised caffeinated beverage outlets (that dislikes paying tax), then your drink is prepared asynchronously to the payment processing. Typically, you place your order and a barista will start to prepare your coffee, before you have paid for it. Additionally, you will normally pay before receiving your drink. There are many things that could go wrong here, but they are rare enough for the extra cost to be worth it, as it speeds up the ordinary workflow. For example, you may find that you are unable to pay after placing your order, but the coffee creation process has already begun. This would result in wasted stock, unless there is another customer waiting whom it could be used for. Or perhaps, after you have paid, the barista discovers that a key ingredient for your order is missing. They could either offer you a refund, or negotiate a different drink. Although more complex, this process is clearly superior to performing the actions in series. If you had to demonstrate that you had the means to pay, your drink was made, and then you completed paying, only one customer could be served at a time. Assuming there are enough staff, payment processing and drink preparation can be performed in parallel, which avoids a long queue of customers. Running a coffee shop in this way makes intuitive sense and yet, in a web application, it is common to have a relatively long running transaction complete before informing the user of the result. In some situations, it may be better to assume the action will succeed, inform the user of this immediately and have a process in place in case it goes wrong. For example, payment processing gateways can be slow and unreliable, so it may be better to charge a user's credit card after accepting an order. However, this means that you can no longer handle a failure by showing the user an error message. You will have to use other methods of communication. When you order items on Amazon, they take payment details immediately, but they process the payment in the background and send you e-mails with the results. If the payment failed, they would need to cancel the order fulfilment and notify you. This requires extra logic, but is quicker than processing the payment transaction and checking stock before confirming the order.


Message queuing styles

There are, broadly, two styles of message queuing: with and without a central broker. With a broker, all messages go through a hub, which manages the communication. Examples of this style include RabbitMQ, ActiveMQ, and MS BizTalk. There are also brokerless styles (that don't use a broker), where communication between nodes is direct. An example of this style is ZeroMQ (ØMQ), which has a native C# port called NetMQ. Cloud queuing services, including Azure Service Bus, Azure Queue storage, and AWS Simple Queue Service (SQS), are also available. However, as with all non-generic cloud services, you should be wary of getting locked in. There are cloud providers of standard RabbitMQ hosting, which makes migration to your own infrastructure easier down the line if you don't initially want to run your own server. For example, CloudAMQP offers RabbitMQ hosting on multiple cloud platforms. RabbitMQ implements the Advanced Message Queuing Protocol (AMQP), which helps to ensure interoperability between different MQ brokers, for example, to allow communication with the Java Message Service (JMS). Azure Service Bus also supports AMQP, but a big benefit of RabbitMQ is that you can install it on your development machine for local use, without an internet connection. There is also Microsoft Message Queuing (MSMQ), which is built into Windows. While this is useful for communication between processes on a single machine, it can be tricky to get it working reliably between multiple servers.

Common messaging patterns

There are two common types of messaging patterns: point-to-point unicast and publish-subscribe. These send messages to a single recipient and to many recipients, respectively.

Unicast

Unicast is the standard message queuing approach. A message is sent from one service process or software agent to another. The queuing framework will ensure that this happens reliably and will provide certain guarantees about delivery. This approach is dependable, but doesn't scale well as a system grows, because each node would need to know about all its recipients. It would be better to loosely couple system components together so that they don't need to have knowledge about any of the others.


This is often achieved by using a broker, which has three main advantages:

By using a broker, you can decouple processes from each other so that they aren't required to know about the system architecture or be alive at the same time. They only care about the message types and the broker takes care of routing the message to the correct destination.

Broker queues enable an easy distribution of work pattern, especially when combining multiple producers. You can have multiple processes consuming the same queue and the broker will allocate messages to them in a round-robin fashion. This is a simple way of building a parallel system, without having to do any asynchronous programming or worrying about threads. You can just run multiple copies of your code, perhaps on separate machines if constrained by hardware, and they will run simultaneously.

You can easily broadcast or multicast a particular type of message, perhaps to indicate that an event has occurred. Other processes that care about this event can listen to the messages without the publisher knowing about them. This is known as the pub/sub pattern.

Pub/sub

Pub/sub, as the name suggests, is where a software agent publishes a message onto a queue, and other agents can subscribe to that type of message to receive it. When a message is published, all subscribers receive it, but crucially the publisher does not require any knowledge of the subscribers or even need to know how many there are (or even if there are any at all). Pub/sub is best done with the broker style of message queuing architecture. It can be done without a broker, but it is not particularly reliable. If your use case can tolerate message loss, then you may be able to get away without a broker. But, if you require guaranteed delivery, then you should use one. Using the RabbitMQ broker also allows you to take advantage of exchanges which can perform complex routing of messages. If you don't want to lose messages, then you need to carefully design your pub/sub system (even if using a broker). A published message that has no subscribers may simply disappear into the ether without a trace and this might not be what you want.
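As a hedged illustration of broker-based pub/sub, the following sketch uses the official RabbitMQ .NET client's classic API with a fanout exchange; the exchange name and message content are hypothetical, the publisher and subscriber are shown together for brevity (in practice they would be separate processes), and, as noted later in this chapter, .NET Core support for this client was still in progress at the time of writing:

    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        // Publisher: every queue bound to a fanout exchange gets a copy.
        channel.ExchangeDeclare("cache-invalidation", ExchangeType.Fanout);
        var body = Encoding.UTF8.GetBytes("product:42 changed");
        channel.BasicPublish("cache-invalidation", "", null, body);

        // Subscriber: bind a temporary queue to the exchange and listen.
        var queueName = channel.QueueDeclare().QueueName;
        channel.QueueBind(queueName, "cache-invalidation", "");
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, args) =>
        {
            var message = Encoding.UTF8.GetString(args.Body);
            // React to the event here, for example by evicting a cache entry.
        };
        channel.BasicConsume(queueName, true, consumer);
    }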


    The following diagram shows the differences between simple message forwarding, work distribution, and pub/sub:

Clearly, if you require a reliable broker, then it needs to be highly available. Typically, you would cluster multiple brokers together to provide redundancy. Using a broker also allows you to write custom rules to define which subscribers receive what messages. For example, your payment system may only care about orders, but your logging server may want to get all messages from all systems. You will want to monitor not only your broker servers but also the length of the queues. In other words, the number of messages in each queue should always be steady and close to zero. If the number of messages in a queue is steadily growing, this might indicate a problem which your operations team will need to resolve. It may be that your consumers can't process messages as fast as your producers are sending them and you need to add more consumers. This could be automated, and your monitoring software could spin up an extra instance to scale your system, meeting a temporary spike in demand.


RabbitMQ

RabbitMQ is a free and open source message queuing server. It's written in Erlang, which is the same robust language that WhatsApp uses for its messaging backend. RabbitMQ is currently maintained by Pivotal (whose labs also make the Pivotal Tracker agile project management tool), but it was originally made by LShift. It was then acquired by VMware before being spun out as a joint venture. It's distributed under the Mozilla Public License (MPL) v1.1, an older version of the license that the Firefox web browser uses. The messaging server can be used from many different languages and frameworks such as Java, Ruby, and .NET. This can make it helpful for linking diverse applications together, for example, a Rails app that you want to interface with an ASP.NET Core app or C# service. You can read more about RabbitMQ and download builds from rabbitmq.com.

RabbitMQ is more modern than systems such as MSMQ and includes features such as an HTTP API and a web admin management interface. Along with HTTP and AMQP, it also supports the Simple Text Oriented Messaging Protocol (STOMP) and MQTT, which is useful for lightweight Internet of Things (IoT) hardware applications. All of these protocols improve interoperability with other messaging systems and they can normally be secured using standard TLS. The web management interface shows you how many messages are flowing through your queues, and how they are configured. It also allows you to administer your queues (tasks such as purging or deleting messages) and looks something like the following screenshot:


Queuing frameworks and libraries

You will typically want to use a prebuilt client library or framework to interact with your message queues. There are official libraries in many different languages for the various queuing systems, which offer low-level access. For example, RabbitMQ has an official .NET/C# client library. However, there are also other opinionated clients and frameworks, which offer a higher level of abstraction for common messaging tasks. For example, NServiceBus (NSB), which supports RabbitMQ, MSMQ, SQL Server, and Azure, is a commercial offering. A free alternative to NSB is MassTransit (masstransit-project.com), which is a lightweight service bus and distributed application framework. It has also spun out the super convenient Topshelf framework (topshelf-project.com), which makes creating Windows services really easy. Neither yet runs on .NET Core, but support for both of these projects is in progress.


    One interesting feature of MassTransit (and NSB) is support for sagas. A saga is a complex state machine that allows you to model the story of an entire workflow. Rather than defining individual messages and documenting how they all fit together, you can implicitly capture the golden path and error flows within a saga. There is also the excellent open source library EasyNetQ, which makes implementing pub/sub on RabbitMQ trivial. You can read about it at EasyNetQ.com. Unfortunately, neither the official RabbitMQ client nor EasyNetQ support .NET Core at the time of writing. However, work is in progress for the official client and EasyNetQ has the issue logged. The RabbitMQ team is working on a new, asynchronous official .NET client, which will only target .NET Core and the Task Parallel Library (TPL). We covered the TPL, which is part of the parallel extensions that also include PLINQ, in Chapter 6, Understanding Code Execution and Asynchronous Operations. Hopefully, by the time you read this, things will have stabilized and you will be able to use RabbitMQ with .NET Core. Otherwise, you can use the traditional .NET Framework, which may currently be more suitable for the enterprise style applications that normally require message queuing. However, you could look into using RestBus, which is a RabbitMQ library that supports ASP.NET Core. You can read more about it at restbus.org; it also supports both Web API and ServiceStack. Library and framework support for .NET Core and ASP.NET Core can change rapidly, so check ANCLAFS.com for the latest information. Feel free to help out and contribute to this list or to any of the open source projects that need porting.
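To give a flavor of the EasyNetQ API mentioned above, a hedged sketch (using the classic, pre-.NET Core API and a hypothetical message contract) looks something like this:

    using System;
    using EasyNetQ;

    // A message contract shared by the publisher and its subscribers.
    public class ProductUpdated
    {
        public int ProductId { get; set; }
    }

    // Elsewhere, typically in separate processes:
    var bus = RabbitHutch.CreateBus("host=localhost");

    // Publisher side.
    bus.Publish(new ProductUpdated { ProductId = 42 });

    // Subscriber side: the subscription id identifies this consumer group.
    bus.Subscribe<ProductUpdated>("cache-invalidator", message =>
    {
        Console.WriteLine("Product {0} changed", message.ProductId);
    });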

Summary

In this chapter, we have investigated the various tools and methods used for caching and message queuing. These two techniques offer different ways of improving the performance of your system by moving data to other locations and not having one massive monolith do everything. These are both advanced topics and difficult to cover in such a small space. Hopefully, you have been introduced to some fresh ideas that can help you with solving problems in original ways. If you have discovered a new technology that you think will assist, you're encouraged to read the documentation and specifications for all of the implementation details.


    However, before you dive in, you should understand that advanced techniques are complex and have downsides, which can reduce your development speed. In the next chapter, we'll learn about these downsides and discover approaches for managing complexity such as microservices.


8

The Downsides of Performance-Enhancing Tools

A lot of the topics that we covered in this book improve performance at a cost. Your application will become more complicated and harder to understand or reason about. This chapter discusses these trade-offs and how to mitigate their impact. You should implement many of the approaches that you have learned so far in this book only if you require them and not just because they are interesting or challenging. It's often preferable to keep things simple if the existing performance is good enough.

You will learn how to make pragmatic choices about what technologies and techniques you should use. You'll also see how to manage the complexities if you choose to use advanced methods. Topics covered in this chapter include the following:

Managing complexity with frameworks and architecture
Building a healthy culture to deliver high performance
Distributed debugging and performance logging
Understanding statistics and stale data

Many books and guides only focus on the positives of new tools and frameworks. However, nothing comes for free, and there is always a penalty, which may not be immediately obvious. You may not feel the effects of the choices that you make, particularly in technical architecture, for a long time. You might not discover that a decision was bad until you try to build on it, perhaps years later.


Managing complexity

One of the main problems with performance-enhancing techniques is that they typically make a system more complicated. This can make a system harder to modify, and it may also reduce your productivity. Therefore, although your system runs faster, your development is now slower.

We commonly find this complexity problem in enterprise software, although usually for different reasons. Typically, many unnecessary layers of abstraction are used, supposedly to keep the software flexible. Ironically, this actually makes it slower to add new features. This may seem counterintuitive, until you realize that simplicity makes change easier. There's a satirical enterprise edition of the popular programmer interview coding test FizzBuzz, which is available at fizzbuzz.enterprises. It's good inspiration for how not to do things.

If you don't need a feature yet, then it's often best to leave it out rather than building it just in case you might need it in the future. The more code you write, the more bugs it will have, and the harder it will be to understand. Over-engineering is a common negative psychological trait that is easy to fall victim to if you aren't aware of it, and marketers often exploit this. For a non-software example, four-wheel drive SUVs are sold to people who will never need their off-road capabilities on the false premise that they may potentially come in useful someday. Yet the financial, safety, environmental, and parking convenience costs outweigh this supposed benefit because the capability is never used.

This development advice is often termed You Aren't Going to Need It (YAGNI), from the extreme programming (XP) philosophy. Although we sometimes use slightly different words, the meaning is the same. YAGNI advocates keeping things simple and only building what you immediately need. This doesn't mean that you should make your software hard to modify. It's still important to stay flexible, just don't add features before you need them. For example, adding an abstraction interface when there is only a single implementation may be overkill. You could easily add it along with the second implementation if and when you build it.

It's difficult to move fast yet not break things when doing so. How you achieve high reliability in addition to a consistently good development speed will depend on many things that are specific to your situation, such as your team size, organizational structure, and company culture.


One method is to embrace change and develop a system where you can refactor your code with confidence. Using a statically-compiled language, such as C#, is a good start, but you should also have a comprehensive test suite to avoid regressions. You should design a system so that it is loosely coupled, which means that you can change parts in isolation without a lot of knock-on effects. This also makes the system easier to unit test, and unit tests are invaluable for refactoring with confidence and preventing functional regressions. We will cover testing and automation with a Continuous Integration (CI) workflow in the next chapter. In this chapter, we will talk more about various architectural styles that can help you maintain your application.

Understanding complexity

When learning about new ways of doing things, you should avoid doing them without understanding the reasons. You should know the benefits and downsides and then measure the changes to prove that they are what you expect. Don't just blindly implement something and assume it improves the situation. Try to avoid cargo cult programming and always objectively evaluate a new approach.

Cargo cult programming is the practice of emulating something successful but failing to understand the reasons why it works. Its name comes from the cargo cults of the Pacific, who built false airstrips after the Second World War to encourage cargo delivery. We use it to describe many things where correlation has been confused with causation. One example is a company encouraging long hours to deliver a project because they have heard of successful projects where employees worked long hours. However, they fail to understand that the successful project and long hours are both independent byproducts of a highly motivated and competent workforce, and they are not directly related.

It's important to keep code readable, not just for others on your team or new members but also for yourself in the future (when you will forget how something works and why you wrote it in this way). This doesn't simply mean writing helpful explanatory comments in the code, although this is a very good practice. It also applies to source control comments and keeping documentation up to date. Readability also involves keeping things simple by only making them as complex as they need to be and not hiding functionality in unnecessary layers of abstraction. For example, avoid using clever programming techniques to reduce the file line count when a standard structure (for example, a loop or if statement) would be more readable and only slightly longer.


It helps to have a standard way of doing things in your team to avoid surprises. Using the same method everywhere can be more valuable than finding a better way of doing it and then having lots of different ways. If there is consensus, then you can go back and retrofit the better method wherever you need it.

Complexity reduction

There are various solutions to manage the complexity that performance-enhancing techniques can add. These usually work by reducing the amount of logic that you need to think about at any one time by hiding the complications. One option is to use frameworks that standardize how you write your application, which can make it easier to reason about. Another approach is to use an architecture that allows you to only think about small parts of your code base in isolation. By breaking up a complex app into manageable chunks, it becomes easier to work with. This idea of modularity is related to the Single Responsibility Principle (SRP), which is the first of the SOLID principles (the others are Open/Closed, Liskov substitution, Interface segregation, and Dependency inversion). It is also similar to the higher level Separation of Concerns (SoC) and to the simplicity of the Unix philosophy. It is better to have many tools that each do one thing well, rather than one tool that does many things badly.

Frameworks

Frontend frameworks, such as React (created at Facebook), are designed to reliably build web application views in JavaScript. These help large teams work on a project by simplifying the data flow and standardizing the approach. React can be integrated with ASP.NET Core using the ReactJS.NET project (reactjs.net). We can use React Native to build cross-platform apps that share code across iOS, Android, and the Universal Windows Platform (UWP), targeting phone, desktop, and Xbox (github.com/ReactWindows). There's also CodePush to let you live update your JavaScript apps (including Cordova), without going through an app store (microsoft.github.io/code-push). If you prefer coding in C#, then you can build your cross-platform mobile apps with Xamarin (which is now free after Microsoft acquired it). However, we won't go further into any of these technologies in this book.


On the backend, we have the server-side frameworks of .NET Core and ASP.NET Core. Along with C# features, these provide convenient ways of simplifying historically complicated features. For example, the async and await keywords hide a lot of the complicated logic associated with asynchronous programming, and lambda functions concisely express intent (a brief sketch follows at the end of this section). We covered many of these features earlier in this book, so we won't go over them again here. We also highlighted libraries that can make your life easier by hiding boilerplate code for complex operations, for example, EasyNetQ and RestBus.

Hiding a complex process is never perfect, and you will occasionally come across abstractions that leak some of their implementation detail. For example, when handling exceptions, you may find that the issue you're interested in is now wrapped in an aggregate exception. If you're not careful, then your error logs may no longer contain the detail that you desire.

What we have yet to talk about in detail is the architecture of a web application. Splitting a monolithic system up into discrete parts can not only improve performance, but if done right, it can also make it easier to maintain.
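As a reminder of how much ceremony async and await remove, here is a minimal sketch of an asynchronous MVC action; the controller name, URL, and dependency are hypothetical placeholders:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class NewsController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    // The await keyword releases the request thread while the remote call completes,
    // without any of the manual callbacks or continuations this once required.
    public async Task<IActionResult> Index()
    {
        // The URL stands in for any slow, I/O-bound dependency.
        var headlines = await Client.GetStringAsync("https://example.com/headlines");
        return Content(headlines);
    }
}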

Architecture

In the previous chapter, when discussing message queuing, we briefly covered the microservices architecture. This style is a more modern reimagining of the traditional Service Oriented Architecture (SOA), and although using reliable MQ communication is preferred, we can also perform this with representational state transfer (RESTful) HTTP APIs.

Typically, we build a traditional web app as a single application or monolith. This is common if the app has grown organically over an extended period of time, and this is a perfectly acceptable practice. It's a poor decision to over-engineer too early before there is any need, which may never materialize. Excessive popularity is a nice problem to have, but don't optimize for this prematurely. This isn't an excuse to make things unnecessarily slow, so be sure to understand the tradeoffs involved.


    Using a monolithic architecture is not an excuse to build something badly, and you should plan for expansion, even if you do not implement it immediately. You can keep things simple while still allowing for future growth. Although the application is a single unit, you should split the code base into well-organized modules, which are linked together in a simple, logical, and easy-to-understand way. Refer to the SOLID principles, mentioned previously. If a monolithic application can't easily scale to meet user demand and is performing poorly as a result, then you can split it up into smaller separate services. You may also wish to split an app up if it has become too cumbersome to iterate on quickly and development speed has slowed.

Monolith versus microservices

The following diagram shows some differences between a typical monolith and a microservices architecture:


Here, the user makes a request to an application running on a web farm. We have omitted firewalls, load balancers, and databases for clarity, but multiple web servers are shown to illustrate that the same codebase runs on multiple machines. In the initial monolith architecture, the user communicates directly with a single web server, ideally a different one for each request/response pair. However, if the application was poorly designed and holds state in memory, then sticky sessions may cause the load to pool on certain servers.

The second example in the diagram, of a microservices architecture, is obviously more complicated but also more flexible. The user again sends a request to a web server, but instead of doing all of the work, the server puts a message into a queue. The work in this queue is distributed between multiple backend services, of which the first one is busy, so a second service picks up the message. When the service completes, it sends a message to an exchange on the message broker, which uses a pub/sub broadcast to inform all of the web servers.

One added piece of complexity is that the web server will typically have already sent the response to the user's original request, so you need to consider the user experience (UX) more carefully. For example, you can display a progress indicator and update its status over an asynchronous WebSocket connection.
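As a rough sketch of the queue-then-respond pattern just described, a web action might hand the work off and reply straight away; IWorkQueue here is a hypothetical abstraction over whichever broker you choose, not a real library type:

using Microsoft.AspNetCore.Mvc;

// Hypothetical abstraction over a message broker (RabbitMQ, Azure Service Bus, and so on).
public interface IWorkQueue
{
    void Enqueue(string jobType, object payload);
}

public class ReportsController : Controller
{
    private readonly IWorkQueue queue;

    public ReportsController(IWorkQueue queue)
    {
        this.queue = queue;
    }

    [HttpPost]
    public IActionResult Generate(int reportId)
    {
        // Hand the slow work to a backend service via the queue.
        queue.Enqueue("generate-report", new { reportId });

        // 202 Accepted tells the client the work is queued, not finished;
        // progress can later be pushed over a WebSocket or polled for.
        return StatusCode(202);
    }
}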

Architecture comparison

The monolith approach is simple, and you can just add more web servers to handle additional users. However, this approach can become cumbersome as an application (and development team) grows larger because the slightest change requires a full redeployment to each web server. In addition, a monolith is easy to scale vertically (up) but hard to scale horizontally (out), which we covered previously. A monolith is, however, easier to debug than a distributed system, so if you do split things up, you need to be careful and have good monitoring and logging. You don't want any random outage investigation to turn into a murder mystery hunt because of unnecessarily implemented microservices.


Historically, Facebook had a deployment process that consisted of slowly compiling their PHP code base to a gigantic gigabyte-scale binary (for runtime performance reasons). They then needed to develop a modified version of BitTorrent to efficiently distribute this huge executable to all of their web servers. Although this was impressive infrastructure engineering, it didn't address the root cause of their technical debt problem, and they have since moved on to better solutions, such as the HipHop Virtual Machine (HHVM), which is similar to the .NET CLR.

If you wish to practice continuous delivery and deploy multiple times a week (or even many times a day), then it's advantageous to break your web application up. You can then maintain and deploy each part separately, communicating with each other using messages against an agreed API. This separation can also help you use agile development methodologies, for example, using many smaller teams rather than big teams, because smaller teams perform better.

Your instrument of control to scale a monolith is very crude, as all work is done in one place. You can't scale parts of your app independently of the other components. You can only scale the whole thing, even if the high load is concentrated in a small part. This is analogous to a central bank only having control of a single interest rate as a lever, which affects many things at once. If your app is distributed, then you only need to scale the part that requires it, avoiding over-provisioning and reducing costs.

A well-used (and often rephrased) quote from Abraham Maslow goes: "I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." This is known as the law of the instrument and is related to confirmation bias. If you only have one tool, then you are likely to use this tool for everything and only see things that support your existing ideas. This commonly applies to tools and frameworks, but it can also apply to scaling techniques.

The first step towards modularity may be to split a big web application into many smaller web apps so that you can deploy them separately. This strategy can have some benefits, but it also has many limitations. Some logic may be unsuitable to host in a web app, for example, long-running processes, such as those monitoring a filesystem or used to manipulate media.


    You may also find that you need to duplicate code that was shared in the original monolith. If you adhere to the Don't Repeat Yourself (DRY) doctrine, then you might extract this functionality to a library. Yet, now you have dependencies and versioning to manage. You'll also need processes to build, package and host your library in a private repository, all of which can slow down development and reduce your agility. Sometimes, we also refer to DRY as Duplication Is Evil (DIE), and this is the sensible idea that an implementation should only occur in a unique location. If other code requires this functionality, then it should call the original function and not have copied code pasted in. This means that you only need to make a change in a single place to apply it everywhere. A more advanced approach is to extract functionality into a separate service, which can handle many different web applications. If designed correctly, then this service won't need to have any knowledge of these apps and will simply respond to messages. This allows you to add a new web app without making changes to the code of other applications or services.

Refactoring

Refactoring a monolith into services can be tricky if the application has become a tightly-coupled big-ball-of-mud, but if it's been well-built (with plenty of test coverage and loosely-coupled code), then it shouldn't be too taxing. There's a big difference between a well-built monolith and a big-ball-of-mud made with little thought.

It's worth expanding on test coverage, as unit tests are essential for successful refactoring without introducing regressions or new bugs. Unit tests allow you to refactor and tidy up with confidence, and they prevent the creation of fragile code that developers are afraid to touch. We will cover testing in greater detail, including automation, in Chapter 9, Monitoring Performance Regressions.

Both of these design patterns may appear superficially similar, but internally, it's a different story. The following diagram illustrates the differences between a well-architected monolith and a messy big-ball-of-mud:


    The large boxes represent the code running on each web server (the Web Server N boxes from the monolith example in the earlier diagram). From the outside, they look the same, but the difference is in the internal coupling of the code. The big-ball-of-mud is a tangled mess with code referencing other functions throughout the application. The monolith is well-structured with a clear separation of concerns between different modules. Changes to code in the big-ball-of-mud can have unexpected side effects because other parts of it may rely on the implementation details of what you are modifying. This makes it brittle, difficult to alter, and developers may be afraid of touching it. The well-built monolith is easy to refactor and split out into separate services because the code is neatly-organized. It uses abstract interfaces to communicate between modules, and code doesn't reach into another class's concrete implementation details. It also has excellent unit test coverage, which runs automatically.


Although both are a single code base, the quality of the monolith is much higher because it is well-structured internally. Good design is important for future-proofing, as we build in and allow for expansion. The monolith hasn't been split up too early (before scaling was required), but the design makes this easy to do later. In contrast, the big-ball-of-mud has accumulated a large amount of technical debt, which it needs to pay off before we can make any further progress.

Technical debt (or tech debt for short) is the concept of not finishing a job or cutting corners, which could lead to more difficulties until it is paid back. For example, failing to properly document a system will make altering it later more difficult. Tech debt is not necessarily a bad thing if it is deliberately taken on in full knowledge, logged, and paid back later. For example, waiting to write documentation until after a release can speed up delivery. However, tech debt that is not paid back or is accumulated without knowledge (simply due to sloppy coding) will get worse over time and cause bigger issues later.

The best way to deliver a high-quality and flexible application (that's easy to refactor) is to have a competent and conscientious development team. However, these attributes can often be more about culture, communication, and motivation than simply skill or raw talent. Although this isn't a book about team management, having a healthy culture is very important, so we'll cover a little of this here. Everyone in an operations and software development department can help create a positive and friendly culture, even if you usually need buy-in from higher up as well.

Tools are also very useful in refactoring and testing. We covered the Visual Studio IDE and ReSharper plugin previously, but there are many more tools for testing. We will cover more testing tools, including automation, in Chapter 9, Monitoring Performance Regressions.

A culture of high performance

If you want to achieve high performance, then it's important to foster a company culture that encourages this and recognizes performance as vital. Culture can't just come from the bottom up, only involving engineers; it also needs to come from the top down, and management must buy in to the performance prerogative. This section is not very technical, so feel free to skip it if you don't care about management or the human side of software development.


A blameless culture

The most important attribute of a high-performance culture is that it should be open and blameless. Everyone needs to be focused on achieving the best possible outcomes through measuring and learning. Attributing fault to individuals is toxic to delivering great software, and this is not only the case when it comes to performance. If something goes wrong, then it is a process problem, and the focus should be on improving the process and preventing repeat mistakes in the future, for example, by automating it. This is similar to how safety-critical industries, such as air travel, behave because they recognize that blaming people discourages them from raising issues early, before a disaster occurs.

A related philosophy is the Japanese process of Kaizen, which encourages continuous improvement by everyone. The car manufacturer Toyota pioneered Kaizen practices to improve the efficiency of their production line, and most automotive companies and many other industries have since adopted them.

Some industries also have processes to encourage whistle-blowing that protect the individuals raising concerns. However, if this is required in web application development, then it's a sure sign that the culture needs work. Developers should feel that they are able to directly raise concerns, bypassing their line manager, without consequence. If everyone's opinion is respected, then this shouldn't even be necessary.

Intellectual dishonesty

If team members get defensive when ideas are challenged, then this is a sign that things may not be working well. Everybody makes mistakes and has gaps in their knowledge, a truth the best engineers embrace. You should strive for a culture where everyone is open to new ideas and is always asking questions. If people are unable to accept constructive criticism and have their ideas challenged, then they may lack confidence and be covering up a lack of competence. Experienced developers know that you never stop learning; they admit their ignorance and are always open to suggestions for improvement.


Being closed to suggestions alters the behavior of others, and they will stop raising small issues early. This results in open secrets about poor quality, and the first anyone knows of a problem is at release time, at which point everything is much worse (or on fire). This is not an excuse to be nasty to people, so always try to be nice and gently explain the reasons behind your criticism. It's easy to find fault in anything, so always propose an alternative approach. If you are patient, then a reasonable person will be grateful for the learning opportunity and appreciate gaining experience. A good rule to follow is don't be a jerk; treat others as you would like to be treated, so be kind and think about how you would feel if the situation was reversed. Just remember that being nice is not always compatible with doing the right thing.

A little self-deprecation can go a long way to making you more approachable, rather than simply dictating the one true way. However, you should make it clear when something is a joke or tongue in cheek, especially when dealing with cultures that are more direct or when communicating textually. For example, North Americans are often less subtle and not as sarcastic as the British (who also spell some words differently, and some would say more correctly). Obviously, use your own judgment because this may be terrible advice and could cause offense or, even worse, a full-on diplomatic incident. Hopefully, it is self-evident that this whole paragraph is tongue in cheek.

People who have integrity and confidence in their ideas can afford to be modest and self-deprecating, but internal company culture can influence this too. A particularly bad practice is to conduct performance reviews by laddering (also known as stack ranking), which involves putting everyone in order, relative to everybody else. This is toxic because it rewards people who focus more on marketing themselves than those who recognize the deficiencies in their technical skills and try to improve them. In the pathological case, all of the best people are forced out, and you end up with a company full of sociopathic sharp suits, who are technically illiterate or even morally bankrupt.

Slow down to go faster

Sometimes, the company must allow the development team to slow down on feature delivery in order to focus on performance and resolving technical debt. The team should be given time to be thoughtful about design decisions so that they can think carefully and avoid premature generalization or careless optimization.


Performance is a significant selling point of software, and it is much easier to build quality in throughout the development process than to add it in a polishing phase at the end. There is a significant body of evidence that suggests that good performance improves Return on Investment (RoI) and creates customers. This is especially true on the web, where poor performance decreases conversion rates and search engine ranking.

Having a healthy culture is not only important for the runtime performance of your software but also for the speed at which you can develop it. The team should be encouraged to behave rigorously and write precise, but also concise, code. You get what you measure, and if you only measure the rate of feature delivery, or even worse, simply Lines of Code (LoC) written, then quality will suffer. This will hurt you in the long run and is a false economy.

False economies are when you make short-term cost-saving measures that actually lose you more money in the long term. Examples of this are skimping on hardware for developers or interrupting someone in the middle of coding with something trivial, which could easily wait until later, forcing them to switch contexts. Another non-software example is a shortsighted government making cuts to investment in research, sacrificing long-term growth for short-term gain, possibly due to the lack of any long-term economic plan.

Hardware is significantly cheaper than developer time. Therefore, if everyone on your team doesn't have a beefy box and multiple massive monitors, then productivity is being needlessly diminished. Even over a decade ago, Facebook's software developer job adverts listed dual 24-inch widescreen monitors as a perk.

From the ground up

In a healthy and progressive culture, it can be tempting to rewrite poor quality software from scratch, perhaps in the latest trendy framework, but this is usually a mistake. Released software is battle-hardened (no matter how badly it was built), and if you rewrite it, then you will probably make the same mistakes again, for example, re-implementing bugs that you already patched. This is especially true if the software was not built with good unit test coverage, which can help prevent regressions.


    The only case where you could reasonably rewrite an application from the ground up is if you had deliberately made a prototype to explore the problem space with the sole intention of throwing it away. However, you should be very careful because if you've actually built a Minimum Viable Product (MVP) instead, then these have a habit of sticking around for a long time and forming the foundations of larger applications. A better approach to a full rewrite is to add tests to the application (if it doesn't already have them) and gradually refactor it to improve the quality and performance. If you built an application in such a way as to make it difficult to unit test, then you can start with User Interface (UI) tests, perhaps using a headless web browser. We will cover testing more, including performance testing, in the next chapter.

Shared values

Culture is really just a set of shared values, such as openness, sustainability, inclusivity, diversity, and ethical behavior. It can help to have these values formally documented so that everyone knows what you stand for. You may have a progressive open salary policy so that others can't use secret earning information as a tool to pay people less. However, this would need to apply universally, as it's unhealthy to have multiple conflicting cultures because this can precipitate an us-versus-them attitude.

There's plenty more to say about culture, but, as you can see, there are many competing concerns to balance. The most important idea is to make intentional and thoughtful tradeoffs. There isn't one correct choice, but you should always be conscious of the consequences and appreciate how tiny actions can alter team performance.

The price of performance

Developers should have an idea of the available budget for performance and understand the cost of the code that they write, not just in execution throughput but in readability, maintainability, and power efficiency. Throwing more cores at a unit of work is not nearly as good as refactoring it to be simpler. Efficiency has become increasingly important, especially with the rise of mobile devices and time-based usage billing for cloud computing. Parallelizing an inefficient algorithm may solve a performance problem in the time domain, but it's a crude brute-force approach, and altering the underlying implementation may be better.


Less is often more, and sometimes doing nothing is the best approach. Software engineering is not only about knowing what to build but what not to build. Keeping things simple helps others on your team use your work. You should aim to avoid surprising anyone with non-obvious behavior. For example, if you build an API, give it conventional defaults, and if an operation could potentially take a long time, then make the methods asynchronous to indicate this fact. You should aim to make it easy to succeed and hard to fail when building on your code. Make it difficult to do the wrong thing; for example, if a method is unsafe, name it to document this fact.

Being a competent programmer is only a small part of being a good developer; you also need to be helpful and proficient at communicating clearly. In fact, being an expert programmer can be a downside if you don't deliberately keep things simple to aid the understanding of others on your team. You should always balance performance improvements against the side effects, and you shouldn't make them at the expense of future development efficiency without good reason.

Distributed debugging

Distributed systems can make it difficult to debug problems, and you need to plan for this in advance by integrating technology that can help with visibility. You should know what metrics you want to measure and what parameters are important to record.

Running a web application isn't a case of deploy and forget, as it might be with mobile apps or desktop software. You will need to keep a constant automated eye on your application to ensure that it is always available. If you monitor the correct performance metrics, then you can get early warning signs of problems and can take preventative action. If you only measure uptime or responsiveness, then the first you may know of a problem is an outage notification, probably at an unsociable hour.

You may outsource your infrastructure to a cloud-hosting company so that you don't have to worry about hardware or platform failures. However, this doesn't completely absolve you of responsibility, and your software will still need continuous monitoring. You may need to architect your application differently to work in harmony with your hosting platform and scale or self-heal when issues arise.


If you design your system correctly, then your web application will run itself, and you'll rarely get notified of actions that require your attention. If you can successfully automate all of the things, rather than babysitting a live deployment, then that's more time you can use to build the future.

In a distributed architecture, you can't simply attach a debugger to the live web server, not that this is a good idea even when possible. There is no single live server anymore; there are now many live servers and even more processes. To get a holistic picture of the system, you will need to simultaneously examine the state of multiple modules.

There are many tools that you can use to help with centralizing your debug information. You can retrofit some of them. However, to get the most out of them, you should decide what to measure upfront and build telemetry capabilities into your software from the start.

Logging

Logging is vital to a high-performance application, so while it is true that logging adds some overhead and can slow down execution, omitting it would be short-sighted and a false economy. Without logging, you won't know what is slow and requires improvement. You will also have other concerns, such as reliability, for which logging is essential.

Error logging

You may have used (or at least be familiar with) the excellent ASP.NET package called Error Logging Modules and Handlers (ELMAH) to catch unhandled exceptions (elmah.github.io). ELMAH is great for existing applications, as you can drop it into a live running web app. However, it's preferable to have error logging built into your software from the start.

Unfortunately, ASP.NET Core does not support ELMAH, but there is a similar package called Error Logging Middleware (ELM). Adding this to your web application is just as simple as installing Glimpse, but it is really just a prototype and doesn't have all the features of ELMAH. First, add the Microsoft.AspNetCore.Diagnostics.Elm NuGet package to your project, as shown in the following image:


Then, in the ConfigureServices method of the Startup class, add the following line of code:

services.AddElm();

You also need to add the following lines to the Configure method of the same class:

app.UseElmPage();
app.UseElmCapture();
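If you only want these endpoints active during development, a minimal sketch of the Configure method (in the standard Startup class) might look like the following:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        // Only expose the ELM pages and capture middleware on developer machines.
        app.UseElmPage();
        app.UseElmCapture();
    }

    // ...the rest of the pipeline (static files, MVC, and so on) goes here.
}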

As the preceding sketch shows, to start with you may want to put these in the env.IsDevelopment() if statement so that they are only active on your workstation. If you use ELM in production, then these logs will definitely need securing. You can now visit the /Elm path of your web application in your browser to see the logs, as shown in the following image. Use the search box to filter results, click on v at the end of each line (not shown) to expand the entry's details, and click on ^ to collapse it again:


When using ELM, ELMAH, or even the default error pages, you should be careful not to show detailed exceptions or stack traces to real users. This isn't simply because they're unfriendly and unhelpful; they are a genuine security concern. Error logs and stack traces can reveal the internal workings of your application, which malicious actors can exploit. ELM is fairly basic, but it can still be useful. However, there are better solutions already built into the default templates, such as the integrated support for logging and Application Insights.


Application Insights

Application Insights allows you to monitor the performance of your application and view requests or exceptions. You can use it with Azure, but it will also work locally without needing an Azure account. You can easily enable Application Insights by keeping the relevant checkbox ticked, as shown in the following image, when you create a new ASP.NET Core web application project in Visual Studio, and you don't need to sign in to Azure to do this:

The Application Insights template option is only available under the Web templates, not the .NET Core templates.

    Build and run the new project, then open the Application Insights Search window to see the output. Navigate around the web app in your browser, and you will see records start to appear, which should look something like the following:


    If you don't see any records in the output, then you may need to update the NuGet package (Microsoft.ApplicationInsights.AspNetCore) or click on the Search icon.


    You can filter by event type, for example, requests or exceptions, and there are more detailed filters to refine your search. You can select an individual record to see more details, which will look something like the following:

    Although you can access this information locally without Azure, you can choose to select Configure Application Insights and set up logging of data to your Azure account.


    If you know what you're doing, then you can use Advanced Mode and enter your Subscription ID and Instrumentation Key manually:

Secret configuration information, such as this, is now stored outside of the source tree to help prevent it leaking. You can now define secrets as environment variables or in a secrets store held in your user profile. Once you set up Azure, then you can view the telemetry results online and aggregate information from many diverse systems. We don't have space to cover this in more detail here, but you can find more information online at azure.microsoft.com/en-us/documentation/services/application-insights.

Integrated logging

Logging is now built into ASP.NET Core (in the Microsoft.Extensions.Logging package) as well as Dependency Injection (DI), and both are included in the default templates. This reduces the barrier to entry, and it's now trivial to use these helpful technologies in even the smallest of projects. Previously, you would have to add logging and DI libraries before you started, which could put many people off using them. If you didn't already have a standard choice for both, then you would need to wade through the plethora of projects and research the merits of each. You can still use your preferred libraries if you are already acquainted with them. It's just that there are now sensible defaults, which you can override.

Logging is configured in the Startup class, and if you use the standard web application template, then this will already be included for you. The logger factory reads settings from the appsettings.json file. In here, you can configure the logging level, and by default, the relevant section looks like the following:

"Logging": {
  "IncludeScopes": false,
  "LogLevel": {
    "Default": "Debug",
    "System": "Information",
    "Microsoft": "Information"
  }
}

This sets a very chatty log level, which is useful for development, but you will probably want to log only warnings and errors when you run this in production.

Along with Application Insights and logging, useful debugging tools included by default in the Startup class include a developer exception page, which is like an advanced version of the old Yellow Screen Of Death (YSOD) ASP.NET error page. Also included is a database error page, which can helpfully apply EF migrations and mitigates a common pitfall of previous EF code-first migration deployments.

To add the logger to your MVC home controller via constructor injection, you can use the following code (after adding using Microsoft.Extensions.Logging; to your using statements at the top of the file):

private readonly ILogger Logger;

public HomeController(ILoggerFactory loggerFactory)
{
    Logger = loggerFactory.CreateLogger<HomeController>();
}

After this, you can use Logger to log events inside action methods, as follows:

Logger.LogDebug("Home page loaded");

There are many more logging levels available, and overloaded methods so that you can log additional information, for example, exceptions. We won't go into any more detail here, but you don't simply have to log text events; you can also record execution times (perhaps with a stopwatch) or increment counters to see how often certain events occur. Next, we'll see how to view these numbers centrally and read them correctly. For more examples of how to use logging, you can examine the default account controller (if you included individual user accounts authentication in the template). For more information, you can read the documentation at docs.asp.net/en/latest/fundamentals/logging.html.
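As a rough illustration, the following sketch (using the Logger field injected earlier) records an exception with one of the overloads and times the work with a stopwatch; the action name and the DoImport call are hypothetical placeholders:

public IActionResult Import()
{
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
    try
    {
        DoImport(); // Placeholder for whatever work the action performs.
        return Ok();
    }
    catch (System.Exception ex)
    {
        // Passing the exception records the stack trace alongside the message.
        Logger.LogError(0, ex, "Import failed");
        return StatusCode(500);
    }
    finally
    {
        stopwatch.Stop();
        Logger.LogInformation("Import took {ElapsedMs} ms", stopwatch.ElapsedMilliseconds);
    }
}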

Centralized logging

Logging is great. However, in a distributed system, you will want to feed all of your logs and exceptions into a single location, where you can easily analyze them and generate alerts. One potential option for this is Logstash (elastic.co/products/logstash), which we fleetingly mentioned in Chapter 2, Measuring Performance Bottlenecks. If you prefer a more modular approach and want to record performance counters and metrics, then there is StatsD, which listens for UDP packets and pushes to Graphite for storage and graphing. You can get it at github.com/etsy/statsd, and there are a few .NET clients listed on the wiki, along with example C# code in the main repository.

You may wish to use message queuing for logging so that you can quickly put a logged event into a queue and forget about it, rather than directly hitting a logging server. If you directly call an API (and aren't using UDP), then make sure that it's asynchronous and non-blocking. You don't want to slow down your application by logging inefficiently.

There are also cloud options available, although the usual caveats about lock-in apply. AWS has CloudWatch, which you can read more about at aws.amazon.com/cloudwatch. Azure Diagnostics is similar, and you can integrate it with Application Insights; read more at azure.microsoft.com/en-us/documentation/articles/azure-diagnostics.


    There are other cross-platform cloud services available, such as New Relic or Stackify, but these can be quite expensive, and you may wish to keep your logging within your own infrastructure. You could shoehorn the data into analytics software, such as Google Analytics or the privacy-focused Piwik (which is open source and can be self-hosted), but these are less suitable because they're designed for a slightly different purpose.
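To make the StatsD option mentioned above more concrete, its wire format is a simple plain-text datagram of the form name:value|type. The following self-contained sketch sends a counter increment over UDP; the host name and metric name are placeholders, a real client would reuse sockets and support sampling, and depending on your target framework you may need the asynchronous SendAsync overloads instead:

using System.Net.Sockets;
using System.Text;

public static class StatsDSketch
{
    // Fire-and-forget: a lost UDP packet never blocks or fails the request.
    public static void Increment(string metric)
    {
        using (var udp = new UdpClient())
        {
            // "c" marks a counter; gauges ("g") and timers ("ms") use the same format.
            var payload = Encoding.UTF8.GetBytes(metric + ":1|c");
            udp.Send(payload, payload.Length, "statsd.example.local", 8125);
        }
    }
}

For example, StatsDSketch.Increment("web.home.hits") would bump a counter that Graphite can then graph over time.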

Statistics

When interpreting your collected metrics, it helps to know some basic statistics in order to read them correctly. Taking a simple mean average can mislead you and may not be as important as some other characteristics. When instrumenting your code to collect metadata, you should have a rough idea of how often particular logging statements will be called. This doesn't have to be exact, and a rough order of magnitude approximation (or Fermi estimate) will usually suffice. The question you should try to answer is how much of the data should be collected: all of it or a random sample? If you need to perform sampling, then you should calculate how big the sample size should be. We covered sampling in relation to SQL in Chapter 5, Optimizing I/O Performance, and the idea is similar here. Performance statistics require the same level of rigor as benchmarking does, and you can easily be misled or draw incorrect conclusions.

StatsD includes built-in support for sampling, but there are many other approaches available if you want to investigate them. For example, online streaming algorithms and reservoir sampling are two options. The important thing to keep in mind for performance is to use a fast random number generator (RNG). As sampling doesn't need to be cryptographically secure, a pseudorandom number generator (PRNG) is fine. In .NET, you can use new Random() for a PRNG, rather than the more secure option of RandomNumberGenerator.Create(). See Chapter 6, Understanding Code Execution and Asynchronous Operations, for more examples of how to use both of these.

When looking at your results, the outliers may be more interesting than the average. Although the median is more valuable than the mean, in this case, you should really look at the percentiles, for example, the 90th, 95th, and 99th percentiles. These data points may represent only a small fraction of your data, but at scale they can occur frequently. You want to optimize for these worst-case scenarios, because if your users experience page loads taking over five seconds ten percent of the time (even though the average looks fast), then they may go elsewhere.
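As a simple illustration of the percentile point above, the following sketch uses the nearest-rank method on a collected list of response times; a real system would use a streaming or reservoir-based approach rather than keeping every sample in memory:

using System;
using System.Collections.Generic;
using System.Linq;

public static class PercentileSketch
{
    // Nearest-rank percentile: sort the samples and take the value at the computed rank.
    public static double Percentile(IReadOnlyCollection<double> samples, double percentile)
    {
        if (samples.Count == 0)
        {
            throw new ArgumentException("At least one sample is required", nameof(samples));
        }

        var sorted = samples.OrderBy(x => x).ToArray();
        var rank = (int)Math.Ceiling((percentile / 100.0) * sorted.Length) - 1;
        return sorted[Math.Max(rank, 0)];
    }
}

For example, Percentile(responseTimesInMs, 95) gives the time that 95 percent of requests beat, which is usually far more revealing than the mean.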


There's much more to say on statistics, but beyond the basics, there are diminishing returns. If your math is rusty, then it is probably wise to have a refresher (Wikipedia is great for this). Then, you can explore some more advanced techniques, for example, the high-performance HyperLogLog (HLL) algorithm, which can estimate the size of a large set of elements using very little memory. Redis supports the HLL data structure (with the PFADD, PFCOUNT, and PFMERGE commands). For more on different data structures, refer to Chapter 6, Understanding Code Execution and Asynchronous Operations.

This is only a brief introduction to performance logging, but there is much more for you to explore. For example, if you want to standardize your approach, then you can look into APDEX (apdex.org), which sets a standard method to record the performance of applications and compute scores.
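Returning to the HyperLogLog commands mentioned above, the StackExchange.Redis client exposes them directly. The following sketch counts unique visitors; the key name is an assumption:

using StackExchange.Redis;

public static class UniqueVisitorSketch
{
    public static long RecordAndCount(IDatabase db, string userId)
    {
        // PFADD: a few kilobytes of state can track millions of distinct values.
        db.HyperLogLogAdd("visitors:today", userId);

        // PFCOUNT: returns an estimate (typically within about one percent), not an exact count.
        return db.HyperLogLogLength("visitors:today");
    }
}

In a real application, a ConnectionMultiplexer.Connect call (with your own connection string) would provide the IDatabase instance.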

Managing stale caches

It's worth providing a quick reminder to still consider simple issues after all of this complexity. It is far too easy to get lost in the details of a complicated bug or performance tweak and miss the obvious. A good technique to help with this is rubber duck debugging, which gets its name from the process of explaining your problem to a rubber duck on your desk. Most of us have experienced solving a problem after asking for help, even though the other person hasn't said anything. The process of explaining the problem to someone (or something) else clarifies it, and the solution becomes obvious.

If something appears to not be working after a fix, then check simple things first. See whether the patch has actually been delivered and deployed. You may be seeing stale code from a cache instead of your new version. When managing caches, versioning is a useful tool to help you identify stale assets. You can alter filenames or add comments to include a unique version string. This can be Semantic Versioning (SemVer), an ISO date and timestamp, or a hash of the contents. For more on cache busting, refer to Chapter 7, Learning Caching and Message Queuing. SemVer is a great way to version your code because it implicitly captures information on compatibility and breaking changes. You can read more about SemVer at semver.org.
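As a small sketch of the content-hash approach, the following derives a version string from a file's bytes so that the URL only changes when the asset does (the path handling is deliberately simplified). Note that the built-in ASP.NET Core script and link Tag Helpers can do something similar for you with the asp-append-version attribute.

using System;
using System.IO;
using System.Security.Cryptography;

public static class AssetVersionSketch
{
    // Hash the file contents so the query string changes only when the file does.
    public static string VersionedUrl(string physicalPath, string requestPath)
    {
        using (var sha256 = SHA256.Create())
        using (var stream = File.OpenRead(physicalPath))
        {
            var hash = Convert.ToBase64String(sha256.ComputeHash(stream));
            // A short, URL-safe slice of the hash is enough to bust the cache.
            var version = hash.Substring(0, 8).Replace('+', '-').Replace('/', '_');
            return requestPath + "?v=" + version;
        }
    }
}

For example, VersionedUrl("wwwroot/css/site.css", "/css/site.css") might return something like /css/site.css?v=pQn7f2Aa.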


Summary

In this chapter, we saw that there are always downsides to every decision and every choice has a cost attached, because nothing comes for free. There are always tradeoffs involved, and you need to be aware of the consequences of your actions, which may be small and subtle.

The key lesson is to take a thoughtful and rigorous approach to adding any performance-enhancing technique. Measurement is crucial to achieving this, but you also need to know how data can mislead you if you collect or interpret it incorrectly.

In the next chapter, we will continue with the measurement theme and learn how to use tests to monitor for performance regressions. You will see how to use testing (including unit testing), automation, and continuous integration to ensure that once you solve a performance problem, it stays solved.


9

Monitoring Performance Regressions

This chapter will cover writing automated tests to monitor performance, along with adding these to a Continuous Integration (CI) and deployment system. By constantly checking for regressions, you'll avoid accidentally building a slow application. We'll also cover how to safely load test a system without forcing it offline and how to ensure that tests mimic real-life usage as far as possible. Topics covered in this chapter include the following:

Profiling
Load testing
Automated testing
Performance monitoring
Continuous integration and deployment
Realistic environments and production-like data
UI testing with Selenium and Phantom headless browsers
A/B testing for conversion optimization
Cloud services and hosting
DevOps


    You will see how to automate performance monitoring and testing so that you don't need to remember to keep doing it manually. You'll learn how to catch regressions early before they cause trouble and how to safely back them out for rework.

Profiling and measurement

We started this book by highlighting the importance of measurement and profiling by covering some simple techniques in Chapter 2, Measuring Performance Bottlenecks. We continued this theme throughout, and we'll end the book on it as well because it's impossible to overstate how important measuring and analyzing reliable evidence is.

Previously, we covered using Glimpse to provide insights into the running of your web application. We also demonstrated the Visual Studio diagnostics tools and the Application Insights Software Development Kit (SDK). There's another tool that's worth mentioning and that's the Prefix profiler, which you can get at prefix.io. Prefix is a free web-based ASP.NET profiler that supports ASP.NET Core. However, it doesn't support .NET Core yet (although this is planned), so you'll need to run ASP.NET Core on .NET Framework 4.6 for now. There's a live demo on their website (at demo.prefix.io) if you want to quickly check it out and it looks like the following:


    ASP.NET Core can run on top of either .NET Core or the existing stable framework. For a refresher on how everything fits together, refer back to Chapter 1, Why Performance Is a Feature, or for more details, refer to Chapter 6, Understanding Code Execution and Asynchronous Operations. You may also want to look at the PerfView performance analysis tool from Microsoft, which is used in the development of .NET Core. You can download PerfView from microsoft.com/download/details.aspx?id=28567 as a ZIP file that you can just extract and run. It is useful to analyze the memory of .NET applications among other things, and it looks like this when you launch it:


You can use PerfView for many debugging activities, for example, to take a snapshot of the heap or force GC runs. We don't have space for a detailed walkthrough here, but the included instructions are good, and there are blogs on MSDN with guides and many video tutorials on Channel 9 at channel9.msdn.com/Series/PerfView-Tutorial if you need more information. Sysinternals tools (technet.microsoft.com/sysinternals) can also be helpful, but as they do not focus much on .NET, they are less useful in this context.

While tools such as these are great, what would be even better is to build performance monitoring into your development workflow. Automate everything that you can; this makes performance checks transparent, routine, and run by default.


Manual processes are bad because you can skip steps and easily make errors. You wouldn't dream of developing software by e-mailing files around or editing code directly on a production server, so why not automate your performance tests too?

Change control processes exist to ensure consistency and reduce errors. This is why using a Source Control Management (SCM) system, such as Git or Team Foundation Server (TFS), is essential. It's also extremely useful to have a build server and perform CI or even fully automated deployments.

Source control allows multiple people to work on a file simultaneously and merge the changes later. It's like Word's track changes feature, but actually usable. We assume that we're preaching to the converted and you already use source control. If not, stop reading right now and go install an SCM system.

If the code deployed in production differs from what you have on your local workstation, then you have very little chance of success. This is one of the reasons why SQL Stored Procedures (SPs/sprocs) are difficult to work with, at least without rigorous version control. It's far too easy to modify an old version of an SP on a development DB, accidentally revert a bug fix, and end up with a regression. If you must use SPs, then you will need a versioning system, such as ReadyRoll (which Redgate has now acquired).

As this isn't a book on Continuous Delivery (CD), we will assume that you are already practicing CI and have a build server, such as JetBrains TeamCity, ThoughtWorks GoCD, CruiseControl.NET, or a cloud service, such as AppVeyor. Perhaps you're even automating your deployments using a tool, such as Octopus Deploy, and you have your own internal NuGet feeds, using software such as The Motley Fool's Klondike, or a cloud service, such as MyGet (which also supports npm, bower, and VSIX packages).

NuGet packages are a great way of managing internal projects. In new versions of Visual Studio, you can see the source code of packages and debug into them. This means no more huge solutions containing a ludicrous number of projects.

Bypassing processes and doing things manually will cause problems, even if you are following a script. If it can be automated, then it probably should be, and this includes testing.


Testing

Testing is essential to producing high-quality and well-performing software. The secret to productive testing is to make it easy, reliable, and routine. If testing is difficult, or tests regularly fail because of issues unrelated to the software (for example, environmental problems), then tests will not be performed or the results will be ignored. This will cause you to miss genuine problems and ship bugs that you could have easily avoided. There are many different varieties of testing, and you may be familiar with the more common cases used for functional verification. In this book, we will mainly focus on tests pertaining to performance. However, the advice here is applicable to many types of testing.

Automated testing

As previously mentioned, the key to improving almost everything is automation. Tests that are only run manually on developer workstations add very little value. Of course, it should be possible to run the tests on desktops, but this shouldn't be the official result because there's no guarantee that they will pass on a server (where correct functioning matters more). Although automation usually occurs on servers, it can be useful to automate tests that run on developer workstations too. One way of doing this in Visual Studio is to use a plugin, such as NCrunch. This runs your tests as you work, which can be very useful if you practice Test-Driven Development (TDD) and write your tests before your implementations. You can read more about NCrunch and see the pricing at ncrunch.net, or there's a similar open source project at continuoustests.com. One way of enforcing testing is to use gated check-ins in TFS, but this can be a little draconian, and if you use an SCM such as Git, then it's easier to work on branches and simply block merges until all the tests pass. You want to encourage developers to check in early and often because this makes merges easier. Therefore, it's a bad idea to have features in progress sitting on workstations for a long time (generally no longer than a day).

Continuous integration

CI systems automatically build and test all of your branches and feed this information back to your version control system. For example, using the GitHub API, you can block the merging of pull requests until the build server reports a successfully tested merge result.


Both Bitbucket and GitLab offer free CI systems, called pipelines, so you may not need any extra systems in addition to the one for source control because everything is in one place. GitLab also offers an integrated Docker container registry, and there is an open source version that you can install locally. .NET Core and Visual Studio support Docker well, and we'll cover this again later in the chapter. You can do something similar with Visual Studio Team Services for CI builds and unit testing. Visual Studio also has Git services built into it.

This process works well for unit testing because unit tests must be quick so that you get feedback early. Shortening the iteration cycle is a good way of increasing productivity, and you'll want the lag to be as small as possible. However, running tests on each build isn't suitable for all types of testing because not all tests can be quick. In this case, you'll need an additional strategy so as not to slow down your feedback loop.

There are many unit testing frameworks available for .NET, for example, NUnit, xUnit, and MSTest (Microsoft's unit test framework), along with multiple graphical ways of running tests locally, such as the Visual Studio Test Explorer and the ReSharper plugin. People have their favorites, but it doesn't really matter what you choose because most CI systems will support all of them.
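As a hedged illustration, a minimal xUnit test might look like the following; the PriceCalculator class is invented for this example, and the Trait attribute is one way to tag tests so that a CI server can filter which categories run on each build:

using Xunit;

public class PriceCalculator
{
    // Trivial class under test, defined here only so the example is self-contained
    public decimal Add(decimal a, decimal b) => a + b;
}

public class PriceCalculatorTests
{
    [Fact]
    [Trait("Category", "Unit")]
    public void Add_ReturnsCombinedTotal()
    {
        var calculator = new PriceCalculator();

        Assert.Equal(25m, calculator.Add(10m, 15m));
    }
}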

Slow testing

Some tests are slow, but even if each test is fast, they can easily add up to a lengthy run if you have a lot of them. This is especially true if they can't be parallelized and need to be run in sequence, so you should always aim to have each test stand on its own, without any dependencies on others. It's good practice to divide your tests into rings of importance so that you can at least run a subset of the most crucial ones on every CI build. However, if you have a large test suite or some tests that are unavoidably slow, then you may choose to run these only once a day (perhaps overnight) or every week (maybe over the weekend). Some testing is simply slow by nature, and performance testing can often fall into this category, for example, load testing or User Interface (UI) testing. We usually class these as integration tests, rather than unit tests, because they require your code to be deployed to an environment for testing and the tests can't simply exercise the binaries.


    To make use of such automated testing, you will need to have an automated deployment system in addition to your CI system. If you have enough confidence in your test system, then you can even have live deployments happen automatically. This works well if you also use feature switching to control the rollout of new features. We won't go into the implementation details of Continuous Integration or automated deployments in this book. However, we will cover feature switching, how to apply performance testing to CI processes, and what to do when you discover a regression.

Fixing performance regressions

If you discover a performance issue at the unit testing stage, then you can simply rework that feature, but it's more likely that these problems will surface in a later testing phase. This can make it more challenging to remedy the problem because the work may already have been built upon and have other commits on top of it. The correct course of action is often to back out regressions immediately, or at least as soon as possible upon discovery. Delays will only make the issue harder to fix later, which is why it's important to get fast feedback and highlight problems quickly. It's important to be disciplined and always remove regressions, even though it may be painful. If you let the occasional minor regression in, then you can easily become sloppy and let more serious ones in over time because of the precedent it sets.

Load testing

Load testing is the process of discovering how many concurrent users your web app can support. You generally perform it on a test environment with a tool that gradually ramps up a simulated load, for example, JMeter (jmeter.apache.org). Perhaps you'd prefer a JMeter-compatible cloud service, such as BlazeMeter, or an alternative, such as Loader.io, if your test systems are web-facing. Load testing can take a significant amount of time, depending on the scale of your service, because it can be configured to continue until the test environment gets unacceptably slow for users or falls over and becomes unresponsive. You need to be extremely careful with load testing, and not only to avoid accidentally testing your live systems to destruction while they're in use.


You also need to be wary of getting false results, which may mislead you into concluding that your system can handle more load than it actually can. Balancing these two competing concerns of safety and realism can be difficult. It's important to get realistic results, but you need to balance this against not stressing your production environment and impacting the experience of real users.

Realism

Keeping it real is an important principle of performance testing. If you don't use a realistic environment and workload, then your results may be worse than having no data because they could mislead you into bad decisions. When you have no information, you at least know that you're in the dark and just guessing. We'll cover workloads and feature switching shortly, including an example of how to implement your own simple version from scratch. First, let's look at how to make your test environments representative of production.

Realistic environments

Using a test environment that is as close to production (or as live-like) as possible is a good step toward ensuring reliable results. You can try using a smaller set of servers and then scale your results up to get an estimate of live performance. However, this assumes that you have an intimate knowledge of how your application scales and which hardware constraints will be the bottlenecks. A better option is to use your live environment, or rather what will become your production stack. You first create a staging environment that is identical to live, then you deploy your code to it, and then you run your full test suite, including a comprehensive performance test, ensuring that it behaves correctly. Once you are happy, you simply swap staging and production, perhaps using DNS or Azure staging slots.


    The following diagram shows how you first release to staging and go live simply by making staging become production:

Your old live environment now either becomes your test environment, or, if you use immutable cloud instances, you can simply terminate it and spin up a new staging system. This concept is known as blue-green deployment, but unfortunately specific implementation instructions are beyond the scope of this book. You don't necessarily have to move all users across at once in a big bang; you can move a few over first to check that everything is correct. We'll cover this shortly in the section on feature switching.


Realistic workloads

Another important part of performing realistic testing is to use real data and real workloads. Synthetic data often won't exercise a system fully or find exotic edge cases. You can use fuzzing to generate random data, but if you have a production system, then you can simply use its data and parse your logs to generate a realistic workload to replay. Obviously, don't replay actions onto a live system that could modify user data, and be wary of data privacy concerns or any action that could generate an external event, for example, sending a mass e-mail or charging a credit card. You can use a dedicated test system, but you still need to be careful to stub out any external APIs that don't have a test version. Another approach is to use dummy users for your testing, if you don't mind your users discovering the fakes and possibly interacting with them. One case of this approach is Netflix's cult Example Show, which is a homemade video of an employee running around their office grounds with a laptop.

Feature switching

An alternative to fake users or test environments is to run performance tests on your real live environment, using genuine users as they interact with your web application. You can use feature switching to gradually roll out a new feature to a small subset of your user base and monitor the system for excessive load. If everything seems normal, then you can continue the rollout; otherwise, you can roll back. You may hear similar ideas referred to by other names, such as Feature Flags, Feature Toggles, Canary Releases, or Branch by Abstraction. Some of these may have subtly different meanings, but the general guiding principle of gradually releasing a change is much the same. The following diagram shows how you could progressively migrate users to a new feature by deploying it but not initially enabling it. Your first users could be members of staff so that any issues are detected early:


Once all users are consuming the new feature, you can safely remove the old feature (and the feature switch itself). It's a good idea to regularly tidy up like this to avoid clutter and make later work easier. If you discover that your system experiences excessive load as you slowly increase the percentage of users on the new feature, then you can halt the rollout and avoid your servers becoming overloaded and unresponsive. You then have time to investigate and either back out the change or increase the available resources. One variation on this theme is when a new feature requires a data migration and this migration is computationally or network-intensive, for example, migrating user-submitted videos to a new home and transcoding them in the process. In this case, the excessive load that you are monitoring will only be transient, and you don't need to back out the feature. You only need to ensure that the rate of migration is low enough not to excessively tax your infrastructure.


Although the new feature is usually branched to in code, you can instead perform switching at the network level if you use blue-green deployment. This is known as a canary release and can be done at the DNS or load balancer level. However, specific implementation details are again beyond the remit of this book. You can find many open source feature switching libraries online, or you could write your own, which we will show you how to do later. A couple of .NET feature-switching libraries are github.com/mexx/FeatureSwitcher and github.com/Jason-roberts/FeatureToggle. Unfortunately, neither of these works on .NET Core yet, although support is planned for FeatureToggle. There are also paid cloud services available, which offer easy management interfaces, such as LaunchDarkly. Library and framework support for .NET Core and ASP.NET Core change rapidly, so check ANCLAFS.com for the latest information.

To illustrate feature switching, let's build our own extremely simple implementation for an ASP.NET Core web application. To start with, we take the default existing homepage view (Index.cshtml) and make a copy (IndexOld.cshtml). We then make our changes to Index.cshtml, but these aren't important for the purposes of this demo. In HomeController.cs, we change the logic to return the new view a relatively small percentage of the time, and the old one otherwise. The original code was simply the following:

public IActionResult Index()
{
    return View();
}

We change this action to return the new view in a quarter of cases, like the following:

public IActionResult Index()
{
    var rand = new Random();
    // Next(100) returns a value from 0 to 99, so this branch is taken roughly 25% of the time
    if (rand.Next(100) < 25)
    {
        return View();
    }
    return View("IndexOld");
}


    This code picks a random number out of a hundred, and if it is less than 25, then it loads the new view. Clearly, you wouldn't hardcode values like this, and you would probably use a database to store configuration and control it with a web admin page. If you load the page in a browser, then three out of four times you should get the original page, which looks like the following:

However, on roughly one in every four page loads, you will get the new page, which looks like the following:

    We removed the carousel and cut the number of image loads in half. You can see the difference in the browser developer tools, for example, the Firefox performance analysis for the original looks like the following:


Whereas, for the new version, it looks like this:


    You can see that the number of images decreased, which has reduced the page size and the total number of requests required. These measurements were taken with the hosting environment set to Production. If you remain with the default of Development, then your results will differ. Refer to the following documentation page for how to change this: docs.asp.net/en/latest/fundamentals/environments.html. Once the new view has been progressively rolled out to all users (with no performance problems), then the action method code can be reverted to the original state and the old view deleted. Obviously, this is a trivial example. Hopefully, you can now see how feature switching can be performed. Using feature switching to rollout in this way works well for performance testing, but you need to be confident that your new code is functionally correct or at least safe to deploy. However, in certain circumstances you can also test functionality by extending this approach and performing experiments.
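As mentioned, you wouldn't hardcode the rollout percentage in a real application. The following is only a sketch of a slightly more realistic approach, assuming a Features section in appsettings.json with a NewHomepagePercentage value and the standard ASP.NET Core options pattern; the class and property names are invented for this example:

using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Options;

public class FeatureOptions
{
    // Percentage of requests (0-100) that should see the new view
    public int NewHomepagePercentage { get; set; }
}

public class HomeController : Controller
{
    // Note: a single static Random is a simplification and is not thread-safe under heavy load
    private static readonly Random Rand = new Random();
    private readonly FeatureOptions _features;

    public HomeController(IOptions<FeatureOptions> features)
    {
        _features = features.Value;
    }

    public IActionResult Index()
    {
        if (Rand.Next(100) < _features.NewHomepagePercentage)
        {
            return View();
        }
        return View("IndexOld");
    }
}

You would register the options in Startup with something like services.Configure<FeatureOptions>(Configuration.GetSection("Features")), so the rollout percentage can be changed in configuration (or, better, a database-backed provider) without redeploying code.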

Experimenting for science

If you take the feature switching rollout to its logical conclusion, then you can switch on a new refactored version for a small percentage of users, but not actually expose the output. You will run the existing version in parallel as a control and show this to the user. However, you will collect the results of both versions, including performance data. You can then compare them to ensure that the new (hopefully higher performance) version is correct and consistent with the existing behavior. GitHub have an open source Ruby library, called Scientist, which they used to successfully refactor their permissions and merging code. In the process, they discovered existing legacy bugs, found missing features, optimized database queries, and swapped a search system. Scientist not only demonstrates the correctness of new code but also its performance relative to the old implementation. An important concern with Scientist (or any method using this experimentation approach) is to not change data. All operations should be read-only and have no side effects; otherwise, you will perform the modification more than once, which could have undesirable consequences.


There's a .NET port of Scientist by GitHub employee Phil Haack, called Scientist.NET, and you can get it at github.com/Haacked/Scientist.net. You could also install it via NuGet. However, it's currently a prerelease, as it's still a work in progress, but it should support .NET Core. Check ANCLAFS.com for the latest information. This idea of experimenting on users is similar to the marketing concept of A/B testing, which is used to discover conversion effectiveness. However, with Scientist, you don't typically show the users different outputs or even intend the output to change; you just record the new output for the subset of users that you have it enabled for.
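To give a flavor of the idea, the following sketch shows roughly how an experiment can be expressed with Scientist.NET. Because the library is still prerelease, the namespace and exact signatures may differ in the version you use, and the permission-check methods here are placeholders invented for the example:

using GitHub;

public class PermissionsChecker
{
    public bool UserHasAccess(int userId)
    {
        // Returns the result of the old code path, while also running and timing the new one
        return Scientist.Science<bool>("widget-permissions", experiment =>
        {
            experiment.Use(() => OldPermissionCheck(userId)); // trusted, existing implementation
            experiment.Try(() => NewPermissionCheck(userId)); // candidate refactoring; result only recorded
        });
    }

    // Placeholder implementations purely for illustration
    private bool OldPermissionCheck(int userId) => userId > 0;
    private bool NewPermissionCheck(int userId) => userId > 0;
}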

A/B testing

A/B testing is similar to feature switching, but we usually use it to test the effectiveness of different web page designs and how they affect the conversion funnel analytics. We normally use it for digital marketing rather than for software development because it's less about what is correct and more about what customers prefer. However, the underlying technical principles are comparable to feature switching and experimenting. In a simple example, you serve half of your visitors the old version of a page and the other half a new version designed to increase engagement. You then record how many visitors click through or perform some action on each variant. If the new version performs better, then you keep it and roll it out to everyone. However, if it converts worse, you roll back to the old version and try again. You can see that this is very similar to how feature switching works, and you can use the same tools to do both. The only difference is what you are measuring: user analytics or the health of your infrastructure. A/B testing is not normally used for backend features and is simply used to assess UI alternatives. However, it is different to functionally testing your web interface, so let's cover the basics of UI testing now.
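Before we do, here is a hedged sketch of one way to assign visitors to A/B variants deterministically, so that the same user always sees the same version; the 50/50 split and the use of a user ID are just assumptions for the example:

using System;

public static class AbTest
{
    // Deterministically bucket a user so they always see the same variant.
    // Guid.GetHashCode is computed from the value itself (unlike randomized string hashing),
    // which is good enough for a sketch like this.
    public static string GetHomepageVariant(Guid userId)
    {
        var bucket = (uint)userId.GetHashCode() % 100;
        return bucket < 50 ? "B" : "A";
    }
}

You would then record an analytics event tagged with the variant name whenever a user converts, and compare the rates for A and B before deciding which version to keep.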

User interface testing

There's one area of an application that is traditionally difficult to test, and that's the UI. This is particularly the case for GUIs, as used in web applications. One reason for this is that users typically have a lot of flexibility in how they display their interface, and this is especially true for web browsers. A naïve pixel-based testing approach is going to be extremely fragile.


    You need to design your application to be easily testable, and the UI is no exception. While it is possible to test the UI of a poorly-designed legacy application and this may even be the easiest testing approach, if you consider UI testing up front, then life will be a lot easier. For example, including sensible IDs on your HTML Document Object Model (DOM) elements makes testing much more straightforward and less fragile. Checking your application from a web browser perspective can be a useful additional step on top of automated unit and integration testing. You can use it not only to test for functional correctness and regressions but also to measure client side performance, which is increasingly important. There are many UI testing tools available, most of which you can integrate into your CI pipeline to automate testing.

Web UI testing tools

One of the most popular web-testing tools is Selenium, which allows you to easily write tests and automate web browsers using WebDriver. Selenium is useful for many other tasks apart from testing, and you can read more about it at docs.seleniumhq.org. WebDriver is a protocol to remotely control web browsers, and you can read about it at w3c.github.io/webdriver/webdriver-spec.html.
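As a hedged example of driving a real browser from C#, the following uses the Selenium WebDriver bindings (for example, the Selenium.WebDriver and Selenium.WebDriver.ChromeDriver NuGet packages); the URL and element ID are invented for the illustration:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Xunit;

public class HomePageTests
{
    [Fact]
    public void HomePage_ShowsWelcomeMessage()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/");

            // Sensible IDs on your DOM elements make tests like this far less fragile
            var message = driver.FindElement(By.Id("welcome-message"));

            Assert.Contains("Welcome", message.Text);
        }
    }
}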

Selenium uses real browsers, the same versions your users will use to access your web application. This makes it excellent for getting representative results, but it can cause issues if it runs from the command line in an unattended fashion. For example, you may find your test server's memory full of dead browser processes that have timed out. You may find it easier to use a dedicated headless test browser, which, while not exactly the same as what your users will see, is more suitable for automation. Of course, the best approach is to use a combination of both, perhaps running headless tests first and then running the same tests on real browsers with WebDriver. One of the most well-known headless test browsers is PhantomJS. This is based on the WebKit engine, so it should give you similar results to Chrome and Safari. PhantomJS is useful for many things apart from testing, such as capturing screenshots, and you can drive it with many different testing frameworks. As the name suggests, you can control PhantomJS with JavaScript, and you can read more about it at phantomjs.org.


WebKit is an open source engine for web browsers, which was originally part of the KDE Linux desktop environment. It is mainly used in Apple's Safari browser, but a fork called Blink is used in Google Chrome, Chromium, and Opera. You can read more at webkit.org.

Other automatable testing browsers, based on different engines, are available, but they have some limitations. For example, SlimerJS (slimerjs.org) is based on the Gecko engine used by Firefox, but it is not fully headless. You probably want to use a higher-level testing utility rather than scripting browser engines directly. One such utility that provides many useful abstractions is CasperJS (casperjs.org), which supports running on both PhantomJS and SlimerJS. Another tool is Capybara, which allows you to easily simulate user interactions in Ruby. It supports Selenium, WebKit, Rack, and PhantomJS (via Poltergeist), although it's more suitable for Rails apps. You can read more at jnicklas.github.io/capybara.

There is also TrifleJS (triflejs.org), which uses the .NET WebBrowser class (the Internet Explorer Trident engine), but this is a work in progress. Additionally, there's Watir (watir.com), which is a set of Ruby libraries targeting Internet Explorer and WebDriver. However, neither has been updated in a while, and IE has changed a lot recently. Microsoft Edge (codenamed Spartan) is the new version of IE, and the Trident engine was forked to EdgeHTML. The JavaScript engine (Chakra) was open sourced as ChakraCore (github.com/Microsoft/ChakraCore).

It shouldn't matter too much which browser engine you use, and PhantomJS will work fine as a first pass for automated tests. You can always test with real browsers after using a headless one, perhaps with Selenium or with PhantomJS using WebDriver. When we refer to browser engines (WebKit/Blink, Gecko, or Trident/EdgeHTML), we generally mean only the rendering and layout engine, not the JavaScript engine (SFX/Nitro/FTL/B3, V8, SpiderMonkey, or Chakra/ChakraCore).

You'll probably still want to use a utility such as CasperJS to make writing tests easier, and you'll likely need a test framework, such as Jasmine (jasmine.github.io) or QUnit (qunitjs.com), too. You can also use a test runner that supports both Jasmine and QUnit, such as Chutzpah (mmanela.github.io/chutzpah).


    You can integrate your automated tests with many different CI systems, for example, Jenkins or JetBrains TeamCity. If you prefer a cloud-hosted option, then there's Travis CI (travis-ci.org) and AppVeyor (appveyor.com), which is also suitable to build .NET apps. You may prefer to run your integration and UI tests from your deployment system, for example, to verify a successful deployment in Octopus Deploy. There are also dedicated, cloud-based web-application UI testing services available, such as BrowserStack (browserstack.com).

Automating UI performance tests

Automated UI tests are clearly great for checking functional regressions, but they are also useful for testing performance. You have programmatic access to the same information provided by the network inspector in the browser developer tools. You can integrate the YSlow (yslow.org) performance analyzer with PhantomJS, enabling your CI system to check for common web-performance mistakes on every commit. YSlow came out of Yahoo!, and it provides rules used to identify bad practices that can slow down web applications for users. It's a similar idea to Google's PageSpeed Insights service (which you can automate via its API). However, YSlow is pretty old, and things have moved on in web development recently, for example, HTTP/2. A modern alternative is "the coach" from sitespeed.io, and you can read more at github.com/sitespeedio/coach. You should check out their other open source tools too, such as the dashboard at dashboard.sitespeed.io, which uses Graphite and Grafana. You can also export the network results (in the industry-standard HAR format) and analyze them however you like, for example, visualizing them graphically in waterfall format, as you might do manually with your browser developer tools. The HTTP Archive (HAR) format is a standard way of representing the content of monitored network data so that you can export it to other software. You can copy or save as HAR in some browser developer tools by right-clicking a network request.
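One simple, hedged approach to capturing client-side timings from an automated test is to query the browser's Navigation Timing API through WebDriver's JavaScript executor, as sketched below; the URL and the two-second budget are arbitrary values chosen for this example:

using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using Xunit;

public class HomePagePerformanceTests
{
    [Fact]
    public void HomePage_LoadsWithinBudget()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("http://localhost:5000/");

            // Read the W3C Navigation Timing values the browser recorded for the page load
            // (a real test would first wait for the load event to have completed)
            var loadMilliseconds = (long)((IJavaScriptExecutor)driver).ExecuteScript(
                "return performance.timing.loadEventEnd - performance.timing.navigationStart;");

            Assert.True(loadMilliseconds < 2000,
                $"Page load took {loadMilliseconds} ms, which exceeds the 2000 ms budget");
        }
    }
}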


Staying alert

Whenever you perform testing, particularly UI or performance testing, you will get noisy results. Reliability is not perfect, and there will always be failures that are not due to bugs in the code. You shouldn't let these false positives cause you to ignore failing tests, and although the easiest course of action may be disabling them, the correct thing to do is to make them more reliable. The scientifically minded know that there is no such thing as a perfect filter in binary classification, and always look at the precision and recall of a system. Knowing the rate of false positives and negatives is important to get a good idea of the accuracy and the tradeoffs involved. To avoid testing fatigue, it can be helpful to engage developers and instill a responsibility to fix failing tests. You don't want everyone thinking that it's somebody else's problem. It should be pretty easy to see who broke a test from the commit in version control, and it's then their job to fix the broken test. You can have a bit of fun with this and create something a bit more interesting than a wall-mounted dashboard, although having information radiators is useful if you don't get desensitized to them. There's plenty of cheap Internet of Things (IoT) hardware available today, which allows you to turn some interesting things into build or test failure alarms, for example, an Arduino-controlled traffic light, an ESP8266-operated foam-missile turret, or a Raspberry Pi-powered animatronic figure.

DevOps

When using automation and techniques such as feature switching, it is essential to have a good view of your environments so that you know the utilization of all the hardware. Good tooling is important to perform this monitoring, and you want to be able to easily see the vital statistics of every server. This will consist of at least the CPU, memory, and disk space consumption, but it may include more, and you will want alarms set up to alert you if any of these stray outside their allowed bands. The practice of DevOps is the culmination of all of the automation we covered previously, with development, operations, and quality-assurance testing teams all collaborating. The only missing pieces left now are to provision and configure infrastructure and then monitor it while in use. Although DevOps is a culture, there is plenty of tooling that can help.


DevOps tooling

One of the primary themes of DevOps tooling is defining infrastructure as code. The idea is that you shouldn't manually perform a task, such as setting up a server, when you can create software to do it for you. You can then reuse these provisioning scripts, which will not only save you time but will also ensure that all of the machines are consistent and free of mistakes or missed steps.

Provisioning

There are many systems available to commission and configure new machines. Some popular configuration-management automation tools are Ansible (ansible.com), Chef (chef.io), and Puppet (puppet.com). Not all of these tools work well on Windows servers, partly because Linux is easier to automate. However, you can run ASP.NET Core on Linux and still develop on Windows using Visual Studio, while testing in a VM. Developing for a VM is a great idea because it solves the problems of setting up environments and the issues where something "works on my machine" but not in production. Vagrant (vagrantup.com) is a great command-line tool to manage developer VMs. It allows you to easily create, spin up, and share developer environments. The successor to Vagrant, Otto (ottoproject.io), takes this a step further and abstracts deployment too, so that you can push to multiple cloud providers without worrying about the intricacies of CloudFormation, OpsWorks, or anything else. If you create your infrastructure as code, then your scripts can be versioned and tested, just like your application code. We'll stop before we get too far off topic, but the point is that if you have reliable environments, which you can easily verify, instantiate, and perform testing on, then CI is a lot easier.

Monitoring

Monitoring is essential, especially for web applications, and there are many tools available to help with it. A popular open source infrastructure-monitoring system is Nagios (nagios.org). Another, more modern, open source alerting and metrics tool is Prometheus (prometheus.io).


    If you use a cloud platform, then there will be monitoring built in, for example, AWS CloudWatch or Azure Diagnostics. There are also cloud services to directly monitor your website, such as Pingdom (pingdom.com), UptimeRobot (uptimerobot.com), Datadog (datadoghq.com), and PagerDuty (pagerduty.com). You probably already have a system in place to measure availability, but you can also use the same systems to monitor performance. This is not only helpful to ensure a responsive user experience, but it can also provide early warning signs that a failure is imminent. If you are proactive and take preventative action, then you can save yourself a lot of trouble reactively fighting fires. This isn't a book on operations, but it helps to consider application support requirements at design time. Development, testing, and operations aren't competing disciplines, and you will succeed more often if you work as one team rather than simply throwing an application over the fence and saying it "worked in test, ops problem now."
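As a minimal sketch (the /healthcheck path is just a convention chosen for this example), you can expose a simple endpoint with ASP.NET Core middleware that uptime monitors such as Pingdom or UptimeRobot can poll:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class HealthCheckExtensions
{
    public static IApplicationBuilder UseSimpleHealthCheck(this IApplicationBuilder app)
    {
        // Respond to requests for /healthcheck with a plain 200 OK so external monitors can poll it
        return app.Map("/healthcheck", branch =>
        {
            branch.Run(async context =>
            {
                context.Response.ContentType = "text/plain";
                await context.Response.WriteAsync("OK");
            });
        });
    }
}

You could extend a check like this to verify database connectivity or return basic metrics, but be careful not to expose sensitive operational details publicly.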

Hosting

It's well worth considering the implications of various hosting options because if developers can't easily access the environments they need, then this will reduce their agility. They may have to work around an availability problem and end up using insufficient or unsuitable hardware, which will hamper progress and cause future maintenance problems, or their work will simply be blocked and delivery set back. Unless you are a very large organization, hosting in-house is generally a bad idea for reliability, flexibility, and cost reasons. Unless you have some very sensitive data, you should probably use a data center. You can co-locate servers in a data center, but then you need staff to be on call to fix hardware problems. Or you can rent a physical server and run your application on "bare metal", but you may still need remote hands to perform resets or other tasks on the machine. The most flexible situation is to rent self-service virtual machines, commonly known as cloud hosting. There are many hosting companies that offer this, but the big players are Amazon, Google, and Microsoft.


Microsoft's Azure is the obvious choice for a .NET shop, and it has improved immensely since launch compared to its original offering. Its Platform as a Service (PaaS) offering for hosting .NET applications is the most polished and the easiest to get running on quickly. It also offers lots of extra services that go beyond simple VMs. However, .NET Core and ASP.NET Core are new types of framework, which are not aimed solely at C# developers who would usually pick the default option offered by Microsoft. The targeted market includes developers who may be used to choosing alternative platforms and open source frameworks. Therefore, it makes sense to cover other options, which people new to .NET may be more familiar with.

The difference between PaaS and Infrastructure as a Service (IaaS) is that PaaS is higher level and provides a service to run applications, not computers. IaaS simply provides you with a VM, whereas, with PaaS, this is abstracted away, and you don't need to worry about setting up and updating an instance.

Amazon Web Services (AWS) is the most established cloud host, and it started by only offering IaaS, although it now offers PaaS options too. For example, Elastic Beanstalk supports .NET. Even though Windows (and SQL Server) support has improved, it is still not first class, and Linux is clearly their main platform. However, now that .NET Core and SQL Server 2016 run on Linux, this may be less of a problem. You will pay a premium for Windows server instances, but you're also charged more for some enterprise Linux distributions (Red Hat and SUSE). However, you can run other Linux distributions, such as Ubuntu or CentOS, without paying a premium.

Google Cloud Platform used to consist of App Engine (GAE), which was quite a restrictive PaaS, but it is getting more flexible. They now offer a generic IaaS called Compute Engine (GCE), and there's a new flexible version of GAE, which is more like GCE. These new offerings are useful, as you couldn't traditionally run .NET on GAE, and they also provide other products, such as pre-emptible VMs and a cloud CDN (with HTTPS at no extra cost).

Azure may still be the best choice for you, and it integrates well with other Microsoft products, such as Visual Studio. However, it is worth having healthy competition because this keeps everyone innovating. It's definitely a good idea to frequently look at the pricing of all the options (which changes regularly) because you can save a lot of money depending on the workload that you run. You can avoid vendor lock-in by avoiding custom services. If you are eligible for Microsoft's BizSpark program, then you can get three years of Azure credits (suitable to host a small application). You also get an MSDN subscription for free software licenses.


Whatever hosting provider you choose, it is sensible to avoid vendor lock-in and use services that are generic, which you can easily swap for an alternative, for example, using hosted open source software, such as PostgreSQL, Redis, or RabbitMQ, rather than an equivalent custom cloud provider product. You can also take a resilient multicloud approach to protect yourself from an outage of a single provider. Docker is a great technology for this purpose because many different cloud services support it. For example, you can run the same container on Azure Container Service, Docker Cloud, AWS EC2 Container Service, and Google's Kubernetes-based Container Engine. Docker also runs on Windows (using Hyper-V), and in the new version of Visual Studio you can deploy to and debug into a container. This can run ASP.NET Core on Linux, and when you are ready to deploy, you can just push to production and have confidence that it will work as it did on your machine. You can read more about this at docker.com/Microsoft, and there are some interesting videos on the Docker blog. When choosing a cloud (or even a region), it's important to consider not only the monetary cost. You should also factor in environmental concerns, such as the main fuel used for the power supply. For example, some regions can be cheaper, but this may be because they use dirty coal power, which contributes to climate change and threatens our future security.

Summary

In this chapter, you saw how you might integrate automated testing into a CI system in order to monitor for performance regressions. You also learned some strategies to roll out changes and ensure that tests accurately reflect real life. We also briefly covered some options for DevOps practices and cloud-hosting providers, which together make continuous performance testing much easier. In the next chapter, we'll wrap up everything that we covered throughout this book and suggest some areas for further investigation. We'll reinforce what we've learned so far, give you some interesting ideas to think about, contemplate possible futures for .NET, and consider the exciting direction the platform is taking.


    10

The Way Ahead

This chapter sums up what you learned by reading this book. It refreshes the main tenets of performance and it reminds you that you should always remain pragmatic. We'll recap why you shouldn't optimize just for optimization's sake and why you should always measure the problems and results. This chapter also introduces more advanced and exotic techniques that you may wish to consider learning about if you need more speed or are a serious performance enthusiast. Topics covered in this chapter include the following:

A summary of previously-covered topics
Platform invoke and native code
Alternative architectures, such as ARM
Advanced hardware (GPUs, FPGAs, ASICs, SSDs, and RAM SANs)
Machine learning and AI
Big data and MapReduce
The Orleans virtual actor model
Custom transport layers
Advanced hashing functions
Library and framework support
The future of .NET Core

We'll reinforce how to assess and solve performance issues by refreshing your memory of the lessons in the previous chapters. You'll also gain an awareness of other advanced technology that's available to assist you with delivering high performance, which you may wish to investigate further. Finally, we'll highlight what libraries and frameworks support .NET Core and ASP.NET Core, and try to hypothesize possible future directions for these exciting new platforms.


Reviewing what we learned

Let's briefly recap what we covered earlier in this book to serve as an aide-mémoire. In Chapter 1, Why Performance Is a Feature, we discussed the basic premise of this book and showed you why you need to care about the performance of your software. Then, in Chapter 2, Measuring Performance Bottlenecks, we showed you that the only way you can solve performance problems is to carefully measure your application. In Chapter 3, Fixing Common Performance Problems, we looked at some of the most frequent performance mistakes and how to fix them. After this, we went a little deeper in Chapter 4, Addressing Network Performance, and dug into the networking layer that underpins all web applications. Then in Chapter 5, Optimizing I/O Performance, we focused on input/output and how this can negatively affect performance. In Chapter 6, Understanding Code Execution and Asynchronous Operations, we jumped into the intricacies of C# code and looked at how its execution can alter performance. Then in Chapter 7, Learning Caching and Message Queuing, we initially looked at caching, which is widely regarded to be quite hard. Then, we investigated message queuing as a way to build a distributed and reliable system. In Chapter 8, The Downsides of Performance Enhancing Tools, we concentrated on the negatives of the techniques that we previously covered, as nothing comes for free. Then in Chapter 9, Monitoring Performance Regressions, we looked at measuring performance again but, in this case, from an automation and Continuous Integration (CI) perspective.

Further reading

If you've read this far, then you will probably want some pointers for other things to research and read up on. For the rest of this chapter, we'll highlight some interesting topics that you may want to look into further, but we couldn't cover in more detail in this book.

Going native

One of the problems with the old ASP.NET is that it was really slow, which is why one of the main guiding principles of ASP.NET Core has been performance. Impressive progress has already been made, but there are plenty more opportunities for further enhancements. One of the most promising areas is the native tool chain, which we briefly mentioned in Chapter 6, Understanding Code Execution and Asynchronous Operations. It's still in its early days, but it looks like it could be very significant.

Previously, if you wanted to call unmanaged native code from managed .NET code, you needed to use Platform Invoke (PInvoke), but this had performance overheads and safety concerns. Even if your native code was faster, the overheads often meant that it was not worth bothering with. The native tool chain should give native levels of performance, but with the safety and convenience of a managed runtime. Ahead-of-time compilation is fascinating and very technical, but it can offer a performance boost along with simplified deployments if we know the target architecture. It's also possible to optimize for different processors that may offer special performance features and instructions, for example, targeting low-energy ARM chips instead of the usual Intel-style processors.
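To recap what P/Invoke looks like (and why a native tool chain that avoids this interop boundary is attractive), here is a hedged example that calls the C library's getpid function on Linux or macOS; on Windows you would target a different native library:

using System;
using System.Runtime.InteropServices;

public static class NativeMethods
{
    // Declares a managed entry point for the unmanaged getpid function in libc.
    // Every call crosses the managed/native boundary, which carries a small overhead.
    [DllImport("libc")]
    private static extern int getpid();

    public static void Main()
    {
        Console.WriteLine($"Running as process {getpid()}");
    }
}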

Processor architecture

Typically, when writing desktop or server software, you will target an Intel-based architecture, such as x86 or x64. However, ARM-based chips are gaining popularity, and they can offer fantastic power efficiency. If software is specially optimized for them, then they can also offer excellent performance. For example, the Scratch graphical programming language, used to teach computing, has been optimized for the Raspberry Pi 3, and it now runs roughly twice as fast as it does on an Intel Core i5. Other software has also been optimized for the ARM processor, for example, the Kodi open source media player. ARM Holdings is simply an intellectual property company, and they don't make any processors themselves. Other companies, such as Apple and Broadcom, license the designs or architecture and fabricate their System on a Chip (SoC) products. This means that there are many different chips available, which run multiple different versions of the ARM architecture and instruction set. This fragmentation can make it harder to support unless you pick a specific platform. Windows 10 IoT Core runs on the Raspberry Pi (versions 2 and 3), and it can now be set up using the standard New Out Of the Box Software (NOOBS) installer. Windows 10 IoT Core is not a full desktop environment for running normal applications, but it does allow you to build hardware projects and program them with C# and .NET. However, for web applications, you would probably run .NET Core on Linux, such as Raspbian (based on Debian).


Hardware is hard

We previously mentioned additional hardware that can be used for computation in Chapter 6, Understanding Code Execution and Asynchronous Operations, including Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), and Application Specific Integrated Circuits (ASICs). Not only can these devices be used for specific processing tasks, but their storage can be used as well. For example, you can borrow the RAM from a GPU if main memory is exhausted. However, this technique is not required as much as it used to be when memory was more limited. You may have heard of RAM SANs, which were SANs using standard RAM for permanent storage (with a battery backup). However, these have become less relevant as SSDs (based on flash memory) have improved in speed and increased in capacity to the point of replacing mechanical drives for many common tasks. You can still purchase high-performance SANs, but they will probably be based on flash memory rather than RAM. If you use RAM as your main storage (for example, with Redis), then it is important to use error-correcting code (ECC) memory. ECC RAM is more expensive, but it is better suited for server use. However, some cloud instances don't use it, or it's hard to find out whether it's even offered because it isn't listed in the specification. One application of custom computation hardware is Machine Learning (ML), specifically the deep learning branch of ML, using multilevel neural networks. This technology has seen impressive advances in recent years, and this has led to products such as self-driving vehicles. ML applications can make very good use of non-CPU processing, particularly GPUs, and NVIDIA provides many tools and libraries to help with this. Google also recently announced that they built a custom ASIC, called a Tensor Processing Unit (TPU), to accelerate their TensorFlow machine learning library and cloud services. You can read more at tensorflow.org and cloud.google.com/ml.

Machine learning

You don't only have to use Artificial Intelligence (AI) to replace drivers and other jobs, such as call center staff or even doctors. You can use some basic ML in your own web applications to provide product suggestions relevant to your customers or analyze marketing effectiveness, just like Amazon or Netflix do. You don't even need to build your own ML systems, as you can use cloud services, such as Azure ML. This allows you to use a graphical drag-and-drop interface to build your ML systems, although you can also use Python and R. You still need to know a little bit about data science, such as the elementary principles of binary classification and training data, but even so, it significantly reduces the barrier to entry. Yet, if you want to fully explore ML and big data possibilities, then you probably need a dedicated data scientist. You can try out Azure ML at studio.azureml.net and you don't even need to register. The following screenshot shows an example of what it looks like:

Big data and MapReduce

Big data is probably an overused term these days, and what is sometimes described as big is usually more like medium data. Big data is when you have so much information that it is difficult to process, or even store, on a single machine. Traditional approaches often break down with big data, as they are not adequate for the huge, automatically acquired datasets that are common today. For example, the amount of data constantly collected by IoT sensors or by our interactions with online services can be vast.


One caveat of big data and ML is that, although they are good at finding correlations in large sets of data points, you cannot use them to find causation. You also need to be mindful of data privacy concerns and be very careful not to judge someone before they have acted, based only on a predicted predisposition. Anonymizing data is incredibly difficult and is not nearly as simple as removing personal contact details. There have been many cases of large "anonymized" datasets being released where individuals were later easily identified from the records.

One technology that is useful for analyzing big data is MapReduce, which is a way of simplifying an operation so that it is suitable to run in parallel on a distributed infrastructure. A popular implementation of MapReduce is Apache Hadoop, and you can use this in Azure with HDInsight, which also supports related tools, including Apache Spark and Apache Storm. Other options to handle large datasets include Google's Cloud Bigtable or BigQuery. You can see the available options for Azure HDInsight in the portal, as shown in the following image. Some features are still in preview or have restrictions, but they may have moved to general availability by the time you read this.

    You can see from this image that Spark is in preview and is currently available only on Linux. Hadoop is more established and is also available on Windows, as shown in the following screenshot:


    The next image shows that Storm is also available but not on the premium preview (including Microsoft R Server):


Orleans

Another interesting project is an open source framework from Microsoft called Orleans, which is a distributed virtual actor model that was used to power the cloud services of the Halo Xbox games. What this means is that if you build your system by separating your logic into individual actors, it can easily scale based on demand. In Orleans, actors are known as grains, and you can write them in C# by inheriting from an interface. These are then executed by an Orleans server, called a silo. Grains can be persisted to storage, such as SQL or Azure Tables, to save their state and to reactivate later. Orleans can also make use of the Bond serializer for greater efficiency. Unfortunately, Orleans does not currently support .NET Core, but the porting work is in progress. Orleans allows you to write simple and scalable systems with low latency, and you can read more at dotnet.github.io/orleans.
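To give a feel for the programming model, the following is a hedged sketch of a grain; the interface and class names are invented for the example, and the exact API may differ between Orleans releases:

using System.Threading.Tasks;
using Orleans;

// A grain is addressed by its key; here, one grain instance could exist per user ID
public interface IGreeterGrain : IGrainWithIntegerKey
{
    Task<string> SayHello(string name);
}

public class GreeterGrain : Grain, IGreeterGrain
{
    public Task<string> SayHello(string name)
    {
        return Task.FromResult($"Hello, {name}");
    }
}

From a client, you would obtain a reference with something like GrainClient.GrainFactory.GetGrain<IGreeterGrain>(userId) and simply await the call; the silo takes care of activating and locating the grain for you.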

Custom transports

In Chapter 4, Addressing Network Performance, we started with an introduction to TCP/IP and briefly mentioned UDP. We also covered TLS encryption and how to minimize the impact of secure connections while still reaping the performance benefits. UDP is simpler and quicker than TCP, but you either need to not care about reliable delivery (multiplayer games and voice/video chat) or build your own layer to provide this. In Chapter 8, The Downsides of Performance-Enhancing Tools, we highlighted StatsD, which uses UDP to avoid blocking latency while logging to a remote central server. There are alternatives to TLS if you aren't constrained to a browser, but if you're developing a web application, this will probably only apply inside your server infrastructure. For example, the WhatsApp messaging app uses Noise Pipes and Curve25519 from the Noise Protocol Framework (noiseprotocol.org) between the smartphone app and their servers, in addition to the Signal Protocol for end-to-end encryption. Using Noise Pipes instead of TLS increases performance because fewer round trips are required to set up a connection. Another option with similar benefits is the secure pipe daemon (spiped), as used by and created for the secure Linux backup software, Tarsnap. However, you do need to preshare the keys, but you can read more about this at www.tarsnap.com/spiped.html.
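As a small, hedged illustration of why UDP suits this style of logging, the following sends a StatsD-style counter without waiting for any acknowledgement; the host name and port are placeholders for this example:

using System.Net.Sockets;
using System.Text;

public static class Metrics
{
    private static readonly UdpClient Client = new UdpClient();

    public static void IncrementCounter(string name)
    {
        // StatsD counter format: "<metric name>:<value>|c"
        var payload = Encoding.UTF8.GetBytes($"{name}:1|c");

        // Fire-and-forget: UDP gives no delivery guarantee, so this never blocks on a slow server
        Client.Send(payload, payload.Length, "stats.example.com", 8125);
    }
}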


Advanced hashing

We covered hashing functions a fair amount in this book, especially in Chapter 6, Understanding Code Execution and Asynchronous Operations. This area is constantly advancing, and it is useful to keep an eye on the future to see what's coming. Although today it is reasonable to use a member of the SHA-2 family for quick hashing and PBKDF2 for slow (password) hashing, this is unlikely to always be the case. For fast hashing, there is a new family of algorithms called SHA-3, which should not be confused with SHA-384 or SHA-512 (which are both in the SHA-2 family). SHA-3 is based on an algorithm called Keccak, which was the winner of a competition to find a suitable algorithm for the new standard. Other finalists included Skein (skein-hash.info) and BLAKE2 (blake2.net), which is faster than MD5 but actually secure. An algorithm called Argon2 won a similar competition for password hashing (password-hashing.net). To see why this matters, you can visit haveibeenpwned.com (run by .NET security guru Troy Hunt) to see whether your details are in one of the large number of data breaches. For example, LinkedIn was breached and didn't use secure password hashing (only an unsalted SHA-1 hash). Consequently, most of the plain text passwords were cracked. Therefore, if a LinkedIn account password was reused on other sites, then these accounts can be taken over. It's a very good idea to use a password manager and create strong, unique passwords for every site. It is also beneficial to use two-factor authentication (sometimes also called two-step verification) if available, for example, by entering a code from a smartphone app in addition to a password. This is particularly useful for e-mail accounts, as they can normally be used to recover other accounts.
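As a reminder of what slow password hashing looks like in .NET, here is a hedged sketch using the built-in PBKDF2 implementation (Rfc2898DeriveBytes); the iteration count and output sizes are illustrative values that you should tune and expect to increase over time:

using System.Security.Cryptography;

public static class PasswordHasher
{
    public static byte[] HashPassword(string password, out byte[] salt)
    {
        // PBKDF2 with a random 16-byte salt and a deliberately high iteration count;
        // store both the salt and the hash, then recompute and compare at login
        using (var kdf = new Rfc2898DeriveBytes(password, saltSize: 16, iterations: 100000))
        {
            salt = kdf.Salt;
            return kdf.GetBytes(32);
        }
    }
}

When verifying a password, you would recompute the hash with the stored salt and compare the results in a way that doesn't leak timing information.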

Library and framework support

There have been some significant changes to .NET Core and ASP.NET Core between RC1 and RC2, more than normal for release candidates. Sensibly, many popular libraries and frameworks were waiting for RC2, or later, before adding support. Obviously, a book is a bad place to keep up with the changes, so the author has put together a website to display the latest compatibility information. You can find the ASP.NET Core Library and Framework Support list at ANCLAFS.com. If you would like to update anything or add a library or framework, then please send a pull request. The repository is located at github.com/jpsingleton/ANCLAFS and it includes lots of useful tools, libraries, frameworks, and more. We mentioned many of these earlier in this book, and the following sample is just a small selection of what is listed, because package support will grow over time:

Scientist.NET
FeatureSwitcher
FeatureToggle
MiniProfiler
Glimpse
Prefix
Dapper
Simple.Data
EF Core
Hangfire
ImageResizer
DynamicImage
ImageProcessorCore
RestBus
EasyNetQ
RabbitMQ Client
MassTransit
Topshelf
Nancy
SignalR
Orleans

The future

A quote often attributed to the physicist Niels Bohr goes as follows: "Prediction is very difficult, especially about the future." However, we'll have a go at this anyway, starting with the more straightforward bits. The official ASP.NET Core roadmap lists SignalR, Web Pages, and Visual Basic support shipping after the version 1.0 Release To Manufacturing/Marketing (RTM). After this, it is fairly safe to assume that features will be added to bring the Core line closer to the existing frameworks. This includes Entity Framework, which is currently missing some of the big features of the full EF, for example, lazy loading. There is also the move towards the .NET Platform Standard, to enhance portability across .NET Core, the .NET Framework, and Mono. For example, .NET Core 1.0 and .NET Framework 4.6.3 both implement .NET Platform Standard 1.6 (netstandard1.6). However, you probably don't need to worry about this unless you are writing a library. Refer to the documents at dotnet.github.io/docs/core-concepts/libraries if you are.

Microsoft has said it will listen to user feedback and use it to drive the direction of the platforms, so you have a voice. Even if you don't, then, as the code is all open source, you can help shape the future by adding the features that you want.

Further into the future is harder to predict, but there has been a steady stream of projects being open sourced by Microsoft, from the early offerings of ASP.NET MVC to the Xamarin framework for cross-platform app development. It's an exciting time to be working with C# and .NET, especially if you want to branch out from web development. The Unity game engine is now part of the .NET Foundation, and there are some interesting recent developments in Virtual Reality (VR) or Augmented Reality (AR) hardware. For example, Microsoft HoloLens, Oculus Rift, Samsung Gear VR, and HTC Vive are all unbelievably better than the basic VR that came out a couple of decades ago.

It's also a great time to be looking at IoT, which, while it may still be looking for its killer app, has so much more cheap and powerful hardware available. A Raspberry Pi Zero costs only $5, and it now supports an upgraded camera module. With a computer such as the Raspberry Pi 3, which offers almost desktop-class performance for $35, anyone can now easily learn to code (perhaps in C# or .NET) and make things, especially children.

Following the wisdom of Alan Kay: "The best way to predict the future is to invent it." So, get out there and make it! And make sure that you share what you've done.

Summary

We hope that you enjoyed this book and learned how to make your web applications high-performing and easy to maintain, particularly when you use C#, ASP.NET Core, and .NET Core. We tried to keep as much advice as possible applicable to general web app development, while gently introducing you to the latest open source frameworks from Microsoft and others.

Clearly, a topic such as this changes quite rapidly, so keep your eyes open online for updates. There is an announcements repository on GitHub that is worth watching at github.com/aspnet/announcements. Hopefully, many of the lessons in this book are generally good ideas, and they will still be sensible for a long time to come.

Always keep in mind that optimizing for its own sake is pointless, unless it's just an academic exercise. You need to measure and weigh the benefits against the downsides; otherwise, you may end up making things worse. It is easy to make things more complex, harder to understand, more difficult to maintain, or all of these. It's important to instill these pragmatic performance ideas into team culture, or else teams won't always adhere to them. However, above all, remember to still have fun!
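To make the "measure first" advice above concrete, here is a minimal, hand-rolled sketch (an illustration for this summary, not code from the preceding chapters; the class and method names are invented) that times an existing code path against a proposed replacement using System.Diagnostics.Stopwatch:

using System;
using System.Diagnostics;
using System.Text;

public static class MeasureFirst
{
    public static void Main()
    {
        const int iterations = 100000;
        var firstName = "Ada";
        var lastName = "Lovelace";

        // Baseline: the existing code path that looks like an optimization target.
        TimeSpan baseline = Time(() => firstName + " " + lastName, iterations);

        // Candidate: the proposed "faster" version. Only adopt it if the numbers
        // (and the extra complexity) justify the change.
        TimeSpan candidate = Time(() =>
        {
            var builder = new StringBuilder();
            builder.Append(firstName).Append(' ').Append(lastName);
            return builder.ToString();
        }, iterations);

        Console.WriteLine($"Baseline:  {baseline.TotalMilliseconds:F2} ms");
        Console.WriteLine($"Candidate: {candidate.TotalMilliseconds:F2} ms");
    }

    // Runs the supplied action many times and returns the total elapsed time.
    private static TimeSpan Time(Func<string> action, int iterations)
    {
        action(); // warm-up call, so JIT compilation isn't included in the timing

        var stopwatch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
        {
            action();
        }
        stopwatch.Stop();
        return stopwatch.Elapsed;
    }
}

For a couple of short strings you may well find that the StringBuilder "optimization" is no faster than simple concatenation, which is exactly the point: measure before you pay for extra complexity. When the result will drive a real decision, prefer a dedicated harness such as BenchmarkDotNet, mentioned earlier in the book, which handles warm-up, iteration counts, and statistical noise far better than a hand-rolled loop.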


Bibliography

This learning path has been prepared for you to explore ASP.NET Core and learn to use it in your web applications. It comprises the following Packt products:

Learning ASP.NET Core MVC Programming, Mugilan T. S. Ragupathi
ASP.NET Core and Angular 2, Valerio De Sanctis
ASP.NET Core 1.0 High Performance, James Singleton


    Thank you for buying

ASP.NET Core: Cloud-ready, Enterprise Web Application Development

About Packt Publishing

Packt, pronounced 'packed', published its first book, Mastering phpMyAdmin for Effective MySQL Management, in April 2004, and subsequently continued to specialize in publishing highly focused books on specific technologies and solutions.

Our books and publications share the experiences of your fellow IT professionals in adapting and customizing today's systems, applications, and frameworks. Our solution-based books give you the knowledge and power to customize the software and technologies you're using to get the job done. Packt books are more specific and less general than the IT books you have seen in the past. Our unique business model allows us to bring you more focused information, giving you more of what you need to know, and less of what you don't.

Packt is a modern yet unique publishing company that focuses on producing quality, cutting-edge books for communities of developers, administrators, and newbies alike. For more information, please visit our website at www.packtpub.com.

    Writing for Packt

    We welcome all inquiries from people who are interested in authoring. Book proposals should be sent to [email protected]. If your book idea is still at an early stage and you would like to discuss it first before writing a formal book proposal, then please contact us; one of our commissioning editors will get in touch with you. We're not just looking for published authors; if you have strong technical skills but no writing experience, our experienced editors can help you develop a writing career, or simply get some additional reward for your expertise.


    Index . .NET Compiler Platform 305 .NET Core 1.0 749 .NET Core benefits 879 projects 879, 880 .NET Framework 4.6.1 318

    5 500 Internal Server Error 50

    A Access denied Page 285 AccountsController about 691 Add request 693 adding 661, 664 BaseController 662 class, declaration 665 constructor, using 665, 666 Delete method 697 ExternalLogin 666 ExternalLoginCallBack 667, 670 Get request 692 logout 670 namespaces 665 Update method 695 action filters 60 action method securing, in Controller 298, 299 additional lists adding 407 multiple component instances 408, 409, 411 Address Resolution Protocol (ARP) 781 Advanced Encryption Standard (AES) 816 advanced hashing 997

    Advanced Message Queuing Protocol (AMQP) 929 Advanced SIMD 895 Advanced Vector Extensions (AVX) 895 aggregate functions 868, 869 ahead-of-time (AOT) compilation 880 Amazon CloudFront 805 Amazon Web Services (AWS) 210, 598, 760, 851 Angular 2 client adapting 494 GUI, updating 497 ItemService class, improving 494 Angular 2 login form root module file, updating 615, 616 styles, adding 614 UI and validation test 617, 618, 619 Angular 2 RC5 345 Angular 2 adding 337 API documentation, reference 555, 574 CDN, using 342 components 345 developer blog, reference 616 features 304 installing, NPM used 337, 340 modules 345 Typings, adding 340, 342 Angular Material reference 547 AngularJS 16, 98 Animated PNG (APNG) 836 Apache License reference 641 APDEX URL 962 API requests batching 862, 863, 864, 865, 866


    APIs 850 app, refactoring AppRouting scheme, adding 418, 419 HomeComponent, adding 420 master-detail strategy, revising 426 new components, adding 422 root module, updating 424 AppCache 915 Application Insights about 954, 955 URL 958 application layer caching about 925 database result set, caching 927 Redis 926 Application Pool configuring 741 Application Specific Integrated Circuits (ASICs) 892, 992 Application-Layer Protocol Negotiation (ALPN) 829 application configuration, for using identity 288 profiling 769 profiling, with Glimpse 770 profiling, with Visual Studio (VS) 773, 774, 775 ApplicationDbContext updating 644 ApplicationUser updating 644 appsettings.json moving, literal values to 646 AppSettings updating 731 Argon2 754, 997 ARM chips 991 Artificial Intelligence (AI) 992 ASP.Net 5 application creating 41, 43, 44 project structure 32, 33 ASP.NET 5 about 19 features 20 installation 28, 30, 31, 32 reference 19, 28 ASP.NET Core 1.0

    about 749 future 998, 999 ASP.NET Core application creating 93 deploying, in Azure 193, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 ASP.NET Core documentation reference 659 ASP.NET Core Identity 639 ASP.NET Core IIS publishing settings references 739 ASP.NET Core module installing, for Internet Information Services (IIS) 738, 739 ASP.NET Core revolution 305 ASP.NET Core web application project creating 312 package manager, adding 315 packages, installing 316 resource manager, adding 315 tsconfig.json file 326 tsconfig.json file, adding 324 typescriptServices.js file, upgrading 342, 343 ASP.NET Core web application, deploying in Linux environment .NET Core, installing in Linux machine 226, 227 about 210 Linux machine, creating 211, 214, 215, 216, 217, 218, 219, 220 PuTTY client, installing 220, 221, 222, 223, 224, 225 ASP.NET Core benefits 879 Kestrel 880 modifications 835, 836 projects 880 rewrites, handling 435 ASP.NET Identity 285 ASP.NET MVC 6 NuGet package installing, in application 45, 46, 48 ASP.NET MVC 6 about 8 unified MVC 18 Web API Controller 18


    ASP.NET MVC applications Bootstrap, using 171 Controller 37 Entity Framework, using 118, 126 ASP.NET MVC about 16 features 17 reference 10 ASP.NET Web Forms 15 ASP.NET Web Pages 16 ASP.NET about 14 programming models 15 versions 761, 762 AspNet.Security.OpenIdConnect.Server (ASOS) 639 AspNetCore.Identity installing 584 package, adding 585, 586 project classes, updating 586 assembler 751 associated action methods creating 291 Async Data Requests 362 asynchronous (async) methods 789 asynchronous operations about 789, 790 background queuing 790 Hangfire 791 tools, using 790 attribute routing 369 attribute-based routing 152, 153, 154 Augmented Reality (AR) 999 Authenticated Encryption with Additional Data (AEAD) 828 authentication methods about 594 Session-Based Authentication Flow 594 Token-Based Authentication Flow 596, 597 authentication mode Azure Active Directory 584 individual user accounts 584 no authentication 584 selecting 584 signatures 598

    two-factor 598 Windows authentication 584 authentication about 284, 579, 580, 581 third-party authentication 581 AuthHttp Wrapper 627 authorization filters 60 authorization rules adding 629 client, adapting 629, 631 client, testing 632, 634 server, protecting 634, 635 user ID, retrieving 635, 636 authorization test 636 authorization about 285, 579, 582 third-party authorization 582 AuthService 698 AuthService component about 620 AppModule, updating 622 login test 625, 626 LoginComponent, updating 623 auto-migration feature 592 AutoMapper URL 793 Availability Zone (AZ) 851 avalanche effect 914 Azure AD Authentication Library (ADAL) 584 Azure Application Gateway 843, 850 Azure CDN 805 Azure deployment prerequisites 191, 192, 193 Azure Load Balancer 850 Azure ML about 992 URL 993 Azure Service Bus 929 Azure SQL Database 926 Azure Tables 996

    B background queuing 790 bandwidth 753 Base64 URL encoded 914


    bcrypt 754 BenchmarkDotNet URL 883 big data 993, 995 BigQuery 994 bitmap (BMP) 808 BizSpark 759 BLAKE2 URL 997 Bloom filters about 886, 887 URL 888 blue-green deployment 877 Bond framework URL 895 Bootstrap forms about 164 inline forms 164 Bootstrap grid system reference 559 Bootstrap about 162, 314 adding 548 buttons 169 contextual colors 170 disadvantages 550 form-styling documentation, reference 577 forms 163, 164 GitHub projects, reference link 550 grid system 162 native directives module, selecting 549, 550 ng2-bootstrap, installing 550, 551 reference 162 table CSS classes 166 using, in ASP.NET MVC application 171 Bootswatch references 549 Bower about 35, 316 installing 173, 174 BPG about 838 URL 838 British Indian Ocean Territory (BIOT) 846 British Telecom (BT) network 854

    Brotli about 833 URL 833, 834 browsers caching 841 Chrome 775, 776, 777 Firefox 777 built-in collections URL 886 Bundler & Minifier Visual Studio extension reference 272 bundling 834, 835 buttons about 169 size, modifying 170

    C cache busting 914 Cache Digests 888 Cache-Control header 276 caching about 275, 840 application layer caching 925 client-side caching 276, 277 in proxy servers 843 in server 842 in web browser 841 limitations 909, 910 reference link 843 response caching 277 web caching 910 cargo cult programming avoiding 938 Cascading Style Sheets (CSS) about 158, 160, 161, 530 reference 533 sample 531 centralized logging 960 Certificate Authority (CA) 818 certificate pinning 843 ChaCha/Poly 827 ChaCha20 827 characteristics, application consistency 161 customizable 161


    easy start 161 mobile-friendly 161 responsive 161 checksum 888, 889 Chrome using 775, 776, 777 class code 520, 521 client code, setting up about 343 bootstrap file 347 component file 343, 344 index.html file 350, 351 module file 345 module loader configuration file 347 Client Hints URL 840 client, Facebook App AppComponent, updating 675 AuthService, updating 674 LoginComponent, updating 672 zones 677 client-side routing @angular/router library, adding 414, 415, 416 about 414 app, refactoring 417, 418 PathLocationStrategy, versus HashLocationStrategy 416 client-side validation 133, 139, 140 client-side of web application 11 CloudFlare about 805 URL 805 CloudFront 269 Cloudinary URL 807 CloudWatch URL 960 Code Lens 23 code-first approach about 448 cons 449 pros 449 collection benchmarks 883, 884, 885, 886 Common Intermediate Language (CIL) 880

    Common Table Expression (CTE) 870 competitive SPA features 306 complexity reduction architecture 940, 941 frameworks 939 refactoring 944, 946 complexity about 938 managing 937 reducing 939 Component Object Model (COM) 839 components, User Registration logic menu, updating 708 Register link, adding 706 root module, updating 705 route, implementing 705, 706 styles, defining 707 compression artifacts 808 compression about 832 bundling 834 image optimization 836 lossless compression algorithms 833, 834 minification 834 concatenation aggregator 802 console applications creating, with Entity Framework 103 Content Delivery Network (CDN) about 263, 548, 805, 843, 844, 856, 923 using 269 Continuous Integration (CI) 760, 938 controller level route attribute 154 Controller/Action/Id pattern 38 Controller action method, securing 298, 299 creating 48, 49, 50, 51, 291 IActionResult 52 role, in ASP.NET MVC applications 37 routing values, passing in HTTP action verb 155 convention-based routing about 147, 148 examples 148, 149, 151, 152 Coordinated Universal Time (UTC) 782


    core requirements, sample single-page application project Angular components 311 authentication 311 data model 311 routing 311 Web API controllers 311 CoreCLR 305, 879 CoreFX 879 CoreRT 880 Cross Origin Resource Sharing (CORS) 815 cross-database joins 876 CSS files minifying 272, 273 Curve25519 996 custom network diagnostics tools building 858, 859, 860 custom Tag Helpers creating 87, 88, 90 custom transports 996 Customer Experience Improvement Program (CEIP) 780 customized performance measurement writing 781, 784 Cyclic Redundancy Check (CRC) 888

    D Dapper about 845 URL 796 using 796 Dart 323 data access layer models 793 Data Annotation attributes MaxLength 135 MinLength 135 range 135 RegularExpression 135 required 135 used, for updating View models 135 data flow 362, 363 data mart 868 Data Model 441 Data Model entity classes 448 data modeling approaches

    about 445 code-first approach 448 database-first approach 447 model-first approach 445 selecting 450 data provider testing 488 Data Requests 363 data seed strategy DbSeeder class, creating 469, 475 DbSeeder, adding to Startup.cs 477 implementing 468 data structures about 881 Bloom filters 886, 887 collection benchmarks 883, 884, 885, 886 dictionaries 882, 883 lists 881, 882 URL 886 data warehouse 868 data passing, from Controller to View 57, 58 database (DB) query 788 Database Administrators (DBAs) 866 database configuration, SQL Server Management Studio about 726 application settings file, creating for production 730, 731 authentication mode, modifying 727 external providers, updating 732 launchSettings.json, updating 733, 734 login, mapping to database 728, 729 OpenGameList database, adding 727 OpenGameList login, adding 727 publishOptions, configuring 732 SQL Server connection string, adding 729, 730 Startup class, checking 733 Database Engine Tuning Advisor (DETA) 867 database engine appsettings.json, updating 464 selecting 463, 464 database seeding about 479, 654 ItemsController, updating 480


    database-first approach about 447 cons 448 pros 448 database creating 465 EF tools, configuring 466 initial migration, adding 466, 467 migration 127, 129 Startup.cs, updating 465 updating 592, 593, 594 databases 849 DB layer about 280 classic ADO.NET, using 281 correct column size of database columns, using 282 correct column type of database columns, using 282 correlated subqueries, avoiding 282 indices, fine tuning 282 queries generated, ORM 281 required data, returning 281 DB operations data, inserting 871, 873 database, tuning 867, 868 enhancing 875, 876 Globally Unique Identifier (GUID) 874 performing 866, 867 reporting 868 DbContext database initialization strategies 462, 463 setting up 460, 462 DbSeeder class adding, to Startup.cs 477 constructor 475 creating 469 exceptions, handling 478, 479 private members 475 public methods 475 seed methods 476 utility methods 476 DbSeeder literal values, moving to appsettings.json 646 updating 645

    debugging tools using 764 DEFLATE 833 DELETE method about 240, 260 Delete Web API method, use for deleting resource 260 Web Repository layer code, used for deleting employee data 260 Dependency Injection (DI) about 250, 251, 252, 253, 255, 256, 257, 258, 259, 959 DELETE method 260 dev tools 775 DevOps 866 dictionaries 882, 883 Diffie-Hellman (D-H) 818 Dig 856 Discrete Cosine Transform (DCT) 838 disk latency 788 disks 847, 848 distributed debugging about 951, 952 logging 952 statistics 961, 962 dnsmasq 856 Do it yourself approach about 545 cons 546 pros 545 versus framework-based styling 545, 547 docs advantages 529 Domain Name System (DNS) 777, 812 domain-specific language (DSL) 922 Don't Repeat Yourself (DRY) 661 DotnetGD 840 downsides, Session-Based Authentication Flow cross-domain issues 596 memory issues 596 scalability issues 596 security issues 596 DuckDuckGo 861 dummy data provider 379 dynamic typing 906


    DynamicImage URL 839

    E e-mail confirmation 694, 695 ECMAScript 6 (ES6) 915 Edit Account view about 712 final test 719, 720, 721 EF Core 880 Elastic Compute Cloud (EC2) 853 Elastic Load Balancer (ELB) 850 Elasticsearch URL 781 Electronic Frontier Foundation (EFF) 815 Elliptic Curve Diffie-Hellman Ephemeral key exchange (ECDHE) 823 Elliptic Curve Digital signature Algorithm (ECDSA) 826 email using 297 engine firing up 742, 744 entities comments 453, 454, 455 creating 450 items 450, 451, 453 users 456, 457 Entity Data Model (EDM) 442 Entity Designer Model XML (EDMX) 445 Entity Framework (EF) about 34, 101, 102, 760, 765, 789 commands, installing 109 independent of ASP.NET MVC 102 migration, creating 111, 113, 114, 115 record, deleting 117 record, updating 116 SaveChanges method, working 116 used, for creating console applications 103 using, in ASP.NET MVC applications 118 Entity Framework 6 308 Entity Framework 7 NuGet package installation about 103 NuGet Package Manager, using 104, 105, 107 Package Manager Console, using 107, 109

    Entity Framework command installation about 109 DbContext class, creating 110 Model classes, creating 110 Entity Framework Core 1.0 792 EntityFramework Core installing 442, 443, 444, 445 error logging 952 Error Logging Middleware (ELM) 952 Error Logging Modules and Handlers (ELMAH) 952 error-correcting code memory (ECC) 992 Ethernet 804 exception filters 60 exceptions 906, 907 Exchangeable image file format (Exif) 837 Extensible Markup Language (XML) 894 extension methods 605 external authentication providers about 654 Facebook 655 Google 680, 681, 682, 683, 684 OAuth2 authorization flow 655 troubleshooting 687 Twitter 684, 685, 686 extreme programming (XP) 307, 937

    F Facebook App about 655 AccountsController, adding 661 client, configuring 672 creating 656, 657, 658 Facebook middleware, adding 659 ItemsController, updating 670, 671 keys, storing in appsettings 658, 659 testing 678 Facebook Developer reference 656 favicon reference link 554 features, ASP.NET MVC convention 17 generated HTML control 18 Separation of concerns 18


    unit testing, better support 18 features, ASP.NET client-side development, better support 20 cross-platform support 20 Fiddler about 240, 242, 243, 244, 245, 246, 247, 248 reference 241 URL 779 used, for monitoring HTTP 779 Field-Programmable Gate Arrays (FPGAs) 892, 992 File System Publish Profile 736 File Transfer Protocol (FTP) 812 file-based project structure .NET core support 33 controllers 34 full .NET support 33 migrations 34 models 34 other files 35 Project.json package 34 views 34 wwwroot folder 35 filters about 58, 59 action filters 60 authorization filters 60 exception filters 60 resource filters 60 result filters 60 Firefox using 777 FizzBuzz URL 937 flash memory 847 fluent lambda functions 881 framework-based approach about 546 cons 547 pros 546 Free Lossless Image Format (FLIF) about 838 URL 839 FTP Publish profile 736 full routing test

    performing 432, 433, 435 full-stack developer 866

    G Garbage Collector (GC) 752, 879 GET method 238 Gigabit 804 GIMP URL 807 GitHub Pages URL 803 Glimpse about 770 code, adding 771 installing 770 web application, executing 771, 772, 773 Globally Unique Identifier (GUID) 874, 875 Golomb-coded sets (GCS) 888 Google 680, 681, 682, 683, 684 Google Analytics 961 Google Identity Provider 581 googol 853 grains 996 Graphics Interchange Format (GIF) 808, 836 Graphics Processing Units (GPUs) 892, 992 GraphicsMagick 840 Graphite 960 Greenwich Mean Time (GMT) 782 Grunt about 749 using, instead of Gulp 334 GUI Add new menu 497, 500 Delete command 500, 501 delete test menu 510 menu, testing 502, 503, 504, 505 new test menu, adding 506, 507 new test, adding 505 test, deleting 509 test, updating 507, 508, 509 Update command 500, 501 updating 497 Gulp about 329, 749 dependent tasks 333

    [ 1009 ] WOW! eBook www.wowebook.org

    script, updating 539 working with 329, 330, 332, 333 Gulpfile 330 gulpfile.js 35 gzip 833

    H Hadoop 994 Hangfire about 791 URL 791 Hard Disk Drive (HDD) 756, 847 hardware approach changes, scaling 758, 759 importance 756 pragmatic solutions 803 storage access speeds 756, 758 using 992 Hash-based MAC (HMAC) 889 Hashbang technique 306 hashing about 888, 889 benchmarks 889, 890, 891, 893 HDInsight 994 high DPI 807 high performance culture achieving 946 application, rewriting 949, 950 blameless culture 947 budget, planning 950 intellectual dishonesty 947, 948 shared values 950 slow down 948, 949 HipHop Virtual Machine (HHVM) 943 HMAC-SHA256 889 HTML doctype 174, 177 HTML pages 532 HTML5 Application Cache (AppCache) 841 HTTP action verbs routing values, passing 155 HTTP encryption delay diagnostics 820, 821, 823, 825 key exchange 817, 818, 819, 820 performance tweaks 825, 827 HTTP Public Key Pinning (HPKP) 843

    HTTP Strict Transport Security (HSTS) 816 HTTP verbs 238 HTTP/1.1 814 HTTP/2 814, 829, 830 HTTP/HTTPS implementation standards 580 Hugo about 803 URL 803 Hyper Text Markup Language (HTML) about 158, 160, 161 generating 71, 72 helpers 73, 74, 75 Hyper Text Transfer Protocol (HTTP), methods about 12 GET method 13 POST method 14 Hyper Text Transfer Protocol (HTTP) about 9, 814 basics 237 browsers 775 DELETE method 240 encryption 816 Fiddler 240 GET method 238 headers 814 headers, caching 912, 913 methods 815 monitoring 775 monitoring, with Fiddler 779 POST method 239 PUT method 239 request-response pattern 9 stateless nature 10 status code 815 WebSockets 831 hyper-threading 758, 899 Hyper-V 775 HyperLogLog (HLL) 962

    I identity using, by application configuration 288 IIS instance configuring 739, 740 image optimization


    about 836 image formats 838, 839 images, resizing 839, 840 JPEG 837 PNG 836, 837 ImageEngine 840 ImageMagick URL 807 ImageProcessor URL 840 ImageProcessorCore 840 ImageResizer URL 839 imgmin URL 838 immutable server 849 implementation class considerations 249 improvement tips, generic performance about 282 Response.Redirect method, avoiding 283 string builder, using 283 input/output (I/O) about 846 APIs 850 categories 846 databases 849 disks 847, 848 network diagnostics tools 851 virtual file systems 848 installations ASP.NET 5 28 Visual Studio Community 24 Integrated Development Environment (IDE) about 21, 760 purpose 21 integrated logging 959 Intel style processors 991 IntelliSense 75 IntelliTrace 774 internal membership provider 584 International Organization for Standardization (ISO) 894 Internet Engineering Task Force (IETF) 9 Internet Information Services (IIS)

    about 182, 734, 842, 850, 922 ASP.NET Core module, installing for 738, 739 configuring 738 Internet of Things (IoT) 760 internet protocols about 812 HTTP 814 Transmission Control Protocol/Internet Protocol (TCP/IP) 812, 813 Internet Service Provider (ISP) 853 Inversion of Control 250 Ion URL 895 IP address ranges URL 853 IPv6 858 issues caching 354 Item class 389 Item Detail View Display mode 565, 566, 568 Edit mode 568, 571, 573, 574, 575, 577 ItemDetail component ItemDetailView component, adding 512, 514 outdated references, tracking 517, 518 refactoring, into ItemDetailEdit 514 root module, updating 515 routes, updating 515 splitting 510, 511, 512 tab menu, implementing 518 testing 524, 525, 526, 527 ItemDetailComponent class @angular/forms library, adding 402 about 400 dots, connecting 404 root module, updating 403 ItemDetailView component adding 512, 514 ItemListComponent class about 393 implementation test 396, 397, 398 ItemsController about 367, 369 mapping, implementing 482 TinyMapper, installing 481


    ItemService class about 390 improving 494 ItemViewModel 366

    K

    J Java Message Service (JMS) 929 JavaScript 324 JavaScript files minifying 272, 273 JavaScript Object Notation (JSON) 179, 180, 894 JavaScript Task Runner 316 JavaScriptServices 358 Jekyll 803 JetBrains 775 Jil URL 894 JPEG 808, 837 JPEG Archive about 838 URL 838 JPEG2000 838 jQuery 16 JSON Responses 363 JSON web token authentication Angular 2 login form 611, 612, 613 AuthHttp wrapper 627 authorization rules, adding 629 AuthService component 620 implementing 599 JWT provider 600 middleware, adding to pipeline 609 JSON Web Tokens (JWT) about 598 reference 599 just-in-time (JIT) compiler 879 JWT provider about 600 constructor 602 full source code 606 private members 601 private methods 603 public methods 603 static members 601, 602

    Kestrel 880 Kestrel test 745, 746 key features, competitive SPA efficient routing 306 no server-side round-trips 306 performance and flexibility 306 key stretching 754 Kibana 781 Knockout 98 KnockoutJS 16, 314 Kodi 991

    L lag 788 Language-Integrated Query (LINQ) 881 latency about 753, 788 disk latency 788 example 789 network latency 788 Latest News Partial View 77 layout \_Layout.cshtml, creating 68 _ViewStart.cshtml, creating 68 about 66 building 67 page specific View, adding 69 Lempel-Ziv-Markov chain Algorithm (LZMA/LZMA2) 833 LESS configuration about 539 Gulp script, updating 540, 543 LESS file, adding 539, 540 stylesheet, linking 543 testing 544 LESS about 530, 532 advantages 532 alternative, Systematically Awesome Style Sheets (Sass) 538 Cascading Style Sheets (CSS) 531 configuring 539 directives, importing 534


    docs and support 538 extend pseudo-class 537 mixins 536 nested selectors 535 reference link 538 style sheet languages 530 using 532, 533 variables 533 LibGD URL 840 libuv 880 licensing costs 759 lifetime services Scoped 251 Transient 251 Line of Business (LoB) applications 803 LINQ commands 798 lists 881, 882 literal values moving, to appsettings.json 646 Local Area Network (LAN) 756 logging about 952 Application Insights 955, 958 centralized logging 960, 961 error logging 952 integrated logging 958 URL 960 Login Page 285 Logstash about 781 URL 960 lossless compression algorithms 833, 834 Low Earth Orbit (LEO) 856

    M Machine Learning (ML) 992 about 992 main navigation bar about 552 active route, detecting 555, 558 Man in the Middle attack (MitM) 843 MapReduce 994, 995 master-detail binding about 389

    Item class 389, 390 ItemDetailComponent class 400, 401 ItemListComponent class 393, 395, 396 ItemService class 390, 391, 393 testing 406 master-detail strategy ItemDetail route, adding 426 ItemDetailComponent, updating 429 ItemListComponent, changing 427 revising 426 max-age response header 276 MD5 754 media queries reference 718 Message Authentication Code (MAC) 889 Message Digest 5 (MD5) 888 message queue (MQ) about 927 coffee shop analogy 928 publish-subscribe 929 styles 929 unicast 929 using 791 microservices architecture about 862 versus monolith architecture 941, 942, 943, 944 Microsoft Azure about 182 ASP.NET Core application, deploying in 193, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209 reference 183 signing up to 183, 184, 185, 186, 187, 188, 189 Microsoft Developer Network (MSDN) 759 Microsoft Message Analyzer 779 Microsoft Message Queuing (MSMQ) 760, 929 Microsoft publishing and deployment documentation reference 734 Microsoft SQL Server (MS SQL) 868 migration process 111 migrations 467, 468 MiniBench URL 883


    minification 835 Minimum Viable Product (MVP) 950 MiniProfiler about 768 URL 769 MIT 759 Model binding 100, 101 model state verifying, by controller action method updation 137, 138, 139 Model View Controller (MVC) 308, 749, 880 model-first approach about 445 cons 447 pros 446 Model-View-Controller (MVC) pattern about 16 Controller 17 Model 16 View 16 Model about 92, 98 adding 55, 56 data flow 99, 100 specific, to View component 98 ViewModels 98 module loader configuration file dynamic module loader, using 349 Moore's Law 758 mozjpeg about 838 URL 838 MSDN URL 876 multi-page application (MPA) 307 MVVM 308

    N Native Image Generator (NGen) tool 880 native tool chain 990 native web application Publish profile, creating 735 publishing 734 Native Web Applications (NWAs) 307 navigation menu

    account route, connecting with 717, 718 styles, adding 718 NEON 895 NetMQ 929 network diagnostics tools about 851 custom network diagnostics tools, building 858, 859, 860 Nslookup 856, 858 Ping 851, 852 Tracert 853, 855 network latency 788 Network Monitor (Netmon) about 779 Microsoft Message Analyzer 779 Wireshark 779, 781 new ASP.NET 5 project creating 229, 230, 231, 232, 233 new ASP.NET Core configuration pattern 355 new components, app refactoring AboutComponent 422 adding 422 LoginComponent 423 PageNotFoundComponent 423 New Out Of the Box Software (NOOBS) 991 NGINX 805 Nginx server configuring 233, 234, 235 NHibernate 281 Nippon Telegraph and Telecom (NTT) 854 Node Package Manager (npm) 35 Node.js 749 Noise Pipes 996 Noise Protocol Framework URL 996 Non-Functional Requirement (NFR) 750 NPM about 316, 327, 328 using 327 Nslookup 856, 858 NuGet 316 NuGet package manager 641 NuGet packages, adding to project HTTP request pipeline, testing 319, 320, 321, 322


    methods, for adding 316 Newtonsoft.Json 322, 323 project.json 317, 318 startup.cs 318, 319 number of HTTP requests, reducing about 265, 266, 267, 268 Content Delivery Network (CDN), using 269 CSS stylesheets, using 270 GZip compression, using 269 JavaScript, using 270

    tracking 517, 518 oversized images image format, optimizing 808, 809 optimizing 805, 806 resolution, optimizing 806, 807

    P

    O OAuth 582 OAuth2 authorization flow about 655 reference 656 OAuth2/OpenID connect servers about 640 reference 640 Object Relational Mapping (ORM) 101, 281, 442, 760 Online Certificate Status Protocol (OCSP) 827 Open Web Interface for .Net (OWIN) 769 OpenGameList about 310 reference link 554 OpenGameListWebApp about 313 conclusion 688 OpenIddict ApplicationDbContext, updating 644 ApplicationUser, updating 644 database seeding 654 DbSeeder, updating 645 installing 640, 641, 643 migration, adding 644 MyGet, adding to package source 641, 642 reference 641 Startup class, configuring 649 web.config rewrite rules, updating 653, 654 Oracle PL/SQL 868 Orleans about 996 URL 996 outdated references

    package source MyGet, adding 641, 642 paging implementing 798, 799, 800, 801, 802 Paint.NET URL 806 parallel programming about 896 limitations 903 parallel benchmarking 901, 903 parallel LINQ 900, 901 Task Parallel Library (TPL) 897, 900 parity check 888 Partial View about 76, 77, 78 calling 79 password hashing about 754 URL 997 Password-Based Key Derivation Function 2 (PBKDF2) 754, 889 PBKDF2 algorithm 782 Perfect Forward Secrecy (PFS) 818 performance optimization via slow processes 754 performance problems about 751 issues, identifying 755 language considerations 751, 752 measurement 755 planning 755, 756 types 752, 753 performance as feature 750, 751 importance 754 PHP 532 Ping 851, 852 Piwik 961


    Platform Invoke (PInvoke) 991 PNGOUT tool 837 POCO (Plain Old C# Objects) 92 points of presence (PoP) 923 Poly1305 Message Authentication Code (MAC) 827 Portable Network Graphics (PNG) 808, 836 POST method 239 PowerShell command 644 pragmatic solutions, hardware desktop example 803, 804 for web applications 804, 805 prerequisites, for development of web application addition of items ease 22 build/publish 22 code editor 21 Intellisense 22 templates 22 presentation layer code and structure 530 Procedural Language / PostgreSQL (PL/pgSQL) 868 processor architecture 991 product owner expectations about 307, 308 adaptability 309 early releases 308 fast completion 308 GUI over backend 308 programming models, ASP.NET ASP.NET MVC 16 Web Forms 15 Web Pages 16 project classes, AspNetCore.Identity ApplicationDbContext.cs 586 ApplicationUser.cs 587, 588 DbSeeder.cs 589, 590 Startup.cs 588, 589 updating 586 project structure, ASP.Net 5 application about 32 file-based project 33 project.json file about 181 ApplicationDbContext class, adding 287

    ApplicationUser class, adding 287 dependencies node 181 frameworks node 182 relevant dependencies, adding 286 protobuf-net 895 Protocol Buffers 894 proxy servers about 805, 922 caching 843 Content Delivery Networks (CDNs) 843, 844 Varnish 922 pseudorandom number generator (PRNG) 961 Public Key Infrastructure (PKI) 818 Publish profile, native web application creating 735 File System Publish profile 736 FTP Publish profile 736 publish-subscribe (pub/sub) 925 push notifications 831 PUT method 239 PuTTY client download link 221

    Q Quality of Service (QoS) 813 Qualys SSL Labs URL 820 Quick UDP Internet Connections (QUIC) 830

    R RAID SSDs 804 random number generator (RNG) 961 Raspberry Pi 856 Raspbian 991 Razor 880 Razor View engine about 61, 62 programming 62 variables 62, 64 Real Time Communication (RTC) 831 Redgate ANTS 775 Redis about 791, 926 URL 926 Redundant Array of Independent Disks (RAID) 848


    ref-vars 574 reflection 904 regular expression (regex) 703, 904 about 905 Relational Database Service (RDS) 926 relationships defining 457, 458, 459 Release to manufacturing (RTM) 832 relevant dependencies adding, to project.json file 286 Remote Desktop Protocol (RDP) 851 reporting about 868 aggregate functions 868, 869 sampling 871 Representational State Transfer (REST) 815, 940 request pipelining 814 resource filters 60 Responsive Web Design reference link 545 result filters 60 Retina 807 revolutions per minute (rpm) 847 rewrites handling, in ASP.NET Core about 435 causes 436 fix, implementing 437, 438 issue, displaying 436 outcome, validating 439 RFC 2822 703 RFC 7725 URL 815 RFC2898 891 root module updating 515 Roslyn 305 round-trip time (RTT) 851 Route 53 861 route attribute controller level 154 Route Constraints 156 Router class 555 routes about 371 adding 375, 378

    handling, in ASP.NET Core 373, 374 updating 515 routing values passing, in HTTP action verbs in Controller 155 routing about 38, 39, 40, 147 attribute-based routing 152, 375 convention-based routing 147, 374 defining 371 three choices, for handling routes 375 through ages 371, 372 RSA 817 Ruby on Rails 749 RyuJIT 305, 880

    S sample single-page application project about 309 core requisites 311 vision 310 scaffolding about 48 sample 358, 359 tools, using 357 Scalable Vector Graphics (SVG) 809 scientific approach implementing 785 multiple changes, avoiding 785 repeatability 785 Scratch 991 Search Engine Optimization (SEO) 815 Secure Hash Algorithm (SHA) 889 Secure Page 285 secure pipe daemon (spiped) 996 Secure Shell (SSH) 215, 851 Secure Sockets Layer (SSL) 812 security 284 Select N+1 problems about 792, 793, 795, 796 paging 802 paging, implementing 798, 799, 800, 801 static site generator 802, 803 Sematic Versioning (SemVer) about 962 URL 962


    Separation of Concerns (SoC) 530, 939 serialization about 894, 895 reference link 894 Server Name Indication (SNI) 819 server-side validation about 132, 134 controller action method updation, for model state verification 137, 138 View models updation, for display of validation error message 136 View models, updating with Data Annotation attribute 135 server-side of web application 11 server caching 842 Service Oriented Architecture (SOA) 940 Service Set Identifier (SSID) 891 service workers about 841, 915 example 916, 918, 922 reference link 915 ServiceStack.Text 894 session resumption 826 Session-Based Authentication Flow about 594, 595 downsides 596 SHA-3 997 SHA1 754 SHA256 754 shards 829 Shared Dictionary Compression for HTTP (SDCH) 833 signatures 598 silo 996 Simple Mail Transfer Protocol (SMTP) 779, 812 Simple Object Access Protocol (SOAP) 894 Simple Queue Service (SQS) 929 Simple Speed Tester URL 883 simulation 876, 877 Single Instruction Multiple Data (SIMD) CPU instructions 895, 896 single item

    dealing with 379, 380, 384, 385, 386 Single Responsibility Principle (SRP) 939 single-page application about 306, 307 product owner expectation 307 Skein URL 997 slow-start algorithm 813, 814 SMS services using 297 Socket.IO 831 Solid State Drive (SSD) 214, 756, 788, 847 Spark 994 SPDY 829 sprites 829 SQL Azure 868 SQL injection 795 SQL Server 2016 Express Edition installing 724 reference 723 SQL Server Management Studio (SSMS) about 725, 765 database, configuring 726 reference 725 SQL Server Profiler about 765 MiniProfiler 769 query, executing 767 using 765, 766 SQL Server switching to 723 stale caches managing 962 Startup class configuring 649 static site generator 802, 803 StatsD about 996 URL 960 sticky sessions 942 storage access speeds 757, 758 Storage Area Network (SAN) 788, 845 Storm 994 Streaming SIMD Extensions 2 (SSE2) 895 string concatenation 905


    styles, applying conclusions 577 Item Detail View 565 Item Display mode 565 main navigation bar 552 Welcome View 558, 564 styles about 522, 524 applying 552 synchronous operations 906 System on a Chip (SoC) 991 system requirements, for Visual Studio 2015 installation Visual Studio Community 2015 23 Visual Studio Enterprise 23 Visual Studio Professional 23 Visual Studio Test Professional 23 Systematically Awesome Style Sheets (Sass) 538 SystemJS module loader library 347

    T tab menu about 518 class code 520, 521 implementing 518 styles 521, 524 template 519 table CSS classes about 166 bordered tables 167 context classes 168 hover tables 167 striped tables 166 Tabular Data Stream (TDS) 779, 869 Tag Helpers about 84 custom Tag Helpers 87 Tarsnap about 996 URL 996 Task Parallel Library (TPL) 896 about 897, 900 task runner about 352 working 352, 353, 354

    Team Foundation Server (TFS) 760 TechNet URL 876 Telerik JustTrace 775 template 519 Tensor Processing Unit (TPU) 893, 992 TensorFlow URL 992 Test-Driven Development (TDD) 784 testing 876, 877 thick client 804 thin client 804 third-party approach advantages 583 downsides 583 third-party authentication 581 third-party authorization 582 thread 790 three-tier architecture 804 Time To First Byte (TTFB) 776 TinyMapper installing 481 TLS handshake 819 Token-Based Authentication Flow 596, 597 tools about 759 alternative tools 760 for application profiling 769 for customized performance measurement 781, 784 for monitoring HTTP 775 network monitors 779 SQL 765 using 764 Top Level Domain (TLD) 846 topic references 577 Tracert 853, 855 Traffic Manager 861 Transact-SQL (T-SQL) 868 Transmission Control Protocol/Internet Protocol (TCP/IP) about 812, 813 slow-start algorithm 813, 814 Transport Layer Security (TLS) 812 Trust on First Use (TOFU) 843


    tsc 326 Twitter about 684, 685, 686 permissions request, elevation 686, 687 reference 686 two-factor authentication mode 598 two-tier architecture 804 two-way data binding about 412 disabling 413 performing 413 TypeScript 323, 749

    U UI design rules 530 UI layer 265 unicast 929 Unity 250, 999 Universally Unique Identifiers (UUIDs) 874 User class 698 User Datagram Protocol (UDP) 812 user interface (UI) 789 User Registration logic AccountsController 691 AuthService 698 components, connecting 705 implementing 690 Login view 709 User class 698 User Registration view 710, 711 UserEditComponent 699, 702, 704, 705 UserViewModel 691 UserEditComponent about 699, 702, 704 adapting 713, 715, 717 UserViewModel 691

    V validation error message displaying, by updation of View model 136 validation about 132 client-side validation 132, 134 server-side validation 132, 134 variables, Razor View engine

    about 62 for loop 64 foreach loop 65 if condition 65 while loop 65 Varnish Configuration Language (VCL) 922 Varnish about 805, 922, 923 URL 922 View component about 79 creating 80 View engine 61 view models 793 View adding 53 creating 292 data, passing from Controller to 57, 58 ViewModels about 98 creating 290 role 364 virtual file systems 848 Virtual Machine (VM) 756, 775 Virtual Reality (VR) 999 Visual Studio (VS) about 773 system requirements, for installation 22 system requirements, reference link 22 used, for application profiling 773, 774, 775 Visual Studio 2015 Update 2 reference 191 Visual Studio 2015 versions 22 Visual Studio Community 2015 installation 24, 25, 26, 27 reference 23, 24 Visual Studio Controller templates 308

    W waterfall method 750 Web API 2 761 Web API about 19, 880 add items, to 490


    Angular 2 client, adapting 494 delete items, to 490 update items, to 490 updating 491, 494 web application client-side 11 HTTP protocol 9 implementation 143, 146 server-side 11 using, steps 264 working 9 web caching about 910 background 911, 912 cache busting 914 considerations 924 Content Delivery Network (CDN) 923 HTTP headers 912, 913 proxy servers 922 service workers 915 web servers 922 Web Pages 880 web servers about 922 Internet Information Services (IIS) 922 web workers 915 web.config rewrite rules, updating 653, 654 web/application layer about 279 asynchronous logging, using 280 DB layer 280 no business logic, in Views 279 WebPack 358 WebSockets 831 Welcome View

    building 365 Wi-Fi Protected Access II (WPA2) 891 Windows 10 IoT Core 991 Windows Communication Foundation (WCF) 102, 894 Windows Imaging Component (WIC) 839 Windows Presentation Foundation (WPF) 762 Wireshark URL 779 using 779, 781 WordPress 802 workstation package manager 315 packages, installing 316 preparing 312, 313 project, setting up 312 resource manager 315 World Wide Web (WWW) 579 World Wide Web Consortium (W3C) 9, 531

    X XML-based DataSet Schema (XSD) 445

    Y Yellow Screen Of Death (YSOD) 959 Yeoman 357, 749 You Aren't Going to Need It (YAGNI) 937

    Z ZeroMQ 929 zones about 677 reference 677 Zopfli about 837 URL 837
