Deep Learning with Python: Learn Best Practices of Deep Learning Models with PyTorch [2 ed.] 1484253639, 9781484253632

Master the practical aspects of implementing deep learning solutions with PyTorch, using a hands-on approach to understanding …

English · 323 pages · 2021 · 7 MB

Table of contents:
Table of Contents
About the Authors
About the Technical Reviewers
Acknowledgments
Introduction
Chapter 1: Introduction to Machine Learning and Deep Learning
Defining Deep Learning
A Brief History
Rule-Based Systems
Knowledge-Based Systems
Machine Learning
Deep Learning
Advances in Related Fields
Prerequisites
The Approach Ahead
Installing the Required Libraries
The Concept of Machine Learning
Binary Classification
Regression
Generalization
Regularization
Summary
Chapter 2: Introduction to PyTorch
Why Do We Need a Deep Learning Framework?
What Is PyTorch?
Why PyTorch?
It All Starts with a Tensor
Creating Tensors
Tensor Munging Operations
Mathematical Operations
Element-Wise Mathematical Operations
Trigonometric Operations in Tensors
Comparison Operations for Tensors
Linear Algebraic Operations
Summary
Chapter 3: Feed-Forward Neural Networks
What Is a Neural Network?
Unit
The Overall Structure of a Neural Network
Expressing a Neural Network in Vector Form
Evaluating the Output of a Neural Network
Training a Neural Network
Deriving Cost Functions Using Maximum Likelihood
Binary Cross-Entropy
Cross-Entropy
Squared Error
Summary of Loss Functions
Types of Activation Functions
Linear Unit
Sigmoid Activation
Softmax Activation
Rectified Linear Unit
Hyperbolic Tangent
Backpropagation
Gradient Descent Variants
Batch Gradient Descent
Stochastic Gradient Descent
Mini-Batch Gradient Descent
Gradient-Based Optimization Techniques
Gradient Descent with Momentum
RMSprop
Adam
Practical Implementation with PyTorch
Summary
Chapter 4: Automatic Differentiation in Deep Learning
Numerical Differentiation
Symbolic Differentiation
Automatic Differentiation Fundamentals
Implementing Automatic Differentiation
What Is Autograd?
Summary
Chapter 5: Training Deep Learning Models
Performance Metrics
Classification Metrics
Regression Metrics
Mean Squared Error
Mean Absolute Error
Mean Absolute Percentage Error
Data Procurement
Splitting Data for Training, Validation, and Testing
Establishing the Achievable Limit on the Error Rate
Establishing the Baseline with Standard Choices
Building an Automated, End-to-End Pipeline
Orchestration for Visibility
Analysis of Overfitting and Underfitting
Hyperparameter Tuning
Model Capacity
Regularizing the Model
Early Stopping
Norm Penalties
Dropout
A Practical Implementation in PyTorch
Interpreting the Business Outcomes for Deep Learning
Summary
Chapter 6: Convolutional Neural Networks
Convolution Operation
Pooling Operation
Convolution-Detector-Pooling Building Block
Stride
Padding
Batch Normalization
Filter
Filter Depth
Number of Filters
Summarizing Key Learnings from CNNs
Implementing a Basic CNN Using PyTorch
Implementing a Larger CNN in PyTorch
CNN Thumb Rules
Summary
Chapter 7: Recurrent Neural Networks
Introduction to RNNs
Training RNNs
Bidirectional RNNs
Vanishing and Exploding Gradients
Gradient Clipping
Long Short-Term Memory
Practical Implementation
Summary
Chapter 8: Recent Advances in Deep Learning
Going Beyond Classification in Computer Vision
Object Detection
Image Segmentation
Pose Estimation
Generative Computer Vision
Natural Language Processing with Deep Learning
Transformer Models
Bidirectional Encoder Representations from Transformers
GrokNet
Additional Noteworthy Research
Concluding Thoughts
Index
