The main goal of the book is to provide a comprehensive and accessible guide that empowers readers to understand, apply, and leverage machine learning algorithms and techniques effectively in real-world scenarios.
Table of Contents
Preface
1. Overview of Machine Learning
1.1 Introduction
1.2 Types of Machine Learning
1.3 Supervised Learning: Dog and Human Example
1.4 Unsupervised Learning
1.5 Reinforcement Learning
1.6 Applications of Machine Learning
1.6.1 Image Recognition
1.6.2 Speech Recognition
1.6.3 Traffic Prediction
1.6.4 Product Recommendations
1.6.5 Self-Driving Cars
1.6.6 Email Spam and Malware Filtering
1.6.7 Virtual Personal Assistant
1.6.8 Online Fraud Detection
1.6.9 Stock Market Trading
1.6.10 Medical Diagnosis
1.6.11 Automatic Language Translation
1.6.12 Social Media Features
1.6.13 Sentiment Analysis
1.6.14 Automating Employee Access Control
1.6.15 Marine Wildlife Preservation
1.6.16 Predicting Potential Heart Failure
1.6.17 Managing Healthcare Efficiency and Medical Services
1.6.18 Transportation and Commuting (Uber)
1.6.19 Dynamic Pricing
1.6.19.1 How Does Uber Determine the Price of Your Ride?
1.6.20 Online Video Streaming (Netflix)
1.7 Challenges in Machine Learning
1.8 Limitations of Machine Learning
1.9 Projects in Machine Learning
References
2. Machine Learning Building Blocks
2.1 Data Collection
2.1.1 Importing the Data from CSV Files
2.2 Data Preparation
2.2.1 Data Exploration
2.2.2 Data Pre-Processing
2.3 Data Wrangling
2.4 Data Analysis
2.5 Model Selection
2.6 Model Building
2.7 Model Evaluation
2.7.1 Classification Metrics
2.7.1.1 Accuracy
2.7.1.2 Precision
2.7.1.3 Recall
2.7.2 Regression Metrics
2.7.2.1 Mean Squared Error
2.7.2.2 Root Mean Squared Error
2.7.2.3 Mean Absolute Error
2.8 Deployment
2.8.1 Machine Learning Projects
2.8.2 Spam Detection Using Machine Learning
2.8.3 Spam Detection for YouTube Comments Using Naïve Bayes Classifier
2.8.4 Fake News Detection
2.8.5 House Price Prediction
2.8.6 Gold Price Prediction
Bibliography
3. Multilayer Perceptron (in Neural Networks)
3.1 Multilayer Perceptron for Digit Classification
3.1.1 Implementation of MLP using TensorFlow for Classifying Image Data
3.2 Training Multilayer Perceptron
3.3 Backpropagation
References
4. Kernel Machines
4.1 Different Kernels and Their Applications
4.2 Some Other Kernel Functions
4.2.1 Gaussian Radial Basis Function (RBF)
4.2.2 Laplace RBF Kernel
4.2.3 Hyperbolic Tangent Kernel
4.2.4 Bessel Function of the First-Kind Kernel
4.2.5 ANOVA Radial Basis Kernel
4.2.6 Linear Splines Kernel in One Dimension
4.2.7 Exponential Kernel
4.2.8 Kernels in Support Vector Machine
References
5. Linear and Rule-Based Models
5.1 Least Squares Methods
5.2 The Perceptron
5.2.1 Bias
5.2.2 Perceptron Weighted Sum
5.2.3 Activation Function
5.2.3.1 Types of Activation Functions
5.2.4 Perceptron Training
5.2.5 Online Learning
5.2.6 Perceptron Training Error
5.3 Support Vector Machines
5.4 Linearity with Kernel Methods
References
6. Distance-Based Models
6.1 Introduction
6.1.1 Distance-Based Clustering
6.2 K-Means Algorithm
6.2.1 K-Means Algorithm Working Process
6.3 Elbow Method
6.4 K-Median
6.4.1 Algorithm
6.5 K-Medoids, PAM (Partitioning Around Medoids)
6.5.1 Advantages
6.5.2 Drawbacks
6.5.3 Algorithm
6.6 CLARA (Clustering Large Applications)
6.6.1 Advantages
6.6.2 Disadvantages
6.7 CLARANS (Clustering Large Applications Based on Randomized Search)
6.7.1 Advantages
6.7.2 Disadvantages
6.7.3 Algorithm
6.8 Hierarchical Clustering
6.9 Agglomerative Nesting Hierarchical Clustering (AGNES)
6.10 DIANA
References
7. Model Ensembles
7.1 Bagging
7.1.1 Advantages
7.1.2 Disadvantages
7.1.3 How Bagging Works
7.1.4 Algorithm
7.2 Boosting
7.2.1 Types of Boosting
7.2.2 Advantages
7.2.3 Disadvantages
7.2.4 Algorithm
7.3 Stacking
7.3.1 Architecture of Stacking
7.3.2 Stacking Ensemble Family
References
8. Binary and Beyond Binary Classification
8.1 Binary Classification
8.2 Logistic Regression
8.3 Support Vector Machine
8.4 Estimating Class Probabilities
8.5 Confusion Matrix
8.6 Beyond Binary Classification
8.7 Multi-Class Classification
8.8 Multi-Label Classification
Reference
9. Model Selection
9.1 Model Selection Considerations
9.1.1 What Do We Care About When Choosing the Final Model?
9.2 Model Selection Strategies
9.3 Types of Model Selection
9.3.1 Resampling Methods
9.3.2 Random Split
9.3.3 Time-Based Split
9.3.4 K-Fold Cross-Validation
9.3.5 Stratified K-Fold
9.3.6 Bootstrap
9.3.7 Probabilistic Measures
9.3.8 Akaike Information Criterion (AIC)
9.3.9 Bayesian Information Criterion (BIC)
9.3.10 Minimum Description Length (MDL)
9.3.11 Structural Risk Minimization (SRM)
9.3.12 Overfitting
9.4 The Principle of Parsimony
9.5 Examples of Model Selection Criteria
9.6 Other Popular Properties
9.7 Key Considerations
9.8 Model Validation
9.8.1 Why is Model Validation Important?
9.8.2 How to Validate the Model
9.8.3 What is a Model Validation Test?
9.8.4 Benefits of Modeling Validation
9.8.5 Model Validation Pitfalls
9.8.6 Data Verification
9.8.7 Model Performance and Validation
9.9 Self-Driving Cars
9.10 K-Fold Cross-Validation
9.11 No One-Size-Fits-All Model Validation
9.12 Validation Strategies
9.13 K-Fold Cross-Validation
9.14 Hold-Out Validation
9.15 Comparison of Validation Strategy
References
10. Support Vector Machines
10.1 History
10.2 Model
10.3 Types of Support Vector Machines
10.3.1 Linear SVM
10.3.2 Non-Linear SVM
10.3.3 Advantages of Support Vector Machines
10.3.4 Disadvantages of Support Vector Machines
10.3.5 Applications
10.4 Hyperplane and Support Vectors in the SVM Algorithm
10.4.1 Hyperplane
10.5 Support Vectors
10.6 SVM Kernel
10.7 How Does It Work?
10.7.1 Identify the Right Hyperplane (Scenario 1)
10.7.2 Identify the Right Hyperplane (Scenario 2)
10.7.3 Identify the Right Hyperplane (Scenario 3)
10.7.4 Can We Classify Two Classes? (Scenario 4)
10.7.5 Find the Hyperplane to Segregate the Classes (Scenario 5)
10.8 SVM for Classification
10.9 SVM for Regression
10.10 Python Implementation of Support Vector Machine
10.10.1 Data Pre-Processing Step
10.10.2 Fitting the SVM Classifier to the Training Set
10.10.2.1 Output
10.10.3 Predicting the Test Set Result
10.10.3.1 Output
10.10.4 Creating the Confusion Matrix
10.10.5 Visualizing the Training Set Result
10.10.5.1 Output
10.10.6 Visualizing the Test Set Result
10.10.6.1 Output
10.10.7 Kernel
10.10.8 Support Vector Machine (SVM) Code in Python
10.10.9 Complexity of SVM
References
11. Clustering
11.1 Example
11.2 Types of Clustering
11.2.1 Hard Clustering
11.2.2 Soft Clustering
11.2.2.1 Partitioning Clustering
11.2.2.2 Density-Based Clustering
11.2.2.3 Distribution Model–Based Clustering
11.2.2.4 Hierarchical Clustering
11.2.2.5 Fuzzy Clustering
11.3 What are the Uses of Clustering?
11.4 Examples
11.5 Uses of Clustering
11.5.1 In Identification of Cancerous Cells
11.5.2 In Search Engines Like Google
11.5.3 Customer Segmentation
11.5.4 In Biology
11.5.5 In Land Use
11.6 Clustering Algorithms
11.6.1 K-Means Clustering
11.6.2 Mean-Shift Clustering
11.6.3 Density-Based Spatial Clustering of Applications with Noise (DBSCAN)
11.6.4 Expectation-Maximization Clustering Using Gaussian Mixture Models
11.6.5 Agglomerative Hierarchical Clustering
11.7 Examples of Clustering Algorithms
11.7.1 Library Setup
11.7.2 Clustering Dataset
11.7.3 Affinity Propagation
11.7.4 Agglomerative Clustering
11.7.5 BIRCH
11.7.6 DBSCAN
11.7.7 K-Means
11.7.8 Mini-Batch K-Means
11.7.9 Mean Shift
11.7.10 OPTICS
11.7.11 Spectral Clustering
11.7.12 Gaussian Mixture Model
11.8 Python Implementation of K-Means
11.8.1 Loading the Data
11.8.2 Plotting the Data
11.8.3 Selecting the Features
11.8.4 Clustering
11.8.5 Clustering Results
11.8.6 WCSS and the Elbow Method
11.8.7 Applications of K-Means Clustering
11.8.8 Advantages of K-Means
11.8.9 Disadvantages of K-Means
References
12. Reinforcement Learning
12.1 Example
12.2 Terms Used in Reinforcement Learning
12.3 Key Elements of Reinforcement Learning
12.4 Examples of Reinforcement Learning
12.5 Advantages of Reinforcement Learning
12.6 Challenges with Reinforcement Learning
12.7 Types of Reinforcement
12.7.1 Positive
12.7.2 Negative
12.8 What are the Practical Applications of Reinforcement Learning?
12.9 How is Reinforcement Learning Different from Supervised and Unsupervised Learning?
12.9.1 Supervised Learning
12.9.2 Unsupervised Learning
12.9.3 Semisupervised Learning
12.9.4 Reinforcement Learning
12.10 Approaches to Implementing Reinforcement Learning
12.10.1 Value-Based
12.10.2 Policy-Based
12.10.3 Model-Based
12.11 Components of Reinforcement Learning
12.11.1 Policy
12.11.2 Reward Signal
12.11.3 Value Function
12.11.4 Model
12.12 Typical Reinforcement Learning Algorithms
12.12.1 State-Action-Reward-State-Action (SARSA)
12.12.2 Q-Learning
12.12.3 Deep Q-Networks
12.13 Reinforcement Learning Algorithm: Python Implementation Using Q-Learning
References
13. Recommender Systems
13.1 Recommender Systems
13.1.1 It Has the Following Advantages
13.1.2 What Makes a Good Recommendation?
13.1.2.1 K-Fold Cross-Validation
13.1.2.2 MAE (Mean Absolute Error)
13.1.2.3 RMSD (Root Mean Square Deviation)
13.1.3 What Can Be Recommended?
13.1.4 Why Do We Really Need Recommender Systems?
13.2 Use Cases of Recommendation Systems
13.3 Phases of the Recommendation Process
13.3.1 Information Collection Phase
13.3.1.1 Explicit Feedback
13.3.1.2 Implicit Feedback
13.3.1.3 Hybrid Feedback
13.3.2 Learning Phase
13.3.3 Prediction/Recommendation Phase
13.4 How Does a Recommender System Work?
13.5 Types of Recommendation Systems
13.5.1 Content-Based Filtering
13.5.2 Collaborative Filtering
13.5.2.1 User-Based Collaborative Filtering
13.5.2.2 Item-Based Collaborative Filtering
13.6 Hybrid Recommendation System
13.7 What Techniques are Used to Build Recommender Systems?
13.7.1 Fully Connected Neural Networks
13.7.2 Item2vec
13.8 Evaluation Metrics
13.9 Python Implementation of Recommendation Systems
13.9.1 Implementation of Movie Recommender System
13.10 Building Recommender Systems with Marvsel
References
14. Advancements in Deep Learning
14.1 What is Deep Learning?
14.2 The Difficulties of Deep Learning
14.2.1 Learning Without Supervision
14.2.2 Handling Data Outside the Training Distribution
14.2.3 Incorporating Trustworthiness
14.2.4 The Need for Less Data and Better Performance
14.3 Improvements in Deep Learning
14.3.1 GrowNet
14.3.2 TabNet
14.3.3 EfficientNet
14.3.4 The Lottery Ticket Hypothesis
14.3.5 The Top-Performing Model with Zero Training
14.4 Attention and Transformers
14.4.1 Why is This So Relevant?
14.5 Generative Adversarial Networks (GANs)
14.6 Auto-Encoders
14.7 Python Code for Deep Learning
14.7.1 Steps to Develop Age and Gender Detection Project
14.7.1.1 Uploading the Data
14.7.1.2 Import the Necessary Libraries for Loading and Viewing Data
14.7.1.3 Read the Data
14.7.1.4 Import the Essential Modules for Model Building
14.7.1.5 Use the Relevant Data and Map It
14.7.1.6 Gender Model
14.7.1.7 Train the Model
14.7.1.8 Create Training and Testing Split for Age Data
14.7.1.9 Age Model
14.7.1.10 Inference on the Trained Model
14.7.1.11 Age and Gender Detection Project Output
References
15. Advanced Deep Learning Using Julia Programming
15.1 Convolutional Neural Network
15.2 Deep Neural Network Implementation
15.3 Recurrent Neural Network Implementation
15.4 Long Short-Term Memory Implementation
15.5 Generative Adversarial Network (GAN) Implementation
15.6 Gated Recurrent Unit (GRU) Implementation
15.7 Radial Basis Function Network (RBFN) Implementation
15.8 Multilayer Perceptron (MLP) Implementation
15.9 Self-Organizing Map (SOM) Implementation
15.10 Deep Belief Network (DBN) Implementation
15.11 Restricted Boltzmann Machine (RBM) Implementation
15.12 Autoencoder Implementation
16. Machine Learning for Industrial Applications
16.1 Python Code for Generating Music Using Neural Networks
16.2 Image Caption Generator
16.3 Image Classification Using Julia
16.4 BERT Text Classifier on Tensor Processing Unit
16.5 Python Program for Reinforcement Learning Agent for Atari 2600
16.6 Python Program for Multi-Lingual ASR with Transformers
16.7 Python Program for Reinforcement Learning for Connect X
16.8 Stock Market Analysis and Forecasting Using Deep Learning
16.9 ASL Recognition with Deep Learning
16.10 Build Rick Sanchez Bot Using Transformers
Index