Fundamentals and Methods of Machine and Deep Learning

Algorithms, Tools, and Applications

Edited by Pradeep Singh
Copyright: 2022   |   Status: Published
ISBN: 978111982125  |  Hardcover
466 pages | 148 illustrations
Price: $225 USD

One Line Description
The book takes a practical approach, explaining the concepts of machine learning and deep learning algorithms, evaluating methodological advances, and demonstrating algorithms through applications.

Audience
Researchers and engineers in artificial intelligence, computer scientists, and software developers.

Description
Over the past two decades, machine learning and its subfield deep learning have played a central role in the development of software applications. Recent research also regards them as disruptive technologies that will transform our future lives, business, and the global economy. The explosion of digital data in a wide variety of domains, including science, engineering, the Internet of Things, biomedicine, healthcare, and many business sectors, has ushered in the era of big data, which cannot be analyzed with classical statistics alone and instead calls for more modern, robust machine learning and deep learning techniques. Because machine learning learns from data rather than from hard-coded decision rules, it is being used to build computers that can solve problems the way human experts in a field would.
The goal of this book is to present a practical approach by explaining the concepts of machine learning and deep learning algorithms together with their applications. Supervised machine learning algorithms, ensemble machine learning algorithms, feature selection, deep learning techniques, and their applications are discussed. The eighteen chapters also provide a clear understanding of these concepts through algorithms and case studies, illustrated with applications of machine learning and deep learning in domains such as disease prediction, software defect prediction, online television analysis, and medical image processing. Each chapter, briefly described below, presents both a chosen approach and its implementation.

Author / Editor Details
Pradeep Singh, PhD, is an assistant professor in the Department of Computer Science and Engineering, National Institute of Technology, Raipur, India. His current research interests include machine learning, deep learning, evolutionary computing, empirical studies of software quality, and software fault prediction models. He has more than 15 years of teaching experience and has published widely in reputed international journals, conference proceedings, and book chapters.


Table of Contents
Preface
1. Supervised Machine Learning: Algorithms and Applications

Shruthi H. Shetty, Sumiksha Shetty, Chandra Singh and Ashwath Rao
1.1 History
1.2 Introduction
1.3 Supervised Learning
1.4 Linear Regression (LR)
1.4.1 Learning Model
1.4.2 Predictions With Linear Regression
1.5 Logistic Regression
1.6 Support Vector Machine (SVM)
1.7 Decision Tree
1.8 Machine Learning Applications in Daily Life
1.8.1 Traffic Alerts (Maps)
1.8.2 Social Media (Facebook)
1.8.3 Transportation and Commuting (Uber)
1.8.4 Products Recommendations
1.8.5 Virtual Personal Assistants
1.8.6 Self-Driving Cars
1.8.7 Google Translate
1.8.8 Online Video Streaming (Netflix)
1.8.9 Fraud Detection
1.9 Conclusion
References
2. Zoonotic Diseases Detection Using Ensemble Machine Learning Algorithms
Bhargavi K.
2.1 Introduction
2.2 Bayes Optimal Classifier
2.3 Bootstrap Aggregating (Bagging)
2.4 Bayesian Model Averaging (BMA)
2.5 Bayesian Classifier Combination (BCC)
2.6 Bucket of Models
2.7 Stacking
2.8 Efficiency Analysis
2.9 Conclusion
References
3. Model Evaluation
Ravi Shekhar Tiwari
3.1 Introduction
3.2 Model Evaluation
3.2.1 Assumptions
3.2.2 Residual
3.2.3 Error Sum of Squares (SSE)
3.2.4 Regression Sum of Squares (SSR)
3.2.5 Total Sum of Squares (SSTO)
3.3 Metrics Used in Regression Models
3.3.1 Mean Absolute Error (MAE)
3.3.2 Mean Square Error (MSE)
3.3.3 Root Mean Square Error (RMSE)
3.3.4 Root Mean Square Logarithmic Error (RMSLE)
3.3.5 R-Square (R2)
3.3.5.1 Problem With R-Square (R2)
3.3.6 Adjusted R-Square (R2)
3.3.7 Variance
3.3.8 AIC
3.3.9 BIC
3.3.10 ACP, PRESS, and R2-Predicted
3.3.11 Solved Examples
3.4 Confusion Matrix
3.4.1 How to Interpret the Confusion Matrix?
3.4.2 Accuracy
3.4.2.1 Why Do We Need Other Metrics Along With Accuracy?
3.4.3 True Positive Rate (TPR)
3.4.4 False Negative Rate (FNR)
3.4.5 True Negative Rate (TNR)
3.4.6 False Positive Rate (FPR)
3.4.7 Precision
3.4.8 Recall
3.4.9 Recall-Precision Trade-Off
3.4.10 F1-Score
3.4.11 F-Beta Score
3.4.12 Thresholding
3.4.13 AUC-ROC
3.4.14 AUC-PRC
3.4.15 Derived Metric From Recall, Precision, and F1-Score
3.4.16 Solved Examples
3.5 Correlation
3.5.1 Pearson Correlation
3.5.2 Spearman Correlation
3.5.3 Kendall’s Rank Correlation
3.5.4 Distance Correlation
3.5.5 Biweight Mid-Correlation
3.5.6 Gamma Correlation
3.5.7 Point Biserial Correlation
3.5.8 Biserial Correlation
3.5.9 Partial Correlation
3.6 Natural Language Processing (NLP)
3.6.1 N-Gram
3.6.2 BLEU Score
3.6.2.1 BLEU Score With N-Gram
3.6.3 Cosine Similarity
3.6.4 Jaccard Index
3.6.5 ROUGE
3.6.6 NIST
3.6.7 SQuAD
3.6.8 MACRO
3.7 Additional Metrics
3.7.1 Mean Reciprocal Rank (MRR)
3.7.2 Cohen Kappa
3.7.3 Gini Coefficient
3.7.4 Scale-Dependent Errors
3.7.5 Percentage Errors
3.7.6 Scale-Free Errors
3.8 Summary of Metrics Derived From the Confusion Matrix
3.9 Metric Usage
3.10 Pros and Cons of Metrics
3.11 Conclusion
References
4. Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE
Archith S., Yukta C., Archana H.R. and Surendra H.H.
4.1 Introduction
4.2 Survey of Models
4.2.1 SEIR Model
4.2.2 Modified SEIR Model
4.2.3 Long Short-Term Memory (LSTM)
4.3 Methodology
4.3.1 Modified SEIR
4.3.2 LSTM Model
4.3.2.1 Data Pre-Processing
4.3.2.2 Data Shaping
4.3.2.3 Model Design
4.4 Experimental Results
4.4.1 Modified SEIR Model
4.4.2 LSTM Model
4.5 Conclusion
4.6 Future Work
References
5. The Significance of Feature Selection Techniques in Machine Learning
N. Bharathi, B.S. Rishiikeshwer, T. Aswin Shriram, B. Santhi and G.R. Brindha
5.1 Introduction
5.2 Significance of Pre-Processing
5.3 Machine Learning System
5.3.1 Missing Values
5.3.2 Outliers
5.3.3 Model Selection
5.4 Feature Extraction Methods
5.4.1 Dimension Reduction
5.4.1.1 Attribute Subset Selection
5.4.2 Wavelet Transforms
5.4.3 Principal Components Analysis
5.4.4 Clustering
5.5 Feature Selection
5.5.1 Filter Methods
5.5.2 Wrapper Methods
5.5.3 Embedded Methods
5.6 Merits and Demerits of Feature Selection
5.7 Conclusion
References
6. Use of Machine Learning and Deep Learning in Healthcare—A Review on Disease Prediction System
Radha R. and Gopalakrishnan R.
6.1 Introduction to Healthcare System
6.2 Causes for the Failure of the Healthcare System
6.3 Artificial Intelligence and Healthcare System for Predicting Diseases
6.3.1 Monitoring and Collection of Data
6.3.2 Storing, Retrieval, and Processing of Data
6.4 Facts Responsible for Delay in Predicting the Defects
6.5 Pre-Treatment Analysis and Monitoring
6.6 Post-Treatment Analysis and Monitoring
6.7 Application of ML and DL
6.7.1 ML and DL for Active Aid
6.7.1.1 Bladder Volume Prediction
6.7.1.2 Epileptic Seizure Prediction
6.8 Challenges and Future of Healthcare Systems Based on ML and DL
6.9 Conclusion
References
7. Detection of Diabetic Retinopathy Using Ensemble Learning Techniques
Anirban Dutta, Parul Agarwal, Anushka Mittal, Shishir Khandelwal and Shikha Mehta
7.1 Introduction
7.2 Related Work
7.3 Methodology
7.3.1 Data Pre-Processing
7.3.2 Feature Extraction
7.3.2.1 Exudates
7.3.2.2 Blood Vessels
7.3.2.3 Microaneurysms
7.3.2.4 Hemorrhages
7.3.3 Learning
7.3.3.1 Support Vector Machines
7.3.3.2 K-Nearest Neighbors
7.3.3.3 Random Forest
7.3.3.4 AdaBoost
7.3.3.5 Voting Technique
7.4 Proposed Models
7.4.1 AdaNaive
7.4.2 AdaSVM
7.4.3 AdaForest
7.5 Experimental Results and Analysis
7.5.1 Dataset
7.5.2 Software and Hardware
7.5.3 Results
7.6 Conclusion
References
8. Machine Learning and Deep Learning for Medical Analysis—A Case Study on Heart Disease Data
Swetha A.M., Santhi B. and Brindha G.R.
8.1 Introduction
8.2 Related Works
8.3 Data Pre-Processing
8.3.1 Data Imbalance
8.4 Feature Selection
8.4.1 Extra Tree Classifier
8.4.2 Pearson Correlation
8.4.3 Forward Stepwise Selection
8.4.4 Chi-Square Test
8.5 ML Classifier Techniques
8.5.1 Supervised Machine Learning Models
8.5.1.1 Logistic Regression
8.5.1.2 SVM
8.5.1.3 Naive Bayes
8.5.1.4 Decision Tree
8.5.1.5 K-Nearest Neighbors (KNN)
8.5.2 Ensemble Machine Learning Model
8.5.2.1 Random Forest
8.5.2.2 AdaBoost
8.5.2.3 Bagging
8.5.3 Neural Network Models
8.5.3.1 Artificial Neural Network (ANN)
8.5.3.2 Convolutional Neural Network (CNN)
8.6 Hyperparameter Tuning
8.6.1 Cross-Validation
8.7 Dataset Description
8.7.1 Data Pre-Processing
8.7.2 Feature Selection
8.7.3 Model Selection
8.7.4 Model Evaluation
8.8 Experiments and Results
8.8.1 Study 1: Survival Prediction Using All Clinical Features
8.8.2 Study 2: Survival Prediction Using Age, Ejection Fraction and Serum Creatinine
8.8.3 Study 3: Survival Prediction Using Time, Ejection Fraction, and Serum Creatinine
8.8.4 Comparison Between Study 1, Study 2, and Study 3
8.8.5 Comparative Study on Different Sizes of Data
8.9 Analysis
8.10 Conclusion
References
9. A Novel Convolutional Neural Network Model to Predict Software Defects
Kumar Rajnish, Vandana Bhattacharjee and Mansi Gupta
9.1 Introduction
9.2 Related Works
9.2.1 Software Defect Prediction Based on Deep Learning
9.2.2 Software Defect Prediction Based on Deep Features
9.2.3 Deep Learning in Software Engineering
9.3 Theoretical Background
9.3.1 Software Defect Prediction
9.3.2 Convolutional Neural Network
9.4 Experimental Setup
9.4.1 Data Set Description
9.4.2 Building Novel Convolutional Neural Network (NCNN) Model
9.4.3 Evaluation Parameters
9.4.4 Results and Analysis
9.5 Conclusion and Future Scope
References
10. Predictive Analysis on Online Television Videos Using Machine Learning Algorithms
Rebecca Jeyavadhanam B., Ramalingam V.V., Sugumaran V. and Rajkumar D.
10.1 Introduction
10.1.1 Overview of Video Analytics
10.1.2 Machine Learning Algorithms
10.1.2.1 Decision Tree C4.5
10.1.2.2 J48 Graft
10.1.2.3 Logistic Model Tree
10.1.2.4 Best First Tree
10.1.2.5 Reduced Error Pruning Tree
10.1.2.6 Random Forest
10.2 Proposed Framework
10.2.1 Data Collection
10.2.2 Feature Extraction
10.2.2.1 Block Intensity Comparison Code
10.2.2.2 Key Frame Rate
10.3 Features Selection
10.4 Classification
10.5 Online Incremental Learning
10.6 Results and Discussion
10.7 Conclusion
References
11. A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification
Nandini Kumari, Shamama Anwar and Vandana Bhattacharjee
11.1 Introduction
11.2 Literature Review
11.3 Methodology
11.3.1 Dataset Acquisition
11.3.2 Pre-Processing and Spectrogram Generation
11.3.3 Classification of EEG Spectrogram Images With Proposed CNN Model
11.3.4 Classification of EEG Spectrogram Images With Proposed Combinational CNN+LSTM Model
11.4 Result and Discussion
11.5 Conclusion
References
12. Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis
Shiksha
12.1 Introduction
12.2 Methods and Techniques
12.2.1 Research Approach
12.2.2 Dataset Description
12.2.3 Data Preparation
12.2.4 Correlation Between Features
12.2.5 Splitting the Dataset
12.2.6 Balancing Data
12.2.6.1 Oversampling of Minority Class
12.2.6.2 Under-Sampling of Majority Class
12.2.6.3 Synthetic Minority Over-Sampling Technique
12.2.6.4 Class Weight
12.2.7 Machine Learning Algorithms (Models)
12.2.7.1 Logistic Regression
12.2.7.2 Support Vector Machine
12.2.7.3 Decision Tree
12.2.7.4 Random Forest
12.2.8 Tuning of Hyperparameters
12.2.9 Performance Evaluation of the Models
12.3 Results and Discussion
12.3.1 Results Using Balancing Techniques
12.3.2 Result Summary
12.4 Conclusions
12.4.1 Future Recommendations
References
13. Crack Detection in Civil Structures Using Deep Learning
Bijimalla Shiva Vamshi Krishna, Rishiikeshwer B.S., J. Sanjay Raju, N. Bharathi, C. Venkatasubramanian and G.R. Brindha
13.1 Introduction
13.2 Related Work
13.3 Infrared Thermal Imaging Detection Method
13.4 Crack Detection Using CNN
13.4.1 Model Creation
13.4.2 Activation Functions (AF)
13.4.3 Optimizers
13.4.4 Transfer Learning
13.5 Results and Discussion
13.6 Conclusion
References
14. Measuring Urban Sprawl Using Machine Learning
Keerti Kulkarni and P. A. Vijaya
14.1 Introduction
14.2 Literature Survey
14.3 Remotely Sensed Images
14.4 Feature Selection
14.4.1 Distance-Based Metric
14.5 Classification Using Machine Learning Algorithms
14.5.1 Parametric vs. Non-Parametric Algorithms
14.5.2 Maximum Likelihood Classifier
14.5.3 k-Nearest Neighbor Classifiers
14.5.4 Evaluation of the Classifiers
14.5.4.1 Precision
14.5.4.2 Recall
14.5.4.3 Accuracy
14.5.4.4 F1-Score
14.6 Results
14.7 Discussion and Conclusion
Acknowledgements
References
15. Application of Deep Learning Algorithms in Medical Image Processing: A Survey
Santhi B., Swetha A.M. and Ashutosh A.M.
15.1 Introduction
15.2 Overview of Deep Learning Algorithms
15.2.1 Supervised Deep Neural Networks
15.2.1.1 Convolutional Neural Network
15.2.1.2 Transfer Learning
15.2.1.3 Recurrent Neural Network
15.2.2 Unsupervised Learning
15.2.2.1 Autoencoders
15.2.2.2 GANs
15.3 Overview of Medical Images
15.3.1 MRI Scans
15.3.2 CT Scans
15.3.3 X-Ray Scans
15.3.4 PET Scans
15.4 Scheme of Medical Image Processing
15.4.1 Formation of Image
15.4.2 Image Enhancement
15.4.3 Image Analysis
15.4.4 Image Visualization
15.5 Anatomy-Wise Medical Image Processing With Deep Learning
15.5.1 Brain Tumor
15.5.2 Lung Nodule Cancer Detection
15.5.3 Breast Cancer Segmentation and Detection
15.5.4 Heart Disease Prediction
15.5.5 COVID-19 Prediction
15.6 Conclusion
References
16. Simulation of Self-Driving Cars Using Deep Learning
Rahul M. K., Praveen L. Uppunda, Vinayaka Raju S., Sumukh B. and C. Gururaj
16.1 Introduction
16.2 Methodology
16.2.1 Behavioral Cloning
16.2.2 End-to-End Learning
16.3 Hardware Platform
16.4 Related Work
16.5 Pre-Processing
16.5.1 Lane Feature Extraction
16.5.1.1 Canny Edge Detector
16.5.1.2 Hough Transform
16.5.1.3 Raw Image Without Pre-Processing
16.6 Model
16.6.1 CNN Architecture
16.6.2 Multilayer Perceptron Model
16.6.3 Regression vs. Classification
16.6.3.1 Regression
16.6.3.2 Classification
16.7 Experiments
16.8 Results
16.9 Conclusion
References
17. Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions
Shahira K. C., Sruthi C. J. and Lijiya A.
17.1 Introduction
17.2 Visual Impairment
17.2.1 Conventional Assistive Technology for the VIP
17.2.1.1 Way Finding
17.2.1.2 Reading Assistance
17.2.2 The Significance of Computer Vision and Deep Learning in AT of VIP
17.2.2.1 Navigational Aids
17.2.2.2 Scene Understanding
17.2.2.3 Reading Assistance
17.2.2.4 Wearables
17.3 Verbal and Hearing Impairment
17.3.1 Assistive Listening Devices
17.3.2 Alerting Devices
17.3.3 Augmentative and Alternative Communication Devices
17.3.3.1 Sign Language Recognition
17.3.4 Significance of Machine Learning and Deep Learning in Assistive Communication Technology
17.4 Conclusion and Future Scope
References
18. Case Studies: Deep Learning in Remote Sensing
Emily Jenifer A. and Sudha N.
18.1 Introduction
18.2 Need for Deep Learning in Remote Sensing
18.3 Deep Neural Networks for Interpreting Earth Observation Data
18.3.1 Convolutional Neural Network
18.3.2 Autoencoder
18.3.3 Restricted Boltzmann Machine and Deep Belief Network
18.3.4 Generative Adversarial Network
18.3.5 Recurrent Neural Network
18.4 Hybrid Architectures for Multi-Sensor Data Processing
18.5 Conclusion
References
Index
