Deep Learning and Its Applications Using Python
Edited by Niha Kamal Basha, Surbhi Bhatia Khan, Abhishek Kumar, and Arwa Mashat
Copyright: 2023 | Status: Published | ISBN: 9781394166466 | Hardcover | 252 pages | Price: $195 USD
One Line Description
This practical book gives a detailed description of deep learning models and their implementation using Python programming for computer vision, natural language processing, and other applications.
Audience
The book is ideal for computer science researchers and industry professionals, as well as postgraduate and undergraduate students, who want to learn how to program deep learning models using Python.
Description
This book thoroughly explains deep learning models and how to use Python programming to implement them in applications such as NLP, face detection, face recognition, face analysis, and virtual assistance (chatbots, machine translation, etc.). It provides hands-on guidance in using Python to implement deep learning application models, and it identifies future research directions for deep learning. Readers will discover:
• A precise description of the history, fundamental concepts, and background of deep learning;
• A detailed introduction to several concepts, including TensorFlow and Keras, from the fundamentals through application-based implementation using Python;
• Explanations of the multilayer perceptron, convolutional neural network, recurrent neural network, and long short-term memory in the context of applications such as chatbots, face detection, and face recognition;
• Advanced deep learning concepts along with directions for future research;
• Intuitive explanations and practical examples that build the reader's understanding by exploring challenging concepts in computer vision, natural language processing, and related applications.
Author / Editor Details
Niha Kamal Basha is an assistant professor in the Department of Information Security, School of Computer Science and Engineering, Vellore Institute of Science and Technology, India. She has received a number of awards and published numerous research articles in peer-reviewed journals.
Surbhi Bhatia Khan, PhD, is an assistant professor in the Department of Information Systems, College of Computer Sciences and Information Technology, King Faisal University, Saudi Arabia. She has more than 10 years of teaching experience at universities in India and Saudi Arabia. She has published many articles in peer-reviewed journals, authored or edited 9 books, and has been granted 8 national and international patents.
Abhishek Kumar gained his PhD in computer science from the University of Madras, India, in 2019. He is an assistant director/associate professor in the Computer Science & Engineering Department, Chandigarh University, Punjab, India. He has more than 100 publications in peer-reviewed international and national journals, books, and conferences. His research interests include artificial intelligence, image processing, computer vision, data mining, and machine learning.
Arwa Mashat gained her PhD in Instructional Design and Technology from Old Dominion University, Virginia, USA, in 2017. She has 14 years of teaching and academic experience and is currently an assistant professor at the College of Computing and Information Technology, King Abdulaziz University, Saudi Arabia, where she also serves as Vice Dean of two colleges: the College of Computing and Information Technology and the Applied College. She has published many research papers in reputed journals.
Table of Contents
Preface
1. Introduction to Deep Learning
1.1 History of Deep Learning
1.2 A Probabilistic Theory of Deep Learning
1.3 Back Propagation and Regularization
1.4 Batch Normalization and VC Dimension
1.5 Neural Nets—Deep and Shallow Networks
1.6 Supervised and Semi-Supervised Learning
1.7 Deep Learning and Reinforcement Learning
References
2. Basics of TensorFlow
2.1 Tensors
2.2 Computational Graph and Session
2.3 Constants, Placeholders, and Variables
2.4 Creating Tensor
2.5 Working on Matrices
2.6 Activation Functions
2.7 Loss Functions
2.8 Common Loss Function
2.9 Optimizers
2.10 Metrics
References
3. Understanding and Working with Keras
3.1 Major Steps to Deep Learning Models
3.2 Load Data
3.3 Pre-Process Data
3.4 Define the Model
3.5 Compile the Model
3.6 Fit and Evaluate the Model
3.7 Prediction
3.8 Save and Reload the Model
3.9 Additional Steps to Improve Keras Models
3.10 Keras with TensorFlow
References
4. Multilayer Perceptron
4.1 Artificial Neural Network
4.2 Single-Layer Perceptron
4.3 Multilayer Perceptron
4.4 Logistic Regression Model
4.5 Regression to MLP in TensorFlow
4.6 TensorFlow Steps to Build Models
4.7 Linear Regression in TensorFlow
4.8 Logistic Regression Model in TensorFlow
4.9 Multilayer Perceptron in TensorFlow
4.10 Regression to MLP in Keras
4.11 Log-Linear Model
4.12 Keras Neural Network for Linear Regression
4.13 Keras Neural Network for Logistic Regression
4.14 MLPs on the Iris Data
4.15 MLPs on MNIST Data (Digit Classification)
4.16 MLPs on Randomly Generated Data
References
5. Convolutional Neural Networks in TensorFlow
5.1 CNN Architectures
5.2 Properties of CNN Representations
5.3 Convolution Layers, Pooling Layers, Strides, Padding, and Fully Connected Layers
5.4 Why TensorFlow for CNN Models?
5.5 TensorFlow Code for Building an Image Classifier for MNIST Data
5.6 Using a High-Level API for Building CNN Models
5.7 CNN in Keras
5.8 Building an Image Classifier for MNIST Data in Keras
5.9 Building an Image Classifier with CIFAR-10 Data
5.10 Define the Model Architecture
5.11 Pre-Trained Models
References
6. RNN and LSTM
6.1 Concept of RNN
6.2 Concept of LSTM
6.3 Modes of LSTM
6.4 Sequence Prediction
6.5 Time-Series Forecasting with the LSTM Model
6.6 Speech to Text
6.7 Examples Using Each API
6.8 Text-to-Speech Conversion
6.9 Cognitive Service Providers
6.10 The Future of Speech Analytics
References
7. Developing Chatbots, Face Detection and Recognition
7.1 Why Chatbots?
7.2 Designs and Functions of Chatbots
7.3 Steps for Building a Chatbot
7.4 Best Practices of Chatbot Development
7.5 Face Detection
7.6 Face Recognition
7.7 Face Analysis
7.8 OpenCV: Face Detection, Face Recognition, and Face Analysis
7.8.1 Face Detection
7.8.2 Face Recognition
7.9 Deep Learning–Based Face Recognition
7.10 Transfer Learning
7.11 APIs
References
8. Advanced Deep Learning
8.1 Deep Convolutional Neural Networks (AlexNet)
8.2 Networks Using Blocks (VGG)
8.3 Network in Network (NiN)
8.4 Networks with Parallel Concatenations (GoogLeNet)
8.5 Residual Networks (ResNet)
8.6 Densely Connected Networks (DenseNet)
8.7 Gated Recurrent Units (GRU)
8.8 Long Short-Term Memory (LSTM)
8.9 Deep Recurrent Neural Networks (D-RNN)
8.10 Bidirectional Recurrent Neural Networks (Bi-RNN)
8.11 Machine Translation and the Dataset
8.12 Sequence to Sequence Learning
References
9. Enhanced Convolutional Neural Network
9.1 Introduction
9.2 Deep Learning-Based Architecture for Absence Seizure Detection
9.3 EEG Signal Pre-Processing Strategy and Channel Selection
9.4 Input Formulation and Augmentation of EEG Signal for Deep Learning Model
9.5 Deep Learning Based Feature Extraction and Classification
9.6 Performance Analysis
9.7 Summary
References
10. Conclusion
10.1 Introduction
10.2 Future Research Direction and Prospects
10.3 Research Challenges in Deep Learning
10.4 Practical Deep Learning Case Studies
10.4.1 Medicine: Epilepsy Seizure Onset Prediction
10.4.2 Using Data from Test Drills to Predict Where to Drill for Oil
10.5 Summary
References
Index