Course Information: IMDAI (CSET225)
Detailed Syllabus
Module 1: Why Intelligent Models? Types of Datasets - 1D Data (Tabular Data, CSV Files, DAT Files, MAT Files, Time Series Data, Spectral Data), 2D Data (RGB and grayscale images), 3D Data (e.g., MRI scans, Hyperspectral Imaging Data), Introduction to Artificial Neural Networks (ANN), Optimizers, SGD, Forward and backward pass, Architecture of ANN, Activation Functions in ANN, Overfitting and Underfitting, Introduction to Convolutional Neural Networks (CNN), Need for CNN (Limitations of ANN), CNN Architecture and Components, Convolutional Layers, Pooling Layers, Fully Connected Layers, Activation Functions in CNN, Training CNN: Techniques and Optimization, Feature Extraction in CNN, Understanding Filters and Kernels in CNN, Regularization, Applications of CNN in Image Processing and Beyond. Introduction to U-Net: Extending CNN for Image Segmentation, U-Net Architecture: Encoder-Decoder Structure, Skip Connections and Their Role in U-Net, Training U-Net for Biomedical Image Segmentation. AI in IoT Devices, Different Applications of AI, Biases in AI Models.
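Illustrative sketch for Module 1: the forward pass, backward pass, and SGD update of a small ANN. This is a minimal sketch in pure NumPy, assuming a single hidden layer, sigmoid activations, a squared-error loss, and a toy XOR dataset; it is meant only to make the listed topics concrete, not to represent the course's lab implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR (4 samples, 2 features)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters of a 2-4-1 network
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5  # learning rate for the SGD step
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Backward pass: gradients of the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # SGD update (full-batch gradient step here, for simplicity)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 3))  # outputs should approach [0, 1, 1, 0]
```

The same forward/backward pattern underlies the convolutional and U-Net models in this module; only the layer types change.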
Module 2: Introduction to Generative Adversarial Networks (GANs), Architecture of GAN: Generator and Discriminator, Training GANs: Challenges and Techniques, Applications of GANs in Image Synthesis and Enhancement, Attacks on AI Models - Overview of Security Vulnerabilities in AI Models, Types of Attacks: Poisoning, Evasion, and Extraction, Defense Mechanisms for AI Models, Adversarial Attacks on AI Models, Crafting Adversarial Examples: Techniques and Tools, Adversarial Training and Defense Strategies, Case Studies of Adversarial Attacks in Real-world Applications, Dilated CNN, Architecture and Working of Dilated Convolutions, Applications of Dilated CNN in Image Segmentation and Object Detection, Vision Transformers (ViT), Transformer Architecture for Image Recognition, Training Vision Transformers: Techniques and Challenges, Kolmogorov Complexity in AI, Introduction to Kolmogorov Complexity, Applications of Kolmogorov Complexity in Model Evaluation.
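Illustrative sketch for Module 2: crafting an adversarial example with the Fast Gradient Sign Method (FGSM), one common technique under the "Crafting Adversarial Examples: Techniques and Tools" topic. The sketch assumes PyTorch and uses a stand-in linear classifier for demonstration; a real attack would target a trained CNN or Vision Transformer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x (FGSM, untargeted)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, then clamp
    # back to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Example usage with a dummy model and input (placeholders, not course data):
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # dummy "image"
label = torch.tensor([3])         # dummy ground-truth class
x_adv = fgsm_attack(model, x, label)
print((x_adv - x).abs().max())    # perturbation is bounded by epsilon
```

Adversarial training, also listed above, reuses this attack inside the training loop: perturbed inputs are generated on the fly and added to each batch.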
Module 3: Introduction to Text Data in AI, Recurrent Neural Networks (RNN), Architecture and Working of RNN, Challenges with RNN: Vanishing and Exploding Gradients, Applications of RNN in Sequential Data Analysis, Long Short-Term Memory (LSTM), LSTM Architecture: Gates and Memory Cells, Training LSTM Networks, Applications of LSTM, Gated Recurrent Unit (GRU), GRU vs. LSTM: A Comparative Analysis, Applications of GRU in Sequence Modeling, Large Language Models (LLMs) and Generative AI, Transformers and the Evolution of LLMs, Generative AI: Understanding LLMs in Text Generation, Training LLMs: Challenges and Strategies, Applications of LLMs in Chatbots, Text Summarization, and Translation.
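Illustrative sketch for Module 3: a side-by-side forward pass through an LSTM and a GRU, in the spirit of the "GRU vs. LSTM: A Comparative Analysis" topic. The sketch assumes PyTorch and random toy sequences; it highlights the LSTM's extra cell state and its larger parameter count (four gates versus three).

```python
import torch
import torch.nn as nn

batch, seq_len, input_size, hidden_size = 2, 5, 8, 16

lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
gru = nn.GRU(input_size, hidden_size, batch_first=True)

x = torch.randn(batch, seq_len, input_size)   # toy sequential input

out_lstm, (h_n, c_n) = lstm(x)   # LSTM returns hidden state AND cell state
out_gru, h_gru = gru(x)          # GRU returns only a hidden state

print(out_lstm.shape, out_gru.shape)          # both: (2, 5, 16)
n_params = lambda m: sum(p.numel() for p in m.parameters())
print(n_params(lstm), n_params(gru))          # LSTM ~4/3 the GRU's parameters
```

Stacking such recurrent layers with an embedding layer and a linear output head gives the basic sequence models used before attention-based LLMs.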
Module 4: Introduction to Time Series Analysis, Time Series Plots and Visualization Techniques, Time Series Forecasting: Methods and Approaches, Time Series Decomposition: Trend, Seasonality, and Residuals, Using RNN and LSTM for Time Series Prediction, Introduction to Diffusion Models. Explainable AI: Techniques and Applications.
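Illustrative sketch for Module 4: classical additive time-series decomposition into trend, seasonality, and residuals. This is a minimal sketch in pure NumPy, assuming a synthetic series and a known period of 12; library routines such as statsmodels' seasonal_decompose follow the same idea.

```python
import numpy as np

rng = np.random.default_rng(1)
period = 12
t = np.arange(120)
# Synthetic monthly-style series: linear trend + seasonal cycle + noise
series = 0.5 * t + 10 * np.sin(2 * np.pi * t / period) + rng.normal(0, 1, t.size)

# Trend: centered moving average over one full period
kernel = np.ones(period) / period
trend = np.convolve(series, kernel, mode="same")

# Seasonality: average the detrended values at each position within the period
detrended = series - trend
seasonal = np.array([detrended[i::period].mean() for i in range(period)])
seasonal_full = np.tile(seasonal, t.size // period)

# Residuals: whatever the trend and seasonal components do not explain
residual = series - trend - seasonal_full

print(trend[:3], seasonal[:3], residual[:3])
```

Forecasting with RNNs or LSTMs, also listed above, typically starts from such a series by slicing it into fixed-length windows of past values paired with the next value to predict.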