This project investigates the mathematical foundations of neural network optimization, focusing on gradient descent convergence, Bayesian neural networks, and stochastic gradient methods.
It bridges theoretical optimization principles with practical training strategies for deep learning models.
Applications include uncertainty-aware learning and efficient model training for real-world tasks such as fraud detection.
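As a minimal sketch of the kind of stochastic gradient method studied in this project (plain NumPy on a least-squares objective; the data, learning rate, and batch size are illustrative assumptions, not code from the project itself):

```python
import numpy as np

# Mini-batch SGD on the least-squares objective 0.5 * ||Xw - y||^2 / n.
# All sizes and hyperparameters below are hypothetical, chosen for illustration.
rng = np.random.default_rng(0)
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)                    # initial iterate
lr, batch, epochs = 0.1, 32, 50    # assumed step size, batch size, passes

for _ in range(epochs):
    idx = rng.permutation(n)
    for start in range(0, n, batch):
        b = idx[start:start + batch]
        grad = X[b].T @ (X[b] @ w - y[b]) / len(b)  # mini-batch gradient
        w -= lr * grad                              # SGD update

print(np.linalg.norm(w - w_true))  # small once the iterates have converged
```

Full-batch gradient descent corresponds to taking the batch to be the entire dataset; the convergence questions in the project concern how such iterates approach a minimizer.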
Project Supervisor: Dr. Indrajit Jana
Link: M.Sc. Project
This project explores the Erdős–Kac Theorem, a landmark result in probabilistic number theory, which states that the number of distinct prime factors of a natural number n, after centering at log log n and scaling by √(log log n), approximately follows a standard normal distribution.
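Stated precisely, with ω(n) denoting the number of distinct prime factors of n, the theorem says that for every real t,

```latex
\lim_{x \to \infty} \frac{1}{x}\,\#\left\{\, n \le x :
  \frac{\omega(n) - \log\log n}{\sqrt{\log\log n}} \le t \,\right\}
  = \Phi(t)
  = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-u^{2}/2}\, du .
```

In other words, ω(n) is typically of size log log n, with fluctuations of order √(log log n).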
It presents an accessible proof based on the work of Granville and Soundararajan, supported by essential theorems and background concepts.
The study highlights both the theoretical elegance of the result and its practical significance in areas like cryptography.
Project Supervisor: Dr. Kartick Adhikari
This project on Linear Regression builds a rigorous base in the simple linear model, least-squares estimation, and the core properties of the resulting estimators.
It then develops inference and diagnostics—confidence/prediction intervals, residual analysis, tests for normality and constant variance, and lack-of-fit F-tests—plus remedies like Box-Cox transformations and LOWESS smoothing.
Hands-on sections use real datasets (height–weight, Toluca production, PCA-based credit-card features) to compute estimates, intervals, and predictions and to visualize fits, linking theory to practice.
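The following sketch mirrors one such computation in NumPy/SciPy (simulated data stands in for the project's datasets; all numbers and variable names are illustrative assumptions):

```python
import numpy as np
from scipy import stats

# Simple linear regression y = b0 + b1*x + error, fitted by least squares.
# Simulated data stands in for the real datasets used in the project.
rng = np.random.default_rng(1)
x = rng.uniform(20, 120, size=25)               # hypothetical predictor (e.g. lot size)
y = 62 + 3.5 * x + rng.normal(0, 45, size=25)   # hypothetical response

n = len(x)
sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / sxx   # least-squares slope
b0 = y.mean() - b1 * x.mean()                        # least-squares intercept

resid = y - (b0 + b1 * x)
mse = np.sum(resid ** 2) / (n - 2)                   # error-variance estimate
t = stats.t.ppf(0.975, df=n - 2)

# 95% confidence interval for the slope.
se_b1 = np.sqrt(mse / sxx)
ci_b1 = (b1 - t * se_b1, b1 + t * se_b1)

# 95% prediction interval for a new observation at x0.
x0 = 80.0
se_pred = np.sqrt(mse * (1 + 1 / n + (x0 - x.mean()) ** 2 / sxx))
pi = (b0 + b1 * x0 - t * se_pred, b0 + b1 * x0 + t * se_pred)

print(f"slope = {b1:.3f}, 95% CI = {ci_b1}")
print(f"95% prediction interval at x0 = {x0}: {pi}")
```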
Project Supervisor: Dr. Indrajit Jana
"Mathematics is the language in which God has written the universe." – Galileo Galilei