This mini-project demonstrates a lightweight image classification pipeline in PyTorch using torchvision.datasets.FakeData to simulate a 10-class dataset (3×32×32 images). It focuses on understanding the vision training workflow end-to-end: dataset creation with transforms (ToTensor), mini-batch loading via DataLoader, and building a minimal CNN architecture (single Conv2D → ReLU → MaxPool → Flatten → Linear classifier). The model is trained on CPU/GPU using a standard training loop with CrossEntropyLoss + Adam, showing how logits-based multi-class classification is handled in PyTorch. Training progress is tracked through an epoch-wise loss curve, making it a quick, controlled experiment to validate CNN plumbing and debug the full image-training pipeline without relying on external datasets.
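A minimal sketch of that pipeline; the dataset size, batch size, channel width, and epoch count are illustrative assumptions, since the notebook's exact settings are not stated above:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# 10-class fake dataset of 3x32x32 images, converted to tensors
dataset = datasets.FakeData(
    size=1000, image_size=(3, 32, 32), num_classes=10,
    transform=transforms.ToTensor(),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Single Conv2D -> ReLU -> MaxPool -> Flatten -> Linear classifier
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3x32x32 -> 16x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                             # 16x32x32 -> 16x16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),                 # raw logits for 10 classes
).to(device)

criterion = nn.CrossEntropyLoss()  # expects logits + integer class labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    epoch_loss = 0.0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    print(f"epoch {epoch}: loss {epoch_loss / len(loader):.4f}")
```

Because CrossEntropyLoss applies log-softmax internally, the model emits raw logits and needs no softmax layer.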
This mini-project builds an end-to-end binary classification pipeline in PyTorch using a synthetic tabular dataset generated with make_classification. It covers data splitting with train_test_split, wrapping NumPy arrays into TensorDataset, efficient batching through DataLoader, and training a simple feedforward neural network with two hidden layers, ReLU activations, and dropout for regularization. The model is trained on GPU/CPU using BCELoss with an Adam optimizer, while torchinfo.summary is used to inspect layer-wise shapes and parameter counts for clarity and debugging. Training and validation losses are tracked across epochs and visualized with a clean loss curve, making it a practical experiment for understanding tabular MLP training dynamics, overfitting behavior, and basic evaluation workflow in PyTorch.
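A minimal sketch of that workflow; the feature count, hidden-layer widths, dropout rate, and epoch count are illustrative assumptions, and the torchinfo call is shown commented out as an optional inspection step:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

device = "cuda" if torch.cuda.is_available() else "cpu"

# Synthetic binary-classification table: 1000 rows, 20 features
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

def to_loader(features, labels, shuffle):
    # Wrap NumPy arrays into a TensorDataset; BCELoss wants float targets
    ds = TensorDataset(
        torch.tensor(features, dtype=torch.float32),
        torch.tensor(labels, dtype=torch.float32).unsqueeze(1),
    )
    return DataLoader(ds, batch_size=32, shuffle=shuffle)

train_loader = to_loader(X_train, y_train, shuffle=True)
val_loader = to_loader(X_val, y_val, shuffle=False)

# Two hidden layers with ReLU and dropout; sigmoid output for BCELoss
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(32, 1), nn.Sigmoid(),
).to(device)

# from torchinfo import summary
# summary(model, input_size=(32, 20))  # layer shapes + parameter counts

criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    model.train()
    train_loss = 0.0
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    model.eval()
    with torch.no_grad():
        val_loss = sum(
            criterion(model(xb.to(device)), yb.to(device)).item()
            for xb, yb in val_loader
        )
    print(f"epoch {epoch}: train {train_loss / len(train_loader):.4f} "
          f"val {val_loss / len(val_loader):.4f}")
```

An equivalent formulation drops the final Sigmoid and uses BCEWithLogitsLoss instead, which is numerically more stable.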
This mini-project trains a simple CNN on the Kaggle Intel Image Classification dataset using torchvision.datasets.ImageFolder, demonstrating a practical image pipeline with directory-based labels, resizing to 64×64, tensor conversion, and ImageNet-style normalization for stable training. It builds a lightweight two-block Conv→ReLU→MaxPool feature extractor followed by flattening, dropout regularization, and a linear classifier over 6 scene categories (buildings, forest, glacier, mountain, sea, street). The notebook uses torchinfo.summary to verify layer shapes and parameter counts, trains the model on GPU/CPU with CrossEntropyLoss and Adam, and tracks both training and test loss across epochs with a clear loss-curve visualization, making it a clean hands-on example of end-to-end multiclass image classification in PyTorch with real data loading and evaluation loops.
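A minimal sketch of that pipeline; the dataset paths (data/seg_train, data/seg_test) reflect the usual Kaggle directory layout but are assumptions, as are the channel widths, dropout rate, and epoch count:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Resize to 64x64, convert to tensor, ImageNet-style normalization
transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Directory-based labels: one subfolder per scene category
# (paths below are assumed; adjust to your local dataset copy)
train_ds = datasets.ImageFolder("data/seg_train", transform=transform)
test_ds = datasets.ImageFolder("data/seg_test", transform=transform)
train_loader = DataLoader(train_ds, batch_size=64, shuffle=True)
test_loader = DataLoader(test_ds, batch_size=64, shuffle=False)

# Two Conv -> ReLU -> MaxPool blocks, then dropout + linear head
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Dropout(0.3),
    nn.Linear(32 * 16 * 16, 6),  # 6 scene categories
).to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    model.train()
    train_loss = 0.0
    for xb, yb in train_loader:
        xb, yb = xb.to(device), yb.to(device)
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
    model.eval()
    with torch.no_grad():
        test_loss = sum(
            criterion(model(xb.to(device)), yb.to(device)).item()
            for xb, yb in test_loader
        )
    print(f"epoch {epoch}: train {train_loss / len(train_loader):.4f} "
          f"test {test_loss / len(test_loader):.4f}")
```

ImageFolder derives class indices from the alphabetical order of the subfolder names, so the six scene categories map to labels 0–5 automatically.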