AI Projects
Evaluating the Impact of JPEG-2000 Compression and Reduced-Resolution Decoding on Next-Day Ocean Temperature Forecasting Using U-Net Models
Developed a U-Net–based deep learning model to reconstruct high-resolution sea-surface temperature (SST) fields from highly compressed JPEG-2000 (JP2) wavelet data, achieving compression ratios of up to 544×.
Systematically evaluated the effect of progressive JP2 quality layers and reduced-resolution decoding (rLevel down-sampling) on next-day SST forecasting performance.
Demonstrated that model accuracy remains stable across compression levels, achieving RMSE ≈ 0.22–0.23 °C, PSNR ≈ 40–42 dB, and SSIM ≈ 0.97, even under aggressive compression.
Achieved 172× effective data reduction (1445 GB → 8.65 GB) with minimal degradation in predictive accuracy, enabling scalable storage and transmission of large climate datasets.
Conducted comparative experiments across multiple compression and decoding configurations to identify optimal trade-offs between data fidelity, storage efficiency, and model performance.
Designed a robust preprocessing pipeline for large-scale geospatial ocean data, including normalization, tiling, and temporal alignment for next-day forecasting.
Validated the feasibility of compressed-domain learning for operational ocean forecasting, reducing I/O bottlenecks and computational overhead in HPC and cloud environments.
Applied MLOps best practices to ensure reproducible experimentation, model versioning, and performance monitoring across multiple training runs.
Provided empirical evidence supporting the use of JPEG-2000 as a climate-aware compression standard for deep-learning-based Earth system modeling workflows.
Tools: Python (3.10.10), TensorFlow (2.16.1), Scikit-Learn (1.5.0), Pandas (2.2.2), NumPy (2.0), Matplotlib (3.9.0)
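The reconstruction quality above is reported as RMSE and PSNR (plus SSIM, which in practice comes from a library such as scikit-image). A minimal, stdlib-only sketch of those two metrics, illustrative rather than the project's actual evaluation code:

```python
import math

def rmse(pred, truth):
    """Root-mean-square error between two equal-length SST fields (flattened)."""
    n = len(pred)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / n)

def psnr(pred, truth, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the max-min span of the field."""
    err = rmse(pred, truth)
    return float("inf") if err == 0 else 20 * math.log10(data_range / err)
```

For example, a prediction off by a constant 0.5 °C over a 1 °C-range field gives a PSNR of about 6 dB; the ≈40 dB values above correspond to much smaller per-pixel errors.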
Wildfire Prediction Using Deep Learning on Spatio-Temporal Environmental Data
Designed and implemented a UNet++-based deep learning model for wildfire prediction, leveraging spatio-temporal environmental and climate variables to detect and forecast fire occurrence.
Addressed severe class imbalance and sparsity of wildfire events through customized loss functions, including Binary Cross-Entropy + IoU and masked loss strategies, improving robustness on rare-event prediction.
Achieved Mean Absolute Percentage Error (MAPE) of 1.128%, demonstrating high predictive accuracy despite noisy labels and imbalanced datasets.
Developed a full data preprocessing and feature-engineering pipeline for geospatial and temporal inputs, ensuring model stability across heterogeneous environmental conditions.
Evaluated model performance using task-appropriate metrics beyond accuracy, emphasizing generalization, spatial consistency, and real-world applicability.
Applied MLOps methodologies to enable reproducible training, scalable experimentation, and reliable model monitoring.
Tools: Python (3.10.10), TensorFlow (2.16.1), Scikit-Learn (1.5.0), Pandas (2.2.2), NumPy (2.0), Matplotlib (3.9.0)
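The customized loss above combines binary cross-entropy with a soft-IoU term under a validity mask. A minimal pure-Python sketch of that idea (the real implementation was in TensorFlow; `alpha` and the flat-list inputs are illustrative assumptions):

```python
import math

def masked_bce_iou_loss(pred, target, mask, alpha=0.5, eps=1e-7):
    """Masked BCE + soft-IoU loss for rare-event (fire) pixels.
    pred: fire probabilities in (0, 1); target: 0/1 labels;
    mask: 1 where a pixel is valid, 0 where it is excluded.
    alpha weights BCE against the IoU term. Inputs are equal-length flat lists."""
    bce_sum, n_valid = 0.0, 0
    inter, union = 0.0, 0.0
    for p, t, m in zip(pred, target, mask):
        if m == 0:
            continue                      # masked-out pixels contribute nothing
        p = min(max(p, eps), 1 - eps)     # clip to avoid log(0)
        bce_sum += -(t * math.log(p) + (1 - t) * math.log(1 - p))
        inter += p * t                    # soft intersection
        union += p + t - p * t            # soft union
        n_valid += 1
    bce = bce_sum / max(n_valid, 1)
    soft_iou = inter / (union + eps)
    return alpha * bce + (1 - alpha) * (1 - soft_iou)
```

The IoU term counters class imbalance because it is driven only by positive-class overlap, while the mask keeps invalid regions from diluting the gradient.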
Readability Assessment for Text Simplification
Developed supervised machine-learning and neural NLP models to assess sentence-level readability as a prerequisite for automatic text simplification.
Achieved 87.91% accuracy in binary classification of simple vs. complex sentences in English, demonstrating strong generalization across diverse syntactic structures.
Performed binary sentence classification in Italian, reaching an accuracy of 82.21%, validating the model’s cross-linguistic robustness.
Designed and evaluated feature-rich pipelines combining lexical, syntactic, and statistical readability indicators (e.g., sentence length, lexical diversity, syntactic depth).
Integrated pretrained language representations to enhance semantic sensitivity in readability prediction.
Conducted comparative analysis across languages, highlighting structural and morphological differences impacting readability assessment.
Implemented end-to-end training and evaluation pipelines, ensuring reproducibility and consistent performance tracking.
Applied DevOps and CI/CD practices to automate model training, testing, and evaluation workflows.
Employed MLOps methodologies to support scalable deployment, monitoring, and lifecycle management of NLP models.
Positioned readability assessment as a decision module for downstream text simplification systems, improving controllability and output quality.
Tools: Python (3.10.10), TensorFlow (2.16.1), Scikit-Learn (1.5.0), SpaCy (3.4), NLTK (3.0), Transformers (4.41.2), Pandas (2.2.2), NumPy (2.0), Matplotlib (3.9.0)
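A minimal sketch of the kind of feature extractor behind the feature-rich pipelines above, covering sentence length, mean word length, and type-token ratio as a lexical-diversity proxy (function name and the regex tokenizer are illustrative; the project used SpaCy/NLTK tokenization):

```python
import re

def readability_features(sentence):
    """Toy sentence-level readability indicators: length, mean word length,
    and type-token ratio (lexical diversity)."""
    tokens = re.findall(r"[A-Za-z']+", sentence.lower())
    n = len(tokens)
    return {
        "n_tokens": n,
        "mean_word_len": sum(len(t) for t in tokens) / n if n else 0.0,
        "type_token_ratio": len(set(tokens)) / n if n else 0.0,
    }
```

Feature vectors like this, concatenated with syntactic-depth measures and pretrained embeddings, feed a binary simple-vs-complex classifier.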
Fraud Detection and Credit Risk Modeling Using Machine Learning in Financial Systems
Led the end-to-end development of fraud detection models, reducing false positives by 30% while improving overall detection accuracy by 25%, directly enhancing operational efficiency.
Designed and deployed predictive credit scoring models to support loan approval decisions, reducing decision latency by 20% and improving risk stratification.
Engineered feature pipelines incorporating transactional behavior, historical credit data, and temporal patterns to improve model robustness and sensitivity.
Addressed class imbalance in fraud datasets through sampling strategies and cost-sensitive learning, improving recall on minority fraud cases.
Implemented model explainability techniques (e.g., feature importance, local explanations) to ensure transparency and support regulatory audits.
Collaborated closely with compliance, legal, and risk management teams to align model behavior with banking regulations and internal governance standards.
Established model monitoring frameworks to track performance drift, data distribution changes, and fairness metrics in production.
Applied MLOps best practices for model versioning, automated retraining, CI/CD deployment, and lifecycle management across fraud detection and credit scoring systems.
Optimized data pipelines using SQL and Spark, enabling scalable processing of large-scale financial and transactional datasets.
Deployed and maintained models in cloud-based environments (AWS), ensuring reliability, scalability, and secure access control.
Tools: Python (3.9), TensorFlow (2.x), Scikit-Learn (0.24), SQL, Spark, AWS
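One of the cost-sensitive strategies mentioned above is inverse-frequency class weighting. A minimal sketch of the "balanced" heuristic (the same formula scikit-learn applies for `class_weight='balanced'`; the standalone function is illustrative):

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency class weights: w_c = n_samples / (n_classes * n_c),
    so the rare fraud class receives a proportionally larger loss weight."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

With 99 legitimate transactions and 1 fraud, the fraud class gets weight 50 and the majority class about 0.505, pushing the classifier toward higher minority-class recall.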
Big Data Analysis on Direct Marketing Campaigns of a Banking Institution
Conducted large-scale data analysis and predictive modeling on direct marketing campaign data from a banking institution.
Processed and analyzed a training dataset of approximately 800,000 records, leveraging distributed computing frameworks for scalability.
Built and evaluated multiple classification models, including Logistic Regression, Decision Tree, and Random Forest, to predict customer response to marketing campaigns.
Achieved the highest precision of 0.65 using Random Forest, optimizing the identification of high-probability customers.
Obtained the highest recall of 0.54 with Decision Tree, highlighting its effectiveness in capturing positive responses.
Performed comparative model analysis to assess trade-offs between precision and recall, supporting data-driven model selection.
Implemented feature preprocessing and transformation pipelines to handle heterogeneous customer and campaign attributes.
Leveraged Apache Spark and Hadoop to manage data ingestion, transformation, and distributed model training efficiently.
Evaluated model performance using confusion matrices and classification metrics, ensuring robust assessment on imbalanced datasets.
Demonstrated how big data analytics can improve targeting strategies and decision-making in large-scale banking marketing operations.
Tools: Python, Apache Spark, Hadoop
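The precision-recall trade-off analysis above reduces to counting true/false positives per model. A minimal sketch of the two metrics, illustrative rather than the Spark pipeline's code:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Precision and recall for the positive (responder) class from paired label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Comparing these two numbers per model is what separated Random Forest (best precision, 0.65) from Decision Tree (best recall, 0.54) above.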
End-to-End Bank Loan Modeling and Predictive Analytics Using Supervised Learning
Built a supervised machine learning pipeline to model and predict bank loan approval status or customer credit risk using historical banking data.
Performed comprehensive exploratory data analysis (EDA) to uncover trends, correlations, and key predictors influencing loan outcomes.
Engineered and transformed features (e.g., income, credit history, account usage, demographic attributes) to enhance model discriminative capability.
Trained and compared multiple classification models, including Decision Trees, Random Forests, and Logistic Regression, to assess effectiveness on loan prediction tasks.
Evaluated performance using relevant metrics such as precision, recall, F1-score, ROC-AUC, and confusion matrices to balance model sensitivity and specificity.
Optimized model performance through cross-validation, hyperparameter tuning, and regularization techniques to prevent overfitting.
Demonstrated how predictive analytics can inform banking decision support systems by identifying likely loan approvals and creditworthy customers.
Designed code to be modular and reproducible, enabling future improvement, extension to larger datasets, or integration into front-end applications.
Highlighted model interpretability and potential business value by demonstrating how insights from the models could support credit policy and portfolio management decisions.
Tools: Python, Scikit-Learn, Pandas, NumPy, Matplotlib/Seaborn
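The cross-validation step above can be sketched as a plain k-fold index split (the project used scikit-learn's built-in splitters; this standalone generator just shows the mechanics, with `seed` as an illustrative parameter):

```python
import random

def kfold_indices(n_samples, k=5, seed=42):
    """Shuffle sample indices and split them into k roughly equal folds; each fold
    serves once as the validation set while the remaining folds form the training set."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Averaging a metric such as ROC-AUC over the k validation folds gives the generalization estimate used to compare hyperparameter settings.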
Hands-On Development and Evaluation of Large Language Models Using Modern NLP Frameworks
Conducted hands-on experimentation with Large Language Models (LLMs) to understand transformer-based architectures, attention mechanisms, and generative language modeling.
Implemented and evaluated prompt-based NLP workflows, analyzing the impact of prompt design on model reasoning, coherence, and task performance.
Explored fine-tuning and adaptation strategies for LLMs to improve task-specific performance on downstream NLP applications.
Integrated pretrained transformer models using modern NLP frameworks to perform text generation, summarization, and question-answering tasks.
Analyzed model behavior, limitations, and failure modes, including hallucination, sensitivity to input phrasing, and bias-related issues.
Built reproducible experimentation pipelines, enabling systematic comparison across model configurations and prompting strategies.
Applied tokenization, text preprocessing, and post-processing techniques to ensure stable and high-quality LLM outputs.
Investigated computational and efficiency considerations, such as inference latency and resource usage, relevant for real-world deployment.
Documented experiments and findings in a well-structured GitHub repository, emphasizing clarity, reproducibility, and practical insights.
Positioned LLMs within broader ML and MLOps workflows, considering versioning, evaluation, and responsible usage.
Tools: Markdown, Python notebooks, structured documentation
Skin Disease Image Classification for Accurate Categorization
Developed an end-to-end machine learning pipeline for classifying images of multiple skin diseases using computer vision techniques.
Curated and preprocessed a dataset of approximately 2,000 dermatological images, including resizing, normalization, and noise reduction.
Implemented and compared deep learning (CNN) and classical ML (SVM) approaches to evaluate trade-offs between precision and recall.
Achieved a maximum precision of 0.81 using a Convolutional Neural Network, indicating strong performance in minimizing false positives.
Obtained the highest recall of 0.66 with a Support Vector Machine, highlighting its effectiveness in identifying positive cases.
Conducted model performance analysis across multiple metrics, including precision, recall, and confusion matrices, to ensure balanced evaluation.
Applied image preprocessing techniques with OpenCV to enhance feature extraction and improve model stability.
Addressed class imbalance and limited dataset size through careful model selection and evaluation strategies.
Designed the pipeline to be modular and reproducible, enabling future extension to larger datasets or additional disease classes.
Demonstrated the applicability of machine learning for medical image classification, emphasizing accuracy, reliability, and interpretability.
Tools: Python, TensorFlow / Keras, OpenCV
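The resizing-and-normalization step above can be sketched without OpenCV as a nearest-neighbour resize plus [0, 1] scaling on a grayscale image (the project used cv2; this pure-Python version is illustrative only):

```python
def preprocess_image(pixels, out_h=64, out_w=64):
    """Nearest-neighbour resize plus 0-255 -> [0, 1] normalization for a grayscale
    image given as a list of rows of ints (stands in for cv2.resize + scaling)."""
    in_h, in_w = len(pixels), len(pixels[0])
    return [
        [pixels[r * in_h // out_h][c * in_w // out_w] / 255.0 for c in range(out_w)]
        for r in range(out_h)
    ]
```

Normalizing intensities to [0, 1] keeps CNN inputs in a consistent range across images captured under different lighting conditions.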
Deep Learning–Based Image Segmentation for Pixel-Level Visual Understanding
Built and evaluated image segmentation models to perform pixel-level classification, enabling detailed partitioning of image regions based on learned visual features.
Implemented deep-learning architectures commonly used for segmentation (e.g., U-Net / encoder–decoder architectures) to improve spatial accuracy and boundary detection.
Designed a data preprocessing pipeline including image resizing, normalization, and augmentation to enhance the diversity and robustness of training samples.
Trained and validated models on annotated image datasets, optimizing for pixel accuracy, IoU (Intersection over Union), and other segmentation metrics.
Integrated techniques such as skip connections and multi-scale feature fusion to retain both global context and fine details in segmentation outputs.
Visualized segmentation results using overlay masks and color-coded pixel predictions to qualitatively assess model performance and edge delineation.
Explored loss functions suitable for dense prediction tasks (e.g., Dice loss, pixel-wise cross entropy) to improve convergence and training stability.
Developed a modular notebook structure enabling reproducibility, experiment tracking, and easy extension to new datasets or architectures.
Documented results and insights in the GitHub repository to support peer review, replication, and future enhancements.
Tools: Python, TensorFlow / Keras, OpenCV, NumPy, Matplotlib
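The dense-prediction losses and metrics above can be sketched in a few lines over flattened masks (illustrative stdlib versions; the training code used framework tensors):

```python
def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss over flattened masks: 1 - 2|A.B| / (|A| + |B|).
    pred holds probabilities, target holds 0/1 ground-truth labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 1.0 - (2.0 * inter + eps) / (total + eps)

def iou(pred, target, threshold=0.5):
    """Hard IoU (Intersection over Union) after thresholding predicted probabilities."""
    binary = [1 if p >= threshold else 0 for p in pred]
    inter = sum(b & t for b, t in zip(binary, target))
    union = sum(b | t for b, t in zip(binary, target))
    return inter / union if union else 1.0
```

Dice loss is differentiable on soft probabilities and so works as a training objective, while hard IoU is the evaluation metric reported on validated masks.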
Systematic Framework for Approaching (Almost) Any Machine Learning Problem
Created a comprehensive guide and summary synthesizing a systematic framework for approaching diverse machine learning problems with a practical mindset.
Emphasized the importance of problem understanding, data exploration, and pipeline design before model training, guiding readers through foundational decisions that influence every stage of an ML workflow.
Covered key components of ML project work, including cross-validation strategies, evaluation metrics, and model selection, illustrating when and why to use each method depending on the task.
Highlighted structured data preprocessing approaches, especially handling categorical variables, feature engineering, and feature selection to improve model robustness.
Explained the role of hyperparameter optimization and how to effectively tune models for better generalization and performance.
Demonstrated how to meaningfully approach different problem types such as image classification, text classification, and regression, moving beyond theory into practical solution strategies.
Included best practices for model reproducibility, code organization, and model serving, bridging the gap between experimentation and real-world deployment.
Structured content to serve both learners and practitioners of ML, especially those aiming to transition from academic understanding to applied project execution.
Documented insights and actionable patterns in an accessible repository format to support peer learning, experimentation, and reusable guidance for future projects.
Tools: Markdown, Python notebooks, structured documentation
Android Application Projects
Android-Based Application Secure Payment and Virtual Credit Card Application Development
Leveraged more than 25 features of the Android framework.
Spearheaded the development of specifications for new product features, including fingerprint authentication.
Analyzed over 20 banking applications to better understand user needs.
Engineered an easy‐to‐use Payment module which connects to major PG (Payment Gateway) Banks in Iran.
Resolved application bugs within a 24-hour service-level agreement.
Designed 3 apps: Payment module for Taxi, Payment module for Chain supermarkets, Payment module for virtual credit card.
Tools: Kotlin (1.5.0), Java (SE 17), JDK (17.0.2), XML (2.1.2)
Android Application Development for Sports News Media Platforms
Directed a highly effective team of four developers in the creation of a mobile app for an athletics news service.
Integrated RESTful APIs and over 8 third‐party libraries to enhance application functionality.
Built one application for sports news media.
Tools: Java (SE 15), JDK (16.0.2), XML (1.9.1)
IoT-Based Android Application Development for Device and Asset Tracking
Guided a proficient team of developers in the development of an IoT mobile application.
Created 2 apps: Device Tracker and Mobile Tracker.
Deployed the same app for 10-inch and 7-inch tablets.
Tools: Java (SE 15), JDK (15.0.2), XML (1.9.1)
Android Application Development and Technical Support for Commuting Monitoring Systems
Conducted rigorous testing using troubleshooting methods, devised innovative solutions, and contributed more than 100 documented resolutions to the knowledge base for use by the support team.
Created 2 educational service applications for drivers and parents to monitor their children’s commuting.
Tools: Java (SE 15), JDK (15.0.2), XML (1.9.1)
Android Application Development for National Supermarket Chain and Retail Services
Collaborated with merchant business units and tourism service teams to design mobile application systems aligned with client and customer requirements.
Worked closely with over 20 business analysts, software developers, and infrastructure specialists to deliver high-availability, mission-critical mobile applications.
Designed and delivered two production Android applications:
Online shopping application for a national supermarket chain.
Tourism application supporting services for the Milad Tower in Tehran.
Translated complex business and operational requirements into scalable mobile computing solutions.
Ensured application reliability and availability suitable for large-scale retail and consumer usage.
Contributed to cross-functional coordination between business, technical, and infrastructure teams throughout the development lifecycle.
Tools: Java (SE 15), JDK (15.0.2), XML (1.9.1)
Android Location Review and Social Discovery Application
Developed a location-based mobile application enabling 200,000+ users to discover and evaluate the best places for leisure and social activities.
Designed and implemented a review and rating system, allowing users to compare 300+ comments per location for informed decision-making.
Built location-specific social networks, enabling users to interact, share experiences, and exchange feedback tied to places they visited.
Focused on user experience and engagement, optimizing navigation and content presentation for high user adoption.
Ensured application scalability and performance to support large active user bases.
Delivered a production-ready mobile solution supporting community-driven place discovery.
Tools: Java, Android SDK, XML