I am a Full Stack Software Engineer and GenAI Specialist with five years of experience building enterprise-grade applications, distributed systems, and intelligent AI-powered solutions. My career combines expertise in backend engineering, frontend development, cloud infrastructure, and applied machine learning, with a strong emphasis on deploying systems that are scalable, reliable, and human-centered.
Georgia Institute of Technology
Master of Science in Computer Science, Specialization in Machine Learning
Aug 2023 - May 2025
At Georgia Tech, I focused on applying data science and machine learning techniques to real-world problems in healthcare, resource optimization, and accessibility. Key projects included:
Predictive Analytics for Echocardiography: Applied deep learning to echocardiogram videos from the RVEF dataset, building models that improved the accuracy of cardiac function predictions and assisted clinical decision support.
Data Visualization at Scale: Designed interactive dashboards in Power BI and Tableau to analyze large datasets, providing actionable insights on system performance and resource allocation.
Applied ML Projects: Developed practical systems such as an indoor navigation assistant for visually impaired users and a multilingual healthcare platform, leveraging NLP and speech-to-text to enhance accessibility.
These projects deepened my expertise in predictive modeling, large-scale data visualization, and applied ML for social impact.
CMR Institute of Technology – Bangalore, India
Bachelor of Engineering in Information Science
Aug 2016 – Aug 2020
During my undergraduate studies, I gained a strong foundation in computer science while exploring emerging fields in machine learning and big data. Notable projects included:
Threat Analysis on Social Media: Leveraged BERT (Bidirectional Encoder Representations from Transformers) for NLP-based classification of tweets, identifying potential threats by analyzing sentiment and contextual patterns across large-scale datasets.
Applied Machine Learning Projects: Built smaller projects involving regression, classification, and data preprocessing pipelines, working with large datasets and big data tools to strengthen my understanding of ML workflows.
This period gave me hands-on exposure to NLP, text classification, and data engineering fundamentals, while shaping my interest in solving real-world challenges through data-driven approaches.
LangChain + MCP HR Assistant
The challenge at Allstate was to reduce the volume of repetitive HR queries that were slowing down support teams. I was tasked with designing an intelligent assistant capable of handling high concurrency with low latency. I architected and deployed a LangChain- and MCP-powered HR assistant on AWS ECS, using FastAPI and OpenSearch for semantic search, and incorporated authentication and scaling with API Gateway and Cognito. The result was a production system that achieved 65% ticket deflection, maintained 99.9% uptime, and supported more than 300 concurrent users with consistent sub-second response times.
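The core request flow of such an assistant can be sketched in a few lines. This is a minimal, self-contained illustration only: the document titles, scoring, and answer template below are hypothetical stand-ins, since the production system used OpenSearch for retrieval and an LLM behind LangChain/MCP.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str

# Tiny in-memory "index" standing in for OpenSearch (titles and content are made up).
DOCS = [
    Doc("PTO policy", "Employees accrue 1.5 days of paid time off per month."),
    Doc("Remote work", "Hybrid employees work on-site at least two days per week."),
]

def retrieve(query: str, docs: list[Doc]) -> Doc:
    """Keyword-overlap scoring as a cheap stand-in for semantic search."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.text.lower().split())))

def answer(query: str) -> str:
    """Compose a grounded response; in production this prompt would go to the LLM."""
    doc = retrieve(query, DOCS)
    return f"Based on '{doc.title}': {doc.text}"

print(answer("how much paid time off do I accrue"))
```

Grounding the response in a retrieved document, rather than letting the model answer from parametric memory alone, is what makes the deflected tickets trustworthy.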
RAG-based Semantic Search Engines
Teams across Allstate struggled to find relevant code and dataset references due to fragmented documentation. My responsibility was to improve discoverability across these resources. I designed and implemented RAG-based semantic search and recommendation engines using FAISS and hybrid retrieval methods, working on semantic chunking and query optimization to ensure high accuracy. The outcome was a 40% improvement in discovery accuracy, which significantly reduced developer time spent on manual searches.
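Hybrid retrieval blends a lexical score with a dense-vector score so that exact keyword matches and semantic similarity both contribute. A minimal sketch of that scoring idea follows; the toy hashing "embedding" and Jaccard overlap stand in for a real encoder plus FAISS and BM25, and the example documents are invented.

```python
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy bag-of-words hashing embedding (stand-in for a real encoder)."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def lexical_score(query: str, doc: str) -> float:
    """Jaccard overlap as a cheap lexical signal (BM25 in a real system)."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0

def hybrid_score(query: str, doc: str, alpha: float = 0.5) -> float:
    """Weighted mix of dense cosine similarity and lexical overlap."""
    dense = sum(a * b for a, b in zip(embed(query), embed(doc)))
    return alpha * dense + (1 - alpha) * lexical_score(query, doc)

docs = ["loads the claims dataset", "renders the billing dashboard"]
best = max(docs, key=lambda d: hybrid_score("claims dataset loader", d))
print(best)
```

Tuning alpha per corpus is usually where the accuracy gains come from: code search tends to reward the lexical term, while natural-language queries lean on the dense one.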
LLM Fine-tuning and Domain Adaptation
Business teams required models adapted to specialized enterprise contexts, as generic LLMs did not perform well on domain-specific tasks. I was asked to improve domain-specific text generation and retrieval. I fine-tuned Hugging Face Transformers using LoRA adapters and integrated them into downstream workflows. This reduced manual operational effort by 80%, improved accuracy, and sped up retrieval tasks across support teams.
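The LoRA idea is to freeze the base weight matrix W and train only a low-rank update, so the adapted weight is W' = W + (alpha/r) * B @ A with B of shape d x r and A of shape r x k, where r is much smaller than d and k. A toy pure-Python sketch of the merge step; the shapes and numbers are illustrative, not taken from the actual fine-tune.

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def merge_lora(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A — the merged weight used at inference time."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[w + scale * d for w, d in zip(wrow, drow)] for wrow, drow in zip(W, BA)]

# d = k = 2 with rank r = 1: the update is rank-1, so only d*r + r*k = 4 numbers
# are trained instead of a full d*k delta.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]   # d x r
A = [[0.5, 0.5]]     # r x k
print(merge_lora(W, A, B, alpha=1.0, r=1))  # → [[1.5, 0.5], [1.0, 2.0]]
```

Because the merge folds the adapter back into W, serving infrastructure sees an ordinary weight matrix with no added inference latency.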
LLMOps and Responsible AI Deployment
As adoption of LLMs grew, it became critical to ensure safety, governance, and cost-effectiveness in production. I was responsible for setting up LLMOps practices. I implemented prompt versioning and caching strategies, integrated observability dashboards, and applied evaluation frameworks such as RAGAS and DeepEval. I also collaborated with risk and compliance teams to integrate bias mitigation techniques and human-in-the-loop review steps for sensitive use cases. These practices improved response reliability, reduced drift, and cut inference costs while maintaining trust in the deployed systems.
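One of the cheapest wins among those caching strategies is keying cached responses on a prompt-template version plus the normalized user input, so that any template change automatically invalidates stale entries. A minimal stdlib sketch; the version tag and normalization rules here are illustrative, not the production scheme.

```python
import hashlib

PROMPT_VERSION = "hr-assistant/v3"   # bump on any template change to invalidate the cache
_cache: dict[str, str] = {}

def _key(user_input: str) -> str:
    """Hash of (template version, whitespace/case-normalized input)."""
    normalized = " ".join(user_input.lower().split())
    return hashlib.sha256(f"{PROMPT_VERSION}|{normalized}".encode()).hexdigest()

def cached_complete(user_input: str, llm_call) -> str:
    """Serve from cache when this (version, normalized input) pair was seen before."""
    k = _key(user_input)
    if k not in _cache:
        _cache[k] = llm_call(user_input)
    return _cache[k]

calls = []
fake_llm = lambda q: calls.append(q) or f"answer:{q}"
cached_complete("What is my PTO balance?", fake_llm)
cached_complete("what is my  PTO balance?", fake_llm)  # normalizes to the same key
print(len(calls))  # → 1: only one real model call was made
```

Versioned keys also make observability easier: cache-hit rates can be tracked per template version, which is one way drift after a prompt change shows up.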
Infrastructure Automation
One of the major bottlenecks at Allstate was the time it took to provision servers and applications; a single provisioning workflow could take up to 33 days. I was tasked with automating these workflows to improve delivery timelines. I automated infrastructure provisioning pipelines across distributed environments, covering Windows and Linux builds, IIS servers, databases, and load balancers, using Ansible, Docker, and Jenkins to drive consistency. The result was a system that reduced provisioning time to just 9 minutes across more than 2,500 tasks, while maintaining 97% uptime and eliminating manual bottlenecks.
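Conceptually, the automation turns a serial, ticket-driven checklist into an ordered list of idempotent tasks that a runner executes and verifies, which is also why re-runs are safe. A stdlib-only sketch of that pattern; the task names and state dictionary are illustrative, since the real pipelines ran Ansible playbooks orchestrated through Jenkins.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    run: Callable[[dict], None]
    check: Callable[[dict], bool]   # idempotency guard: skip when already satisfied

def provision(tasks: list[Task], state: dict) -> list[str]:
    """Run each task only if its check fails; return the names that actually executed."""
    executed = []
    for t in tasks:
        if not t.check(state):
            t.run(state)
            executed.append(t.name)
    return executed

state = {"iis": False, "db": True}   # db was already provisioned earlier
tasks = [
    Task("install_iis", lambda s: s.update(iis=True), lambda s: s["iis"]),
    Task("create_db", lambda s: s.update(db=True), lambda s: s["db"]),
]
print(provision(tasks, state))  # → ['install_iis']: only the missing piece runs
```

The check-before-run structure mirrors Ansible's declarative model: a second invocation against the same environment executes nothing, which is what makes 2,500-task pipelines tractable.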
Anomaly Detection and Monitoring
Maintaining service reliability required faster and more accurate anomaly detection. I was responsible for creating a system that could process logs in real time and surface actionable alerts. I engineered a real-time anomaly detection pipeline using Airflow, BERT embeddings, and IsolationForest models, deployed with ECS canary rollouts and CI/CD. This pipeline reduced false positives by 70%, delivered alerts in under 30 seconds, and improved uptime for critical business applications.
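The detection step reduces to scoring each observation and alerting above a threshold. In this sketch a z-score over the series stands in for the IsolationForest, and plain per-minute error counts stand in for BERT embeddings; the numbers and threshold are illustrative.

```python
import statistics

def anomaly_scores(values: list[float]) -> list[float]:
    """Absolute z-score of each point against the series (IsolationForest stand-in)."""
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values) or 1.0   # avoid division by zero on flat series
    return [abs(v - mu) / sigma for v in values]

def alert_indices(values: list[float], threshold: float = 2.0) -> list[int]:
    """Indices whose score exceeds the threshold; these become alerts."""
    return [i for i, s in enumerate(anomaly_scores(values)) if s > threshold]

# Per-minute error counts parsed from logs, with one obvious spike.
error_counts = [3, 4, 2, 3, 5, 4, 3, 60, 4, 3]
print(alert_indices(error_counts))  # → [7]: only the spike crosses the threshold
```

An IsolationForest improves on this stand-in by handling multivariate, non-Gaussian features such as embedding vectors, where a single z-score threshold breaks down.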
Log Ingestion Pipelines
Support teams were spending hours manually reviewing IVR logs. I was tasked with automating log ingestion and triage. I designed and deployed automated pipelines in Python and SQL to process over 50,000 IVR records daily, extracting structured insights and errors for downstream reporting. This reduced manual triage effort by 80% and improved incident response time, making decision-making faster and more consistent.
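The extraction step in a pipeline like this boils down to parsing semi-structured lines into records and aggregating the errors. A stdlib sketch; the log line format shown is invented for illustration, not the actual IVR schema.

```python
import re
from collections import Counter

# Hypothetical IVR log line: "2024-01-15 09:31:02 CALL-8842 ERROR timeout_transfer"
LINE = re.compile(
    r"(?P<ts>\S+ \S+) (?P<call_id>CALL-\d+) (?P<level>\w+) (?P<detail>.+)"
)

def parse(lines: list[str]) -> list[dict]:
    """Turn raw lines into structured records, skipping malformed lines."""
    records = []
    for line in lines:
        m = LINE.match(line)
        if m:
            records.append(m.groupdict())
    return records

def error_summary(records: list[dict]) -> Counter:
    """Count error details for downstream reporting and triage."""
    return Counter(r["detail"] for r in records if r["level"] == "ERROR")

logs = [
    "2024-01-15 09:31:02 CALL-8842 ERROR timeout_transfer",
    "2024-01-15 09:31:05 CALL-8843 INFO call_completed",
    "2024-01-15 09:32:11 CALL-8844 ERROR timeout_transfer",
]
print(error_summary(parse(logs)))
```

Tolerating malformed lines instead of failing the batch matters at 50,000 records a day; dropped lines can be counted separately as their own health signal.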
Intern at Allstate India Private Limited
Jan 2020 - May 2021
Designed and developed Dyna, a virtual onboarding tool built in Unity, which gamified organizational guidelines into an immersive VR experience and improved new-employee engagement during onboarding.
Conducted research on NLP algorithms including BERT and Attention-based models to build a similarity model that enhanced natural language understanding and improved internal automation capabilities.
Placement Coordinator
Aug 2019 - Jan 2020
Coordinated campus recruitment drives for 300–500 students by aligning student profiles with company requirements, improving placement match rates and student success outcomes.
Streamlined the recruitment process by establishing clear communication workflows between students and recruiters, reducing wait times by 30% and improving candidate experience.
Facilitated fair and transparent screening processes, ensuring equal access to opportunities and increasing student satisfaction by 20%.
Managed large groups during recruitment events, maintaining order and efficiency, which ensured that no candidates missed their interviews and the process ran seamlessly.
I am passionate about creating software systems that balance scale, performance, and responsibility. I believe that engineering is not only about solving technical challenges but also about enabling people, whether through better workflows, faster data discovery, or intelligent automation. My focus is on building solutions that are reproducible, interpretable, and aligned with human needs. I thrive in collaborative, agile teams and enjoy working at the intersection of software engineering, distributed systems, and applied AI.