Amit Bashyal | 469-245-6081 | amitbashyal@gmail.com | linkedin.com/in/amit-bashyal
PROFESSIONAL SUMMARY
Ph.D. physicist and research scientist with 10+ years of experience designing, developing, and deploying advanced computational frameworks, scalable machine learning models, and high-performance data analysis solutions. Proven expertise in GPU-accelerated computing, distributed systems, and MLOps, with a strong track record of applying AI/ML to complex problems. Seeking Machine Learning, Data Science, or Research Engineering roles in the finance and technology sectors, with a focus on leveraging cutting-edge AI/ML models and building efficient, scalable infrastructure.
SKILLS
Programming Languages: Python (PyTorch, scikit-learn, Pandas, NumPy), C, C++, CUDA, OpenMP, Jsonnet, Shell scripting
Machine Learning/AI: Deep Learning, Optimization Algorithms (Ax-platform), Natural Language Processing (LLM APIs: OpenAI ChatGPT, Anthropic Claude, Google Gemini), AI Model Training & Evaluation, MLOps, Data Modeling, Prompt Engineering
Cloud & Distributed Computing: Docker, Apptainer (containerization), Slurm, HPC and HTC clusters, Parallel I/O, Workflow Orchestration (PanDA/iDDS), Globus, Rucio
Tools & Technologies: Git, CI/CD, Linux, ROOT, Jamma
Data Analysis: Large-Scale Data Analysis, Statistical Inference, Data Reduction, Data Visualization
EXPERIENCE
Computational Research Associate | Brookhaven National Laboratory, NY | March 2025 – Present
Led AI/ML-driven multi-objective optimization (Ax-platform) to enhance system performance for large-scale scientific experiments.
Engineered scalable computational workflows using Slurm and Panda/iDDS on HPCs and HTCs, accelerating experimental design cycles by 30%.
Managed containerized software deployments (Docker, Apptainer) for complex AI/ML pipelines, ensuring reproducibility and portability.
Developed an AI pair-programming agent using LLMs (ChatGPT), accelerating software development and GPU-accelerated code porting (e.g., ported 5,000+ lines of C++ in under one week).
Researcher | Argonne National Laboratory, Lemont, IL | April 2021 – March 2025
Developed HPC software and data models for future large-scale experiments, improving data I/O efficiency and investigating intelligent lossy compression algorithms to optimize data throughput.
Designed parallel I/O systems for HPC platforms, demonstrating scalability up to 512 compute nodes (~12,000 processes).
Led research on GPU-friendly data models (optimized for broadcasting/reducing operations), enhancing data pipelines for heterogeneous computing environments.
Developed data acceleration workflow pipelines using C, C++ (CUDA, Kokkos, libtorch APIs) and Python (PyTorch, CuPy).
Developed data analysis and statistical inference workflows for distributed computing environments, reducing end-to-end analysis time from ~48 hours to less than 3 hours.
Implemented Git workflows for CI/CD, ensuring robust code quality and automated testing.
Developed a highly parallelized, GPU-accelerated pipeline using PyTorch tensors and DNN models for prompt processing of large raw data streams produced at a near-constant rate.
Presented R&D findings at national and international conferences.
EDUCATION
Ph.D. in Physics | Oregon State University, Corvallis, OR | March 2021 (Graduate Research Award)