Hi, I'm Kesavar Kabilar, and I believe data speaks: I write code that gives it a purposeful voice and a visual interface. I enjoy developing solutions that optimize business processes and use data to drive actionable insights.
I am working at Oracle as a Data Engineer (Data Analytics), optimizing queries, writing scripts, and developing web services. I'm also currently pursuing a Master of Computer Science at the University of Illinois Urbana-Champaign, specializing in Data Science.
In my free time, I like to play chess (2300 Rapid rating), solve a Rubik's Cube (average around 14 seconds), and juggle (5 balls for 20 seconds).
Apr 2024 - Sep 2025
Skills: SQL, Tableau, Python, Shell Scripting, JavaScript, RESTful API
Implemented data retrieval from OCI Database using Python and Shell scripting to power a Tableau dashboard, providing stakeholders with real-time business and system insights.
Optimized and wrote over 80 SQL queries for data extraction workflows using the Oracle Data Integrator API, resulting in a 35% runtime reduction of the data pipelines, and implemented parameterized interfaces for scalability.
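The parameterized-interface idea above can be sketched in miniature. This is an illustrative example only, using Python's built-in sqlite3 rather than the Oracle Data Integrator API used in the actual work; the table and column names are hypothetical.

```python
import sqlite3

def fetch_invoices(conn, status, min_total):
    """Parameterized extraction query: values are bound as parameters rather
    than string-interpolated, so the statement is reusable across inputs and
    safe from injection."""
    sql = """
        SELECT id, customer, total
        FROM invoices
        WHERE status = ? AND total >= ?
        ORDER BY total DESC
    """
    return conn.execute(sql, (status, min_total)).fetchall()

# In-memory demo data standing in for a real warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, customer TEXT, status TEXT, total REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?, ?)",
    [(1, "Acme", "open", 120.0), (2, "Globex", "paid", 80.0), (3, "Acme", "open", 45.0)],
)
rows = fetch_invoices(conn, "open", 50.0)
```

The same pattern scales to pipelines: one parameterized statement serves many extraction runs instead of one hand-written query per client.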
Automated key business processes by writing 200+ JavaScript scheduled and map/reduce scripts for invoice generation, receipt creation, and other workflows, making NetSuite operations 30% faster.
Developed 73 NetSuite REST and SOAP web services integrating with external Power BI, Boomi, and Snowflake systems, enabling system interoperability and facilitating data exchange with client teams.
Led technical transitions for enterprise clients from the legacy NetSuite.com data source to NetSuite2.com, resolving complex field-mapping discrepancies and rewriting legacy SQL into optimized SuiteQL.
Jun 2023 - Mar 2024
Skills: Python, Selenium, Unit Tests, Scrum
Collaborated with Legal, Tech, and Operations teams to design and implement features such as standardized feedback, automated dashboards, and user activity tracking, ensuring all departments had access to their required metrics.
Managed performance for all digital tools by using SQL to identify data integrity risks and executing cleanup projects that maintained a 99.9% accuracy rate across all public legal resources.
Presented weekly Tableau dashboards on user activity metrics to senior management, providing data-driven recommendations that led to a 12% increase in user engagement through targeted content updates.
May 2020 - Aug 2022
Skills: PyTorch, NumPy, RNN, MongoDB, Express.js, React.js, Node.js, Jira
Applied Apache Hadoop's MapReduce framework to parallelize the processing of 8 terabytes of academic data, reducing job completion times by 46% and significantly improving data accessibility for research teams.
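The MapReduce model behind that work can be illustrated with a minimal pure-Python simulation of the map, shuffle, and reduce phases; a real Hadoop job would distribute these stages across the cluster (e.g. via Hadoop Streaming), and the token-counting task here is just a stand-in.

```python
from collections import defaultdict
from itertools import chain

def mapper(record):
    # Map phase: emit (key, 1) per token. In Hadoop Streaming this would
    # read lines from stdin and print tab-separated key/value pairs.
    for token in record.split():
        yield (token.lower(), 1)

def reducer(key, values):
    # Reduce phase: aggregate all values seen for one key.
    yield (key, sum(values))

def run_job(records):
    # Shuffle phase: group intermediate pairs by key before reducing.
    groups = defaultdict(list)
    for k, v in chain.from_iterable(mapper(r) for r in records):
        groups[k].append(v)
    return dict(kv for k, vs in sorted(groups.items()) for kv in reducer(k, vs))

counts = run_job(["deep learning", "Deep networks", "learning"])
```

Because the mapper and reducer are stateless per key, the framework can run many copies of each in parallel, which is what drives the speedup on terabyte-scale inputs.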
Developed C++ algorithms using cached memory pools for specific data transformation tasks, minimizing computational overhead and reducing the memory bottlenecks that were causing performance regressions.
Aug 2024 - Apr 2026
Achieved the highest grade among all enrolled students in Data Mining (97%) and Data Cleaning (109%), excelling in data transformation, outlier detection, feature engineering, and clustering.
Presented a machine learning project on predictive modeling using Scikit-learn's KMeans implementation at the UIUC Big Data 2025 Conference, showcasing insights into user engagement patterns.
Coursework: Data Mining, Data Visualization, Applied Machine Learning, Natural Language Processing, Deep Learning for Healthcare, Cloud Computing Application, Scientific Visualization, and Theory and Practice of Data Cleaning.
Sep 2019 - Jun 2023
GPA: 3.87/4.00 (High Distinction), Dean's List Scholar 2020-2023
Coursework: Machine Learning, Neural Network and Deep Learning, Computer Vision, Algorithm Design and Analysis, SQL Databases, Data Structures and Analysis, Linear Algebra, Probability and Inductive Logic, Numerical Methods.
Improved caching and database query response times by utilizing AWS RDS Aurora MySQL, ElastiCache (Redis), and AWS Lambda.
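The read path behind that improvement is the standard cache-aside pattern. This sketch uses a plain dict in place of ElastiCache (Redis) and a stub function in place of an Aurora query, purely for illustration.

```python
import time

class CacheAside:
    """Cache-aside read path: check the cache first, fall back to the
    database on a miss, then populate the cache with a TTL. A dict stands
    in for Redis; db_fetch stands in for an Aurora MySQL query."""

    def __init__(self, db_fetch, ttl_seconds=300):
        self.db_fetch = db_fetch
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, expires_at)
        self.hits = self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            self.hits += 1
            return entry[0]      # fast path: served from cache
        self.misses += 1
        value = self.db_fetch(key)                        # slow path: query the DB
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = CacheAside(db_fetch=lambda k: f"row-for-{k}")
first = cache.get("user:42")    # miss: hits the "database"
second = cache.get("user:42")   # hit: served from cache
```

The TTL bounds staleness: after it expires, the next read falls through to the database and refreshes the cached value.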
Analysis of Wikipedia dataset with Big Data Technologies: Apache Spark, MapReduce, PySpark, Docker.
A data visualization of global annual population change per selected year, built with Tableau, with C++ used for data cleaning.
Data management system using Hadoop tools: Apache HBase, Apache ZooKeeper, and Apache Phoenix, within a Dockerized environment and Java for data manipulation and join simulation.
A recurrent neural network model to predict future stock market prices according to previous price ranges.
A chatbot implemented using AWS Lex that accesses DynamoDB to collect location information about cities and answers user queries about the stored dataset.
Frequent Itemset Mining with Apriori Algorithm, Sequential Pattern Mining with PrefixSpan Algorithm, Agglomerative Hierarchical Clustering for Geographic Data, Evaluating Clustering Quality with Jaccard Similarity and NMI, Decision Tree Classification with Custom Maximum Depth, Naive Bayes Classifier for Animal Classification.
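Of the techniques listed above, Apriori is compact enough to sketch. This is a minimal level-wise implementation with illustrative grocery-basket data, not the coursework's actual code or dataset.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori sketch: generate candidate itemsets level by level,
    pruning any candidate whose support falls below min_support. Relies on
    the downward-closure property: supersets of infrequent sets are infrequent."""
    transactions = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in transactions) / len(transactions)

    items = sorted({i for t in transactions for i in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        level = [c for c in level if support(c) >= min_support]
        for c in level:
            frequent[c] = support(c)
        # Candidate generation: union pairs from this level, keep size k+1.
        k = len(level[0]) + 1 if level else 0
        level = sorted({a | b for a, b in combinations(level, 2) if len(a | b) == k})
    return frequent

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk", "eggs"}]
result = apriori(baskets, min_support=0.5)
```

Here every single item and every pair reaches 50% support, but the triple {milk, bread, eggs} appears in only one of four baskets and is pruned.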
An interactive D3.js visualization exploring how car efficiency increased while horsepower declined in the aftermath of the 1970s oil crisis.
A dynamic Cloud Infrastructure system that automatically scales EC2 instances up or down with Load Balancer based on incoming POST and GET request workloads.
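The scaling decision at the heart of that system can be sketched as a simple threshold policy. This is an illustration of the control logic only; the real project wires such a policy to EC2 instances and a load balancer, and the thresholds and bounds here are made up, not the project's values.

```python
def desired_capacity(current, requests_per_instance,
                     scale_up_at=100, scale_down_at=30,
                     min_instances=1, max_instances=10):
    """Threshold-based scaling decision, the kind of policy an autoscaler
    evaluates against per-instance request metrics from the load balancer."""
    if requests_per_instance > scale_up_at:
        current += 1             # launch one more instance behind the balancer
    elif requests_per_instance < scale_down_at and current > min_instances:
        current -= 1             # drain and terminate one instance
    # Clamp to the configured fleet bounds.
    return max(min_instances, min(current, max_instances))

# Starting from 2 instances, react to heavy, moderate, and light load.
plan = [desired_capacity(2, load) for load in (150, 80, 10)]
```

Stepping capacity by one instance at a time with min/max clamps is a deliberately conservative choice that avoids oscillating between fleet sizes under bursty traffic.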
Everyday Excel (Advanced) Portfolio: Demonstrating mastery in advanced Excel techniques, including dynamic formulas, data validation, VBA scripting, data integration, and sensitivity analysis for dynamic real-time calculations and complex data manipulation.
Accomplished a 20% increase in user satisfaction by leading the MERN stack implementation of a client application for learning about, tracking, and managing Bitcoin, blockchains, and NFTs.
Reduced response times by 15% for GET, POST, and PUT operations by developing and integrating RESTful APIs, significantly improving overall backend performance.
Engineered polished, responsive web pages, resulting in a 25% boost in client engagement and a 30% increase in average user session duration, while strong teamwork kept project milestones 10% ahead of schedule.
Proficient in virtualization and containerization with Docker, including Dockerfile creation and multi-container orchestration with Compose and Airflow; familiar with Kubernetes core concepts, cluster architecture, and deployment in cloud environments, GitHub Codespaces, and AI-driven tooling; experienced in containerizing and deploying applications and addressing production issues through cloud orchestration and SRE practices.