Poke around my AI projects here — curious what you’ll think!
I led next-gen sensor validation tests across three platforms, generating 1,000+ ML-critical scenarios and executing, reviewing, and analyzing 2,000+ verification tests. I pioneered fog (eMOR) testing and coordinated high-stakes VRU training-data campaigns that unblocked and fully expanded San Francisco's ODD, earning the inaugural "Challenge Coin" for impact and teamwork. I also scaled readiness testing 10x, built a lidar ground-truth reflectance (BSDF) database, drove 30%+ QoQ gains in V&V testing efficiency via JIRA dashboards, and standardized operations with SOPs, training playbooks, and cross-functional leadership.
In the field, I’m the go-to point person who deeply understands scene dynamics, serves as the technical root-cause troubleshooter, and ensures safe, flawless execution. To clients and customers, I’m the empathetic technical translator who grasps their needs, delivers clear recommendations to senior leadership, and turns complex challenges into trusted, scalable solutions.
An AI perception and annotation analysis platform that transforms raw visual data into structured, high-fidelity scene intelligence. Going beyond standard object detection, it interprets environmental context, object interactions, and dynamic relationships within a scene, then evaluates potential safety risks, anomalies, and edge cases with precise severity scoring and threat assessment. The result is a bridge between perception training data, edge-case scene understanding, and severity/threat assessment.
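To make "structured scene intelligence with severity scoring" concrete, here is a minimal illustrative sketch of what such an output record could look like. All class names, fields, and the scoring heuristic are hypothetical assumptions for illustration, not the platform's actual schema or algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class DetectedObject:
    # Hypothetical detected object: class label plus a bounding box (x, y, w, h).
    label: str
    bbox: tuple

@dataclass
class SceneAssessment:
    # Hypothetical structured output: objects, their interactions,
    # flagged anomalies/edge cases, and an overall severity score.
    objects: list
    interactions: list   # e.g. ("pedestrian_1", "crossing_in_front_of", "ego")
    anomalies: list      # flagged edge cases
    severity: float = 0.0  # 0.0 (benign) .. 1.0 (critical)

def score_severity(assessment: SceneAssessment) -> float:
    # Toy heuristic: more risky interactions and anomalies raise the score,
    # capped at 1.0. A real threat assessment would weigh geometry,
    # dynamics, and scene context rather than simple counts.
    raw = 0.3 * len(assessment.interactions) + 0.5 * len(assessment.anomalies)
    assessment.severity = min(1.0, raw)
    return assessment.severity

scene = SceneAssessment(
    objects=[DetectedObject("pedestrian", (120, 80, 40, 90))],
    interactions=[("pedestrian_1", "crossing_in_front_of", "ego")],
    anomalies=["occluded_entry_from_parked_vehicle"],
)
print(score_severity(scene))  # 0.8
```

The point of the sketch is the shape of the output: a scene is not just a list of boxes but objects plus relationships plus anomaly flags, with severity derived from the latter two.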
Rapidly surface the long-tail scenarios that matter most and accelerate perception-model training with actionable insights. Demo here.