I design and deploy AI systems that transform how educational content is created, aligned, reviewed, and delivered. My work merges instructional design, machine learning, and automation engineering to solve high-volume, high-impact problems across World Languages, Math, ELA, and compliance workflows.
Across these projects, I’ve eliminated 1,800+ hours of manual work, avoided $140k+ in labor costs, and built reusable AI pipelines and intelligent content systems that scale across programs and grade bands.
Below is a curated selection of my work.
I design systems that take raw curriculum and turn it into clean, machine-readable, AI-friendly data that can actually power products instead of just sitting in PDFs.
Impact
Converted large sets of Math (479 lessons, Grades 6–Algebra II) and World Languages content into structured JSON and knowledge-graph-ready formats.
Built a cross-program curriculum knowledge graph that maps content to standards, themes, skills, assessments, and learning goals.
Standardized all math expressions into MathML so they render safely in browsers and are usable by downstream tools.
Created a reusable data layer that feeds search, analytics, RAG systems, and future adaptive models.
Enabled retrieval-augmented generation (RAG) that answers educator questions using our actual curriculum, with standards-aware filtering and context, instead of generic model guesses.
What I Built
A schema for curricular entities: concepts, skills, activities, standards, assessments, scaffolds, pacing, and materials.
Python pipelines to ingest World Languages and Math content into structured JSON and knowledge graph inputs.
Prompt-driven extraction workflows that capture lesson metadata, learning goals, big ideas, essential questions, pacing guides, activity structures, key terms, and required materials.
Full MathML transformation for every math expression field so the content is both human-readable and machine-actionable.
Vector-based retrieval and standards-aware filters that sit on top of this data layer to power personalized AI support for educators.
RAG pipelines that ground AI answers in our curriculum and knowledge graph, enabling smarter recommendations, alignment analysis, and more adaptive learning experiences.
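As a concrete sketch, the schema can be expressed as typed entities that serialize straight to knowledge-graph-ready JSON. The class and field names below are illustrative stand-ins, not the production schema:

```python
"""Illustrative sketch of a curricular-entity schema; names and sample
values are hypothetical, not the production data model."""
from dataclasses import dataclass, field, asdict
import json

@dataclass
class Activity:
    title: str
    duration_minutes: int
    scaffolds: list[str] = field(default_factory=list)

@dataclass
class Lesson:
    lesson_id: str
    grade_band: str
    standards: list[str]          # standard codes the lesson maps to
    learning_goals: list[str]
    key_terms: list[str]
    materials: list[str]
    activities: list[Activity]
    # Math expressions are stored as presentation MathML, not raw text,
    # so they render safely in browsers and stay machine-actionable.
    expressions_mathml: list[str] = field(default_factory=list)

lesson = Lesson(
    lesson_id="ALG1-U2-L3",
    grade_band="Algebra I",
    standards=["CCSS.MATH.CONTENT.HSA.REI.B.3"],
    learning_goals=["Solve linear equations in one variable."],
    key_terms=["coefficient", "inverse operation"],
    materials=["graphing calculator"],
    activities=[Activity("Warm-up: balance puzzles", 10)],
    expressions_mathml=[
        "<math><mrow><mn>2</mn><mi>x</mi><mo>+</mo><mn>3</mn>"
        "<mo>=</mo><mn>11</mn></mrow></math>"
    ],
)
record = json.dumps(asdict(lesson), indent=2)  # knowledge-graph-ready JSON
```

Because every entity serializes to plain JSON, the same records can feed search indexes, graph ingestion, and RAG retrieval without per-consumer reformatting.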
Impact
Generated 60 full teacher lessons, including 48 detailed 20-day pacing plans.
Automated 70–80% of the workflow.
Eliminated 527.2 hours of manual work and saved $26,360.
What I Built
Rule-driven lesson generation system using LLMs + structured prompts.
Outputs include pacing, scaffolding, activity flow, and learning goals.
Workflow now produces BTS-ready curriculum materials at scale.
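A minimal sketch of the structured-prompt approach: pacing rules live in one place and are injected into every generation request, which is what keeps outputs consistent at scale. The rule values and template wording below are illustrative, not the production prompts:

```python
"""Sketch of rule-driven prompt construction for lesson generation.
Rules and template text are illustrative stand-ins."""

PACING_RULES = {
    "days": 20,
    "minutes_per_day": 45,
    "required_sections": ["learning goals", "warm-up", "core activity",
                          "scaffolds", "exit ticket"],
}

def build_lesson_prompt(unit_title: str, standards: list[str],
                        rules: dict = PACING_RULES) -> str:
    """Compose a structured prompt that pins the LLM to the pacing rules,
    so every generated lesson follows the same skeleton."""
    sections = "\n".join(f"- {s}" for s in rules["required_sections"])
    return (
        f"Generate a {rules['days']}-day pacing plan for the unit "
        f"'{unit_title}' ({rules['minutes_per_day']} minutes per day).\n"
        f"Align every lesson to these standards: {', '.join(standards)}.\n"
        f"Each day must include the sections:\n{sections}\n"
        "Return valid JSON only."
    )

prompt = build_lesson_prompt("Linear Equations",
                             ["CCSS.MATH.CONTENT.HSA.REI.B.3"])
```

Centralizing the rules means a pacing change (say, 45 to 50 minutes) propagates to all future generations with a one-line edit.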
I built an automated standards-alignment engine that turns curriculum activities into machine-readable, standards-linked metadata for large-scale reviews, RFPs, and product decisions.
Impact
Automated ACTFL, CEFR, and state standard tagging across core and ancillary materials, replacing over 1,000 hours of manual work and avoiding roughly $100k in labor cost.
Produced consistent, auditable tagging with a one-sentence rationale for every tag, making the output usable for state reviews, RFP submissions, and internal QA.
Created a reusable alignment layer that now feeds search, reporting, and AI workflows instead of treating standards as an afterthought spreadsheet.
What I Built
A standards-classification engine running on AWS Lambda with Claude 3 Sonnet, designed to scale across programs and standards sets (ACTFL, CEFR, and state frameworks).
Automated extraction and normalization of activity text, converting messy curriculum language into structured inputs that models can reliably interpret.
A tagging pipeline that applies traceable, reproducible logic: each tag comes with a short, human-readable rationale, enabling audits, corrections, and confidence for high-stakes use (state reviews, adoptions, RFPs).
A flexible metadata structure that can plug into future AI systems (RAG, recommendation, analytics) without rebuilding the tagging logic from scratch.
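In sketch form, the core of the engine is a prompt builder plus a strict parser that rejects any tag arriving without its rationale, which is the property that makes the output auditable. Prompt wording, field names, and the handler shape below are illustrative; the actual Bedrock call is elided:

```python
"""Sketch of the standards-tagging step. Prompt text, JSON fields, and
the mock model output are illustrative; a real handler would call
Bedrock via boto3 ('bedrock-runtime' invoke_model)."""
import json

def build_tag_prompt(activity_text: str) -> str:
    """Ask the model for tags in a fixed JSON shape, one rationale each."""
    return (
        "Classify this curriculum activity against ACTFL standards.\n"
        'Return JSON only: {"tags": [{"standard": "...", "rationale": "..."}]}\n'
        f"Activity: {activity_text}"
    )

def parse_tags(model_output: str) -> list[dict]:
    """Validate the model's JSON so every tag carries a rationale --
    tags without one are rejected rather than silently accepted."""
    data = json.loads(model_output)
    for tag in data["tags"]:
        if not tag.get("standard") or not tag.get("rationale"):
            raise ValueError("every tag needs a standard and a rationale")
    return data["tags"]

def handler(event, context):
    """AWS Lambda entry point (sketch): build prompt, call model, parse.
    'model_output' in the event is a stand-in for the elided model call."""
    prompt = build_tag_prompt(event["activity_text"])
    raw = event.get("model_output", '{"tags": []}')
    return {"prompt": prompt, "tags": parse_tags(raw)}
```

Failing fast on a missing rationale is what keeps the tags defensible in high-stakes settings: an auditor can trace every tag back to a stated reason.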
I built a reusable AI framework that automates curriculum and instructional materials reviews across multiple states, each with its own standards and social content requirements. The system uses:
AWS Lambda for scalable, event-driven workflow orchestration
S3 as a structured content hub for lesson files, rubrics, and outputs
Bedrock (Claude + OpenAI) for LLM-based analysis and classification
Python utilities for modular extraction, normalization, and rule application
Vector search + standards filtering for RAG-based alignment checks
MathML, JSON, CSV, and Sheets for clean, review-ready outputs
The key outcome is speed and reusability: new state review pipelines can be spun up quickly by swapping in a different set of state standards and social content rules, without rebuilding the system from scratch. This allows fast deployment of state-specific automated reviews that flag alignment issues, potential social content concerns, and suitability for adoption.
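The swap-in mechanism can be sketched as a per-state config driving an otherwise unchanged pipeline. The state entries, standards-set names, rule-pack IDs, and S3 keys below are illustrative placeholders:

```python
"""Sketch of config-driven state review setup. All config values here
are illustrative placeholders, not production identifiers."""

STATE_CONFIGS = {
    "TX": {
        "standards_set": "TEKS",
        "social_content_rules": ["rule-pack-a"],  # placeholder rule IDs
        "rubric_key": "rubrics/tx.json",          # illustrative S3 key
    },
    "FL": {
        "standards_set": "BEST",
        "social_content_rules": ["rule-pack-b"],
        "rubric_key": "rubrics/fl.json",
    },
}

def build_review_pipeline(state: str) -> dict:
    """Spinning up a new state review = swapping in that state's config;
    extraction, normalization, and LLM steps stay unchanged."""
    config = STATE_CONFIGS[state]
    return {
        "state": state,
        "standards_set": config["standards_set"],
        "steps": ["extract", "normalize",
                  f"align:{config['standards_set']}",
                  "social_content_screen", "report"],
    }
```

Adding a new state is then a config entry, not an engineering project, which is what makes fast state-specific deployment possible.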
Across these systems, I’ve built a cohesive automation and intelligence framework using:
AWS Lambda for scalable workflow orchestration
S3 as a structured content hub
Bedrock (Claude + OpenAI) for LLM-based processing
Python with modular extraction and normalization utilities
Vector search + standards filtering for RAG
MathML, JSON, CSV, and Sheets for clean machine and human outputs
This technical foundation lets new automation projects spin up quickly and integrate cleanly with existing pipelines.
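The vector search + standards filtering step can be sketched with toy hand-made embeddings; a production system would use a real embedding model and vector store, but the filter-then-rank logic is the same:

```python
"""Toy sketch of vector retrieval with a standards filter. Embeddings
are hand-made 3-d vectors for illustration only."""
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Each chunk carries its embedding plus standards metadata for filtering.
CHUNKS = [
    {"text": "Solving two-step equations", "standard": "HSA.REI.B.3",
     "vec": [0.9, 0.1, 0.0]},
    {"text": "Graphing linear functions", "standard": "HSF.IF.C.7",
     "vec": [0.2, 0.8, 0.1]},
]

def retrieve(query_vec, standard=None, k=1):
    """Filter by standard first, then rank survivors by similarity,
    so answers stay grounded in standards-appropriate curriculum."""
    pool = [c for c in CHUNKS if standard is None or c["standard"] == standard]
    return sorted(pool, key=lambda c: cosine(query_vec, c["vec"]),
                  reverse=True)[:k]

hits = retrieve([1.0, 0.0, 0.0], standard="HSA.REI.B.3")
```

Filtering on standards metadata before ranking is what prevents the RAG layer from surfacing content that is semantically similar but aligned to the wrong standard.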