Context
NASA’s SBIR/STTR program supports early-stage companies developing cutting-edge technologies with potential for both commercial and federal application. While the program has backed thousands of startups, measuring its long-term commercial impact remained complex and inconsistent—particularly when trying to connect company outcomes directly to NASA’s early support.
To address this, NASA partnered with CrowdPlat, a consulting and crowdsourcing firm that delivers high-skill project teams for government and enterprise innovation challenges. As part of this collaboration, I was brought on as a Consulting Analyst to co-develop a methodology for tracing commercialization success back to NASA’s funding interventions—supporting both performance evaluation and program storytelling at scale.
Timeline: Jan 2025 – Present
Scope: Develop a methodology to trace startup commercialization outcomes to NASA's seed funding initiatives.
Impact: Created a standardized, AI-assisted process for identifying and scoring the strength of attribution across KPIs.
The Problem
Despite having access to company data and funding timelines, there was no standardized approach to:
Track commercialization outcomes (IPOs, acquisitions, licensing deals, etc.)
Assess the strength of linkage between success metrics and NASA's involvement
Enable efficient report generation across dozens of companies
Manual research was:
Time-consuming
Inconsistent across analysts
Incomplete in detecting indirect outcomes or citations
Methodology Towards a Solution
We began by engaging directly with stakeholders from NASA’s SBIR and SBIR Ignite programs to understand:
The purpose behind measuring commercialization traceability
The existing gaps in how success was tracked or interpreted
The key performance indicators (KPIs) they wanted to prioritize (e.g., product launches, acquisitions, licensing deals, follow-on investments)
We also reviewed public-facing documentation and prior reports to better understand:
The structure of SBIR/STTR phases
Existing company data repositories
Patterns of funding-to-market trajectories
This discovery phase clarified the lack of a consistent protocol across teams and the need for a model that could assess attribution strength, not just the presence of outcomes.
We then launched a manual research process across a cohort of NASA-funded startups. For each KPI, we:
Searched public sources and databases (e.g., news articles, patents, investor updates, grant reports, and federal databases such as NASA TechPort)
Tracked references to NASA funding, grants, or partnerships in company disclosures or third-party media
Recorded findings in a structured format, tagging the strength and clarity of any attribution links
The manual process helped surface a key insight: many commercialization successes were only indirectly connected to NASA’s role—and these connections varied widely in how explicitly they were documented.
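For illustration, a finding from this research phase could be captured in a small structured record like the sketch below. The field names, KPI labels, and example values here are hypothetical placeholders rather than the actual template used on the engagement.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AttributionFinding:
    """One piece of evidence linking a company outcome to NASA support (illustrative schema)."""
    company: str            # company under review
    kpi: str                # e.g., "acquisition", "licensing_deal", "follow_on_investment"
    source_type: str        # e.g., "news_article", "patent", "grant_report"
    source_url: str         # where the evidence was found
    cites_nasa: bool        # does the source explicitly mention NASA funding or partnership?
    notes: str = ""         # analyst commentary on the strength and clarity of the link
    tags: List[str] = field(default_factory=list)  # e.g., ["direct_citation", "post_grant_growth"]

# Example record: an explicit mention of NASA SBIR support in press coverage (hypothetical data)
finding = AttributionFinding(
    company="Example Aerospace Co.",
    kpi="licensing_deal",
    source_type="news_article",
    source_url="https://example.com/press-release",
    cites_nasa=True,
    notes="Press release credits NASA SBIR Phase II funding for the licensed sensor technology.",
    tags=["direct_citation"],
)
```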
To address this variability and ensure traceability assessments were consistent, I co-developed a quantitative scoring framework as part of the consulting team's effort.
This framework included:
A list of core KPIs
A rubric to assign attribution strength scores for each data point (e.g., “direct citation of NASA funding” = high score; “uncited growth following grant period” = low score)
A logic model that connected score ranges to interpretation tiers (e.g., Strong Attribution, Moderate Attribution, Weak/None)
The scoring system allowed for a more objective, standardized approach to analyzing startup trajectories—critical for making comparisons across firms and funding cycles.
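As a rough sketch of how a rubric and logic model of this kind fit together, the snippet below maps evidence types to point values and score ranges to interpretation tiers. The specific evidence categories, weights, and tier cut-offs are invented for demonstration; the actual rubric values were defined with the consulting team and NASA stakeholders.

```python
# Illustrative attribution rubric: evidence type -> point value (weights are hypothetical)
RUBRIC = {
    "direct_citation_of_nasa_funding": 5,    # company or media explicitly credits NASA funding
    "nasa_partnership_or_contract": 4,       # documented NASA partnership tied to the outcome
    "technology_traceable_to_sbir_phase": 3, # product lineage traces back to SBIR/STTR work
    "uncited_growth_after_grant_period": 1,  # outcome follows the grant but is not attributed
}

def score_findings(evidence_types: list) -> int:
    """Sum rubric points across all evidence recorded for one KPI."""
    return sum(RUBRIC.get(e, 0) for e in evidence_types)

def attribution_tier(score: int) -> str:
    """Map a total attribution score to an interpretation tier (thresholds are hypothetical)."""
    if score >= 8:
        return "Strong Attribution"
    if score >= 4:
        return "Moderate Attribution"
    return "Weak/None"

# Example: one KPI backed by a direct citation plus an uncited post-grant outcome
total = score_findings(["direct_citation_of_nasa_funding", "uncited_growth_after_grant_period"])
print(total, attribution_tier(total))  # 6 -> "Moderate Attribution"
```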
To scale the traceability process, I initiated a collaboration with a data scientist. We explored:
How LLM-powered AI prompts could replicate our manual research process at scale
The feasibility of using AI to scan publicly available content for NASA-attributed success events
The integration of the scoring logic into semi-automated pipelines
This exploratory collaboration informed our thinking on how AI could support future research workflows at scale.
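The sketch below shows one way such a semi-automated step could be wired together, assuming an OpenAI-style chat completion client. The model choice, prompt wording, and JSON response contract are assumptions for illustration, not the pipeline we ultimately built.

```python
import json
from openai import OpenAI  # assumes the openai Python SDK is installed and an API key is configured

client = OpenAI()

PROMPT_TEMPLATE = (
    "You are researching commercialization outcomes for {company}.\n"
    "KPI of interest: {kpi}.\n"
    "Given the source text below, answer in JSON with keys "
    "'evidence_found' (bool), 'cites_nasa' (bool), and 'summary' (string).\n\n"
    "Source text:\n{source_text}"
)

def extract_attribution_signal(company: str, kpi: str, source_text: str) -> dict:
    """Ask an LLM to flag whether a source links a KPI outcome to NASA support (illustrative)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(
            company=company, kpi=kpi, source_text=source_text)}],
    )
    # The JSON contract above is an assumption; a production pipeline would validate the output
    # before feeding it into the scoring rubric.
    return json.loads(response.choices[0].message.content)
```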
Finally, I contributed to the design and drafting of a Standard Operating Procedure (SOP) to support knowledge transfer and repeatability. The SOP included:
An overview of the methodology and rationale
KPI definitions and attribution logic
Research steps and suggested sources
Sample AI prompts for each KPI
Guidelines on how to score and interpret results
Reporting templates and sample deliverable structures
The goal was to ensure that NASA employees or future consultants could replicate our methodology with consistency—regardless of technical or research expertise.
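To give a flavor of what a per-KPI prompt appendix of this kind might contain, here is one hypothetical pattern; the KPI list and prompt wording are illustrative, not the SOP's actual text.

```python
# Hypothetical per-KPI prompt skeletons of the sort an SOP appendix might document
KPI_PROMPTS = {
    "acquisition": (
        "Search the provided sources for any acquisition of {company}. "
        "Report the acquirer, date, and whether NASA funding or technology is mentioned."
    ),
    "licensing_deal": (
        "Identify licensing agreements involving {company}'s technology. "
        "Note whether the licensed technology originated in NASA SBIR/STTR work."
    ),
    "follow_on_investment": (
        "List venture or strategic investments in {company} after its NASA award dates. "
        "Flag any investor materials that cite the NASA award."
    ),
}

def build_prompt(kpi: str, company: str) -> str:
    """Fill the SOP-style prompt skeleton for a given KPI and company (illustrative)."""
    return KPI_PROMPTS[kpi].format(company=company)

print(build_prompt("acquisition", "Example Aerospace Co."))
```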
Ongoing Approach
Throughout the process, we presented interim outputs to NASA stakeholders to gather feedback on:
The clarity and usability of the framework
The realism of data sourcing assumptions
The alignment with program objectives
This feedback loop was crucial in refining both the scoring logic and the reporting structure, ensuring buy-in from decision-makers and relevance to real program needs.