AI-assisted development is accelerating software sprawl, which means even more sources of vulnerabilities to track. Managing this is enormously complex: organizations rely on many disparate testing and reporting tools to understand their application inventory and its associated risks. As more organizations grapple with this challenge in the AI era, existing problems of fragmented visibility will only grow. In this session, we’ll discuss how an Application Security Posture Management (ASPM) approach can help lay the right foundations of accountability, operations, and transparency to future-proof your AppSec program. We’ll cover best practices for consolidating data, integrating with developer workflows, and scaling remediation activities to address AI risks in software development.
Natasha Gupta leads AppSec platform strategy at Black Duck Software, a subsidiary of Synopsys. She has worked in the cybersecurity and enterprise networking space for over a decade. Before Black Duck, Natasha was at ServiceNow, where she managed go-to-market for ServiceNow Security Operations, a SOAR platform for incident and vulnerability management. She previously held product marketing and software product management roles at Imperva and A10 Networks.
In the age of Generative AI, data governance must evolve from a traditional 'gatekeeper' role into an 'enabler' of innovation. A successful Generative AI strategy depends fundamentally on a robust data governance framework. The rise of LLMs and other GenAI tools, which thrive on vast amounts of unstructured data, demands new approaches to data management. This talk will focus on data engineering processes and on embedding data governance into the AI application itself. Ultimately, a modern data governance framework is essential for minimizing risk, building trust, and unlocking the full potential of Generative AI.
Nandini Singh is a Staff Technical Program Manager (TPM) at Google specializing in advanced data analytics and cybersecurity operations. Her work spans vulnerability management, regulatory compliance, and the security of open-source software ecosystems. At Google, she has played a key role in advancing initiatives such as the Secure AI Framework (SAIF), which establishes industry standards for safer AI adoption, and in shaping open-source security efforts such as Security Scorecards, which help organizations assess risk in widely used software libraries. She has also contributed to compliance programs for major digital regulations, including the Digital Services Act (DSA), ensuring that technology systems align with evolving global policy requirements. With a strong foundation in data modeling and security risk governance, Nandini blends technical insight with policy awareness to strengthen trust and resilience across complex systems.