With the increasing capabilities of AI, the model itself, the hardware it runs on, the outputs it generates, and the artifacts it creates have all emerged as high-value intellectual assets that require robust protection throughout development, deployment, and distribution. Proactive mechanisms such as watermarking embed imperceptible, verifiable signatures into models or their outputs to establish provenance and deter unauthorized use, while attestation frameworks verify that only authorized models execute on trusted hardware platforms, providing device-level IP protection and guarding against tampering or piracy. These protections not only help assert ownership over AI models and their generated content but also strengthen trust and traceability across increasingly complex software and hardware supply chains, addressing evolving legal, commercial, and security challenges in the AI era.
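To make the output-watermarking idea concrete, the sketch below illustrates one generic scheme: a secret, pseudo-random "green list" of tokens is favored during generation, and detection tests whether a text contains statistically too many green tokens to be unwatermarked. This is a minimal illustration of the concept only, not the method used by any specific framework discussed here; the toy vocabulary, hash-based seeding, and bias strength are assumptions chosen for clarity.

```python
# Minimal sketch of generation-time text watermarking via green-list logit
# biasing and z-test detection. Illustrative only; vocabulary, seeding, and
# bias value are assumptions, not any published framework's parameters.
import hashlib
import math
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy vocabulary
GAMMA = 0.5   # fraction of the vocabulary placed on the green list
DELTA = 4.0   # logit bias added to green-list tokens during generation

def green_list(prev_token: str) -> set[str]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(GAMMA * len(VOCAB))))

def generate(n_tokens: int, seed: int = 0) -> list[str]:
    """Sample tokens from random toy logits, boosting green-list tokens by DELTA."""
    rng = random.Random(seed)
    out = ["<bos>"]
    for _ in range(n_tokens):
        green = green_list(out[-1])
        logits = {t: rng.gauss(0.0, 1.0) + (DELTA if t in green else 0.0) for t in VOCAB}
        probs = {t: math.exp(v) for t, v in logits.items()}
        total = sum(probs.values())
        out.append(rng.choices(list(probs), weights=[p / total for p in probs.values()])[0])
    return out[1:]

def detect(tokens: list[str]) -> float:
    """Return a z-score against the null hypothesis 'this text is unwatermarked'."""
    hits = sum(t in green_list(prev) for prev, t in zip(["<bos>"] + tokens, tokens))
    n = len(tokens)
    return (hits - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

if __name__ == "__main__":
    text = generate(200)
    print(f"z-score on watermarked sample: {detect(text):.1f}")  # large positive value
```

Because the green list is derived from a secret seed, only a party holding the key can run the detector, while the small logit bias leaves the generated text's quality largely intact.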
ACESLab’s research has advanced a unified suite of watermarking and attestation frameworks for generative AI. REMARK-LLM, RoSeMary, and SWaRL embed robust, quality-preserving, high-capacity watermarks into LLM-generated natural language and code, enabling reliable provenance verification and misuse detection in LLM-generated content. EmMark and AttestLLM protect model and hardware IP on edge devices through optimization-based watermark encoding that preserves model fidelity and software-hardware co-design that improves verification efficiency. ICMarks and AutoMarks embed imperceptible signatures into AI-created chip layout artifacts: ICMarks strategically co-optimizes watermark constraints with physical design constraints to preserve power-performance-area metrics, while AutoMarks leverages graph neural networks to automate and accelerate watermark search and insertion with high fidelity and transferability across layouts. Together, these lines of work address the full stack of IP protection and traceability challenges in LLM distribution and supply chains.
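For intuition on weight-space watermarking of the kind used to protect model IP, the toy sketch below embeds a binary owner signature into a small, key-selected set of parameters and later recovers it by reading their signs. This is a generic illustration of the idea behind optimization-based schemes such as EmMark, not their actual algorithm; the selection rule, perturbation size, and key format are assumptions.

```python
# Minimal sketch of weight-space watermarking: embed +/-1 signature bits into
# key-selected coordinates of a weight matrix, extract them with the same key.
# Illustrative only; not the algorithm of any specific framework.
import numpy as np

def embed(weights: np.ndarray, signature: np.ndarray, key: int, eps: float = 1e-3):
    """Write signature bits into secret positions via small sign-setting edits."""
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=signature.size, replace=False)  # secret positions
    wm = weights.copy().ravel()
    # Set the sign of each selected weight to its signature bit, clamping tiny
    # magnitudes to eps so the bit survives; all other weights are untouched.
    wm[idx] = signature * np.maximum(np.abs(wm[idx]), eps)
    return wm.reshape(weights.shape)

def extract(weights: np.ndarray, key: int, n_bits: int) -> np.ndarray:
    """Recover the signature by reading signs at the key-selected positions."""
    rng = np.random.default_rng(key)
    idx = rng.choice(weights.size, size=n_bits, replace=False)
    return np.sign(weights.ravel()[idx]).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.02, size=(256, 256))   # stand-in for one weight matrix
    sig = rng.choice([-1, 1], size=64)            # 64-bit owner signature
    W_marked = embed(W, sig, key=42)
    assert np.array_equal(extract(W_marked, key=42, n_bits=64), sig)
    print("signature recovered; max weight change:", np.max(np.abs(W_marked - W)))
```

Practical schemes go further by choosing which parameters to touch, and by how much, through an optimization that trades off watermark robustness against impact on task quality, which this toy version does not attempt.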