The threat of manipulated multi-modal media (audio, images, video, and text) is increasing as automated manipulation technologies become more accessible and social media continues to provide a ripe environment for viral content sharing. The creators of convincing media manipulations are no longer limited to groups with significant resources and expertise. Today, an individual content creator has access to capabilities that could produce an altered media asset depicting a believable, but falsified, interaction or scene.

Not all media manipulations have the same real-world impact, however. Computer-generated editing techniques used by the film industry are employed for very different reasons than those of bad actors who manipulate media to target reputations, the political process, or other aspects of society. Determining how media content was created or altered, what reaction it is trying to achieve, and who was responsible for it could help quickly determine whether it is a serious threat or something more benign.

Dave Doermann, a forward-looking DARPA program manager, recognized the challenges posed by synthetic and manipulated media before the term "deepfakes" existed and developed DARPA's Media Forensics (MediFor) program. MediFor, which ran from 2016 to 2020, developed technologies to automatically quantify the integrity of an image or video, making it possible to analyze media of unknown or uncertain provenance. Before MediFor, media forensics often relied on ad hoc, time-intensive, qualitative analysis by image analysis experts; it was difficult to know whether the tools were working, and the analysis did not scale to the large quantities of media available today.

Today, the Semantic Forensics (SemaFor) program is building on the work of MediFor and going beyond detection of manipulated media at scale: it provides tools that automatically detect, attribute, and characterize falsified, multi-modal media assets (e.g., text, audio, image, video) to defend against large-scale, automated disinformation attacks. The SemaFor framework contains a rapidly expanding, regularly updated suite of state-of-the-art semantic inconsistency detectors that dramatically increases the opportunity for defenders to gain an asymmetric advantage over media falsifiers.
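To make the detect/attribute/characterize distinction concrete, the sketch below shows one way such a pipeline could be organized as three separate analytics feeding a single report. It is purely illustrative: every name here (MediaAsset, Report, detect, attribute, characterize, analyze) and every score or threshold is a hypothetical placeholder, not the SemaFor framework or its API.

```python
# Illustrative sketch only: NOT the SemaFor framework or its API.
# Shows how "detect, attribute, characterize" could be organized as
# three independent analytics whose outputs are combined into one report.
from dataclasses import dataclass, field


@dataclass
class MediaAsset:
    """A multi-modal asset: any subset of text, image, audio, video."""
    text: str | None = None
    image_path: str | None = None
    audio_path: str | None = None
    video_path: str | None = None


@dataclass
class Report:
    detection_score: float   # estimated probability the asset is manipulated
    attribution: str         # hypothesized origin (e.g., a generator family)
    characterization: str    # e.g., "likely benign" vs. "potentially malicious"
    evidence: list[str] = field(default_factory=list)


def detect(asset: MediaAsset) -> tuple[float, list[str]]:
    """Toy stand-in for a semantic-inconsistency detector.

    A real detector would cross-check modalities (e.g., does the caption
    match what the image actually depicts?). This stub only demonstrates
    the interface: a score plus human-readable evidence strings.
    """
    evidence: list[str] = []
    if asset.text and asset.image_path:
        # Pretend a caption/image comparison ran and found a mismatch.
        evidence.append("caption and image content disagree (toy example)")
    return (0.9 if evidence else 0.1), evidence


def attribute(asset: MediaAsset) -> str:
    """Toy attribution: a real system might match known generator artifacts."""
    return "unknown generator"


def characterize(score: float) -> str:
    """Toy characterization: was the manipulation likely malicious?"""
    return "potentially malicious" if score > 0.5 else "likely benign"


def analyze(asset: MediaAsset) -> Report:
    score, evidence = detect(asset)
    return Report(score, attribute(asset), characterize(score), evidence)


if __name__ == "__main__":
    asset = MediaAsset(text="Crowd floods the capitol steps", image_path="img.jpg")
    print(analyze(asset))
```

Keeping the three questions separate mirrors the article's point: a manipulated/not-manipulated verdict alone says little, while attribution and characterization speak to who made the change and whether it is a serious threat or something more benign.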