Workshop on Graphic Design Understanding and Generation 2025
Oct 19, 2025
GDUG Workshop in conjunction with ICCV2025
The Workshop on Graphic Design Understanding and Generation (GDUG) aims to bring together researchers, creators, and practitioners to discuss the key concepts, technical perspectives, limitations, and ethical considerations surrounding recognition and generative approaches to graphic design and documents. While recent advances in generative AI have made impressive strides in creative domains, there is a disconnect between research efforts and real-world graphic design workflows, such as the creation of websites, posters, online advertisements, social media posts, infographics, or presentation slides. In these workflows, creators do not paint pixels but instead work with structured documents involving layered object representations, stylistic attributes, and typography. In addition, despite the richness of what humans perceive from visual presentation, there is no universal metric for evaluating the quality of graphic design.
We welcome topics related, but not limited, to the following:
Multi-modal document understanding and generation
Font and typography analysis and generation
Layout analysis and generation
Attribute-based styling and colorization
Differentiable rasterization and its applications
Graphic design dataset and perceptual evaluation metrics
AI-assisted design authoring tools and CAD applications
Technology to protect copyright
The GDUG workshop has two tracks, each with its own deadlines. Please read the following instructions carefully.
Submissions to the proceedings track must be full papers of 5-8 pages, excluding references, in the ICCV 2025 format, and will be peer-reviewed in a single-blind fashion. Accepted papers will be included in the ICCV workshop proceedings and presented as posters at the workshop. We welcome both novel work and work in progress that has not been published elsewhere, but authors should be aware that papers longer than four pages may conflict with the dual-submission policies of other venues, such as the ICCV main conference.
https://openreview.net/group?id=thecvf.com/ICCV/2025/Workshop/GDUG
Before submitting, make sure to set up your OpenReview profile well in advance of the deadline. New OpenReview profiles created without an institutional email go through a moderation process that can take up to two weeks.
All deadlines are 11:59 PM, HST.
Paper submission: July 2
Notification to authors: July 11
Camera-ready submission: August 18 (11:59 PM, PDT)
Workshop: October 19
Submissions to the non-proceedings track can be either extended abstracts (fewer than four pages) or full papers, excluding references, in the ICCV 2025 format. Papers will not be peer-reviewed; instead, a jury of organizers will select them based on topical fit and a minimum quality bar. Accepted papers will be presented as posters at the workshop. We welcome novel work, work in progress, and work recently published at another venue (e.g., the ICCV main conference) on a topic relevant to the workshop.
Although we accept published work for presentation, authors should be aware that papers longer than four pages may conflict with the dual-submission policies of other venues. Check the other venue's submission policy if in doubt.
https://forms.gle/qu7eeC4JyyEi4BMc6
All deadlines are 11:59 PM, HST.
Paper submission: August 22 (extended from August 18)
Notification to authors: August 25
Workshop: October 19
The workshop will take place on the afternoon of October 19. The program is as follows.
1:00 pm Opening
1:10 pm Invited speaker 1: Seiichi Uchida
1:35 pm Invited speaker 2: Sai Rajeswar Mudumba
2:00 pm Short break
2:10 pm Invited speaker 3: Gökhan Yildirim
2:35 pm Invited speaker 4: Vlad Morariu
3:00 pm Coffee break
3:30 pm Invited speaker 5: Tali Dekel
3:55 pm Invited speaker 6: Jingye Chen
4:20 pm Poster session
MG-Gen: Single Image to Motion Graphics Generation [Proceedings]
Takahiro Shirakawa, Tomoyuki Suzuki, Takuto Narumoto, Daichi Haraguchi
Embedding Font Impression Word Tags Based on Co-occurrence [Proceedings]
Yugo Kubota, Seiichi Uchida
LayerD: Decomposing Raster Graphic Designs into Layers [ICCV 2025]
Tomoyuki Suzuki (CyberAgent), Kang-Jun Liu (Tohoku University), Naoto Inoue (CyberAgent), Kota Yamaguchi (CyberAgent)
ChartGen: Scaling Chart Understanding Via Code-Guided Synthetic Chart Generation [arXiv]
Jovana Kondic (MIT), Pengyuan Li (IBM Research), Dhiraj Joshi (IBM Research), Zexue He (MIT-IBM Watson AI Labs), Shafiq Abedin (IBM Research), Jennifer Sun (MIT), Ben Wiesel (IBM Research), Eli Schwartz (IBM Research), Ahmed Nassar (IBM Research), Bo Wu (MIT-IBM Watson AI Labs, IBM Research), Assaf Arbelle (IBM Research), Aude Oliva (MIT, MIT-IBM Watson AI Labs), Dan Gutfreund (MIT-IBM Watson AI Labs, IBM Research), Leonid Karlinsky (MIT-IBM Watson AI Labs, IBM Research), Rogerio Feris (MIT-IBM Watson AI Labs, IBM Research)
RouteExtract: A Modular Pipeline for Extracting Routes from Paper Maps
Bjoern Kremser (Technical University of Munich, The University of Tokyo), Yusuke Matsui (The University of Tokyo)
MUSE: A Training-free Multimodal Unified Semantic Embedder for Structure-Aware Retrieval of Scalable Vector Graphics and Images
Kyeong Seon Kim (KAIST), Baek Seong-Eun (POSTECH), Lee Jung-Mok (POSTECH), Tae-Hyun Oh (KAIST)
FASTER: A Font-Agnostic Scene Text Editing and Rendering framework [WACV 2025]
Aloy Das (Indian Statistical Institute), Sanket Biswas (Universitat Autònoma de Barcelona), Prasun Roy (University of Technology Sydney), Subhankar Ghosh (University of Technology Sydney), Umapada Pal (Indian Statistical Institute), Michael Blumenstein (University of Technology Sydney), Josep Lladós (Universitat Autònoma de Barcelona), Saumik Bhattacharya (Indian Institute of Technology, Kharagpur)
Contact: gdug2025@googlegroups.com