Workshop on Graphic Design Understanding and Generation
June 17, 2024
GDUG Workshop in conjunction with CVPR 2024
The workshop on Graphic Design Understanding and Generation (GDUG) aims to bring together researchers, creators, and practitioners to discuss the important concepts, technical perspectives, limitations, and ethical considerations surrounding recognition and generative approaches to graphic design. While recent advances in generative AI have made impressive strides in creative domains, there is a disconnect between raster-based approaches and real-world workflows built on vector graphics, such as creating websites, posters, online advertisements, social media posts, infographics, or presentation slides, where creators do not paint pixels but instead work with layered objects, stylistic attributes, and typography. In addition, despite the richness of what humans perceive from a visual presentation, there is no universal metric for evaluating the quality of graphic design. The GDUG workshop aims to identify, discuss, and address these issues in the graphic design workflow.
Topics
Multi-modal document understanding and generation
Layout analysis and generation
Font and typography analysis and generation
Color palette recommendation
Differentiable rasterization and its applications
Markup language generation
Creative workflow automation / AI-assisted design authoring tools and CAD applications
Explainable approaches / perceptual analysis for graphic design
Infographic understanding
Datasets and evaluation metrics for graphic design
Technology to protect copyright
Submission
Instructions
We call for both full papers (8 pages) and extended abstracts (4 pages excluding references). Paper submissions should adhere to the CVPR 2024 format.
Submission Site
Important Dates
All deadlines are 11:59 PM, Anywhere on Earth (AoE)
Paper submission: Mar 25, 2024
Notification to authors: Apr 5, 2024
Camera-ready submission: Apr 14, 2024
Workshop: Jun 17, 2024
We call for both full papers (8 pages) and extended abstracts (4 pages excluding references) to be presented at the GDUG workshop at CVPR. Papers will be peer-reviewed in a single-blind fashion. By default, accepted papers will be included in the CVPR workshop proceedings, which are indexed by CVF Open Access but not by IEEE Xplore. Authors may opt out of inclusion in the proceedings, in which case we ask them to share their paper on arXiv. We welcome both novel work and work in progress that has not been published elsewhere, but authors should be aware that papers longer than 4 pages may conflict with the dual-submission policies of other venues, such as ECCV.
We also accept poster presentations of papers recently published elsewhere (e.g., at the CVPR main conference) to foster the exchange of research ideas. These papers will not be peer-reviewed again; a jury of organizers will select them.
Invited speakers
Program
Schedule
Date: Mon, June 17
Location: Summit 344 (Poster session will be in the Arch building)
Timetable:
1:30 pm - 1:40 pm: Opening [slides]
1:40 pm - 2:20 pm: Invited talk 1: Can AI Shape Graphic Design Creation?; Cherry Zhao
2:20 pm - 3:00 pm: Invited talk 2: Reinventing Design Creation by Foundation Models; Shizhao Sun [Slides]
3:00 pm - 3:30 pm: Coffee break
3:30 pm - 4:10 pm: Invited talk 3 (remote): Font Synthesis via Deep Generative Models; Zhouhui Lian
4:10 pm - 4:40 pm: Paper spotlights
SciPostLayout: A Dataset for Layout Analysis and Layout Generation of Scientific Posters; Hao Wang (Waseda University); Shohei Tanaka (OMRON SINIC X Corporation); Yoshitaka Ushiku (OMRON SINIC X Corp.) [Paper] [Poster] [Slides]
PosterLlama: Bridging Design Ability of Language Model to Contents-Aware Layout Generation; Jaejung Seol (UNIST | Ulsan National Institute of Science & Technology); SeoJun Kim (UNIST); Jaejun Yoo (UNIST) [arXiv]
DocSynthv2: A Practical Autoregressive Modeling for Document Generation; Sanket Biswas (Computer Vision Centre); Rajiv Jain (Adobe Research); Vlad I Morariu (Adobe Research); Jiuxiang Gu (Adobe Research); Puneet Mathur (University of Maryland); Curtis Wigington (Adobe Research); Tong Sun (Adobe Research); Josep Llados (Computer Vision Center, Barcelona) [arXiv] [poster] [slides]
4:50 pm - 5:30 pm: Poster session (Poster ID, Title, Authors)
240. Reference-based GAN Evaluation by Adaptive Inversion; Jianbo Wang (The University of Tokyo); Heliang Zheng (USTC); Toshihiko Yamasaki (The University of Tokyo) [CVF]
241. AmbiGen: Generating Ambigrams from Pre-trained Diffusion Model; Boheng Zhao (Purdue University); Rana Hanocka (University of Chicago); Raymond Yeh (Purdue University) [arXiv]
242. OpenCOLE: Towards Reproducible Automatic Graphic Design Generation; Naoto Inoue (CyberAgent); Kento Masui (CyberAgent); Wataru Shimoda (CyberAgent, Inc.); Kota Yamaguchi (CyberAgent) [arXiv] [poster] [code]
243. SciPostLayout: A Dataset for Layout Analysis and Layout Generation of Scientific Posters; Hao Wang (Waseda University); Shohei Tanaka (OMRON SINIC X Corporation); Yoshitaka Ushiku (OMRON SINIC X Corp.) [Paper] [Poster] [Slides]
244. SVGEditBench: A Benchmark Dataset for Quantitative Assessment of LLM's SVG Editing Capabilities; Kunato Nishina (The University of Tokyo); Yusuke Matsui (The University of Tokyo) [arXiv]
245. Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation; Daichi Horita (The University of Tokyo); Naoto Inoue (CyberAgent); Kotaro Kikuchi (CyberAgent); Kota Yamaguchi (CyberAgent); Kiyoharu Aizawa (The University of Tokyo) [arXiv]
246. PosterLlama: Bridging Design Ability of Language Model to Contents-Aware Layout Generation; Jaejung Seol (UNIST | Ulsan National Institute of Science & Technology); SeoJun Kim (UNIST); Jaejun Yoo (UNIST) [arXiv]
247. DocSynthv2: A Practical Autoregressive Modeling for Document Generation; Sanket Biswas (Computer Vision Centre); Rajiv Jain (Adobe Research); Vlad I Morariu (Adobe Research); Jiuxiang Gu (Adobe Research); Puneet Mathur (University of Maryland); Curtis Wigington (Adobe Research); Tong Sun (Adobe Research); Josep Llados (Computer Vision Center, Barcelona) [arXiv] [poster] [slides]
Organizers
Sponsor
Contact: cvpr2024-gdug-workshop@googlegroups.com