Workshop on Objectionable Content and Misinformation
ECCV 2018, Munich - 8th September
With the advent of the Internet and, especially, search engines and social networks, vast amounts of images and videos are created and shared every day, resulting in billions of views by a large and heterogeneous audience. In many cases, it is necessary to understand the underlying semantics of this visual footage, for two reasons:
- Content: To detect potentially objectionable content or sensitive imagery such as nudity, pornography, violence, hate, child exploitation, and terrorism, among others. This information may be used to enforce viewing policies, such as preventing minors from accessing adult content, taking down gore images, moderating hate speech, and controlling terrorist propaganda.
- Misinformation: Hoaxes, out-of-context images, fake footage, etc. may contribute to misinformation. Assessing the veracity of images and videos is a key element in guaranteeing that information (news, blogs, posts, etc.) is unbiased and trustworthy.
These two fronts often go together, as we have seen in recent photo-realistic face-swapping advances. Developing tools to detect such content requires a great deal of computer vision and machine learning expertise, yet the relevant communities have devoted little attention to these problems.
The aim of this workshop is to provide an opportunity to explore the specific challenges that objectionable content and misinformation pose to the computer vision domain. We look forward to academia and industry sharing their challenges and progress, and to building a joint forum for discussion around this area of research.
The topics include (but are not limited to):
- Image/video forensics
- Detection/analysis/understanding of fake images/videos
- Misinformation detection/understanding: mono-modal and multi-modal
- Adversarial technologies
- Detection/understanding of objectionable content:
  - Nudity and pornography
  - Hate speech
  - Endangered children