Dense Relational Image Captioning

Publication

"Dense Relational Captioning: Triple-Stream Networks for Relationship-Based Captioning"

Dong-Jin Kim, Jinsoo Choi, Tae-Hyun Oh, and In So Kweon.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

[PDF] [Dataset] [Code] [Slides] [Poster]


"Dense Relational Image Captioning via Multi-task Triple-Stream Networks"

Dong-Jin Kim, Tae-Hyun Oh, Jinsoo Choi, and In So Kweon.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), under review.

[PDF]

Awards

Qualcomm Innovation Award [certificate]

"Dense Relational Image Captioning via Multi-task Triple-Stream Networks"

Qualcomm Inc.

Abstract

We introduce dense relational captioning, a novel image captioning task that aims to generate multiple captions with respect to relational information between objects in a visual scene. Relational captioning provides explicit descriptions of each relationship between object combinations. This framework is advantageous in both the diversity and the amount of information, leading to comprehensive image understanding based on relationships, e.g., relational proposal generation. For relational understanding between objects, part-of-speech (POS, i.e., subject-object-predicate categories) can serve as valuable prior information to guide the causal sequence of words in a caption. We train our framework not only to generate captions but also to predict the POS of each word. To this end, we propose the multi-task triple-stream network (MTTSNet), which consists of three recurrent units, one responsible for each POS category, trained by jointly predicting the correct caption and the POS of each word. In addition, we find that the performance of MTTSNet can be further improved by modulating the object embeddings with an explicit relational module. We demonstrate that our proposed model generates more diverse and richer captions through extensive experimental analysis on large-scale datasets with several metrics. We additionally extend the analysis to an ablation study and to applications in holistic image captioning, scene graph generation, and retrieval tasks.

Multi-Task Triple-Stream Networks

Fig.1. Overall architecture of the proposed multi-task triple-stream networks.

Fig.2. An illustration of the unrolled triple-stream LSTMs.
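The core decoding idea can be illustrated as follows: at each time step, a multi-task head predicts the POS category of the next word (subject, predicate, or object), and the stream responsible for that category produces the word. The sketch below is a minimal, plain-Python illustration of this routing, not the authors' implementation; the toy streams, the hard-coded POS schedule, and all vocabularies are invented stand-ins for the learned LSTM streams and the POS classifier.

```python
# Illustrative sketch of the triple-stream decoding idea (NOT the authors'
# code): three per-POS "streams" each propose the next word, and a POS
# classifier decides which stream's proposal is emitted at every step.
# The streams and the POS schedule below are toy stand-ins.

# Toy per-POS word proposers (stand-ins for the three LSTM streams).
STREAMS = {
    "subj": lambda step: ["a", "man"][step % 2],
    "pred": lambda step: ["is", "riding"][step % 2],
    "obj":  lambda step: ["a", "horse"][step % 2],
}

def pos_classifier(step, length):
    """Toy multi-task head: predicts which POS stream is active.
    Here the subject -> predicate -> object phases are hard-coded."""
    if step < length // 3:
        return "subj"
    if step < 2 * length // 3:
        return "pred"
    return "obj"

def decode_caption(length=6):
    """Generate a caption by routing each step to the POS-selected stream."""
    words, pos_tags = [], []
    for step in range(length):
        pos = pos_classifier(step, length)  # multi-task POS prediction
        word = STREAMS[pos](step)           # word from the active stream
        words.append(word)
        pos_tags.append(pos)
    return words, pos_tags

caption, tags = decode_caption()
print(" ".join(caption))  # a man is riding a horse
print(tags)               # ['subj', 'subj', 'pred', 'pred', 'obj', 'obj']
```

In the real model, each stream is a recurrent unit and the POS head is trained jointly with the caption loss, so the POS prediction and the word choice inform each other rather than following a fixed schedule.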

Example of Generated Captions and Regions

Bibtex

@inproceedings{kim2019dense,
  title={Dense relational captioning: Triple-stream networks for relationship-based captioning},
  author={Kim, Dong-Jin and Choi, Jinsoo and Oh, Tae-Hyun and Kweon, In So},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={6271--6280},
  year={2019}
}


@article{kim2020dense,
  title={Dense Relational Image Captioning via Multi-task Triple-Stream Networks},
  author={Kim, Dong-Jin and Oh, Tae-Hyun and Choi, Jinsoo and Kweon, In So},
  journal={arXiv preprint arXiv:2010.03855},
  year={2020}
}