MEWL: Few-shot multimodal word learning with referential uncertainty


Guangyuan Jiang1,2,✉️, Manjie Xu2,3, Shiji Xin1, Wei Liang3, Yujia Peng1,2, Chi Zhang2,✉️, Yixin Zhu1,✉️

1Peking University  2Beijing Institute for General Artificial Intelligence  3Beijing Institute of Technology

ICML 2023

paper, code, and dataset

Overview

Without explicit feedback, humans can rapidly learn the meaning of words. Children can acquire a new word after just a few passive exposures, a process known as fast mapping. This word learning capability is believed to be the most fundamental building block of multimodal understanding and reasoning. Despite recent advances in multimodal learning, a systematic and rigorous evaluation of human-like word learning in machines is still missing. To fill this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess how machines learn word meaning in grounded visual scenes.

MEWL covers humans' core cognitive toolkit for word learning: cross-situational reasoning, bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot benchmark suite consisting of nine tasks that probe various word learning capabilities. These tasks are carefully designed to align with children's core abilities in word learning and echo theories in the developmental literature. By evaluating multimodal and unimodal agents and comparing them against human performance, we observe a sharp divergence between human and machine word learning. We further discuss these differences and call for human-like few-shot word learning in machines.

Motivation: How children learn words


Our contributions


MEWL Tasks

Dataset


We design nine unique tasks in MEWL to comprehensively evaluate alignment between humans and machines:


shape, color, material, object, composite, relation, bootstrap, number, pragmatic
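
To make the episode format concrete, here is a minimal sketch (in Python) of how a MEWL-style few-shot episode might be represented: a handful of context panels, each pairing a grounded scene with a novel-word utterance, followed by a multiple-choice query. The field names and schema below are illustrative assumptions, not the released dataset API.

from dataclasses import dataclass
from typing import List

@dataclass
class ContextPanel:
    # One passive exposure: a rendered scene paired with an utterance
    # containing one or more novel words (hypothetical schema).
    image_path: str
    utterance: str

@dataclass
class Episode:
    # A few-shot word-learning episode with referential uncertainty:
    # context exposures plus a held-out query (field names are assumptions).
    task: str                    # one of the nine tasks, e.g., "shape"
    context: List[ContextPanel]  # few-shot scene-utterance exposures
    query_image: str             # query scene to be described
    candidates: List[str]        # candidate utterances (multiple choice)
    answer: int                  # index of the correct candidate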


They are designed for:



Baselines

Models:

Captioning for unimodal models:
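
One way to expose the grounded scenes to text-only models is to serialize each panel into a caption before pairing it with the utterance. The snippet below is a hedged sketch of such a serialization, assuming CLEVR-style per-object attributes (color, material, shape); the object schema and caption template are hypothetical, not the exact captioning scheme used in the paper.

def caption_scene(scene_objects):
    # Serialize a grounded scene into text for language-only baselines.
    # `scene_objects` is assumed to be a list of dicts with 'color',
    # 'material', and 'shape' keys (hypothetical schema).
    parts = [f"a {o['color']} {o['material']} {o['shape']}" for o in scene_objects]
    return "There is " + " and ".join(parts) + "."

caption_scene([
    {"color": "red", "material": "rubber", "shape": "cube"},
    {"color": "blue", "material": "metal", "shape": "sphere"},
])
# -> "There is a red rubber cube and a blue metal sphere."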

Discussion:

Citation

If you find MEWL useful, please cite us:


@inproceedings{jiang2023mewl,
  title={MEWL: Few-shot multimodal word learning with referential uncertainty},
  author={Jiang, Guangyuan and Xu, Manjie and Xin, Shiji and Liang, Wei and Peng, Yujia and Zhang, Chi and Zhu, Yixin},
  booktitle={ICML},
  year={2023}
}