While our call for contributions is now closed, we hope you consider joining us in discussion throughout the symposium.
Registration is required for all participants, whether you attend in person or remotely. Register today!
Registration Information: https://aaai.getregistered.net/2025-spring-symposium
Foundation models such as large language models (LLMs), vision language models (VLMs), and speech foundation models can enable more effective, natural, and engaging child-AI interactions. Both academic researchers and industry practitioners are increasingly interested in leveraging these models to provide accessible, personalized support for children in areas such as education, entertainment, health, and well-being. However, the opportunities presented by foundation models are accompanied by significant risks and ethical concerns, especially in the context of child-AI interactions. Notable concerns include privacy, bias, and potential exposure to harmful or illegal content. This symposium aims to bring together an interdisciplinary group of presenters and participants from relevant fields including, but not limited to, human-robot interaction (HRI), human-computer interaction (HCI), natural language processing (NLP), spoken language understanding (SLU), machine learning (ML), education, and pediatric healthcare. It offers a unique opportunity for researchers across these disciplines to foster mutual understanding and facilitate collaborations, paving the way for future advancements in child-AI interaction.
This symposium covers topics from the key disciplines that study child-AI interaction. These topics fall into two related categories.
1. Child-Centered Interaction Research: This category focuses on work that creates novel interactions for children; this includes both interactions that currently leverage AI and interactions that do not yet leverage AI. We invite submissions across different research fields, including but not limited to human-robot interaction, human-computer interaction, social science, education, and pediatrics.
2. Child-Centered AI Research: This category welcomes submissions that explore novel machine learning methods across various sub-fields that have implications in child-AI interactions. We solicit work that includes but is not limited to natural language processing, speech processing, computer vision, and multimodal machine learning.
The symposium will include the following activities: 1) invited presentations that foster understanding across different research topics and fields; 2) student paper presentations to facilitate in-depth discussions of up-and-coming work; 3) open-format panels to encourage free discussion and Q&A between speakers and participants; and 4) breakout rooms to provide opportunities for networking and tailored discussions based on participants' research interests.
All paper submissions will be reviewed through a rigorous single-blind process by the program committee. We are soliciting two kinds of contributions:
* Poster/short/position papers: recommended 2 pages, maximum 4 pages, excluding references.
* Full papers: maximum 8 pages, excluding references.
We will use the official AAAI EasyChair site for our paper submissions. You can access that site here:
https://easychair.org/conferences/?conf=sss25
Be sure to select the "Symposium on Child-AI Interaction in the Era of Foundation Models" Track.
All deadlines are at 11:59 PM in the Anywhere on Earth (AoE) time zone.
For archival papers that will be included in the official AAAI Symposium Proceedings:
* Submission deadline: January 17
* Notification of acceptance or rejection: January 27
* Camera-ready paper deadline: February 1
For non-archival papers:
* Submission deadline: February 1
* Notification of acceptance or rejection: February 10
* Camera-ready paper deadline: February 17
Each accepted submission must have at least one registered participant who will present the work in person. See registration deadlines and fees below:
1. Child-Centered Interaction Research: In this category, we focus on work that creates novel interactions for children; this includes both interactions that currently leverage AI and interactions that do not yet leverage AI. We invite submissions across different research fields, including but not limited to human-robot interaction, human-computer interaction, social science, education, and pediatrics.
• Design or study of AI-driven interactive systems for children accounting for the needs of target communities
• Novel hardware or physical devices for child-centered applications
• AI ethics principles for child-centered applications
• Design for children’s agency and fundamental rights
• Age-appropriate AI algorithmic design
• Quantitative and/or qualitative methodologies to evaluate child-AI interaction
• Robot tutors
• AI-enabled intelligent tutoring systems
• Personalization of child-AI interaction
• Child data privacy and security
• Ethical guidelines and policies for safe child-AI interaction
• Behavioral analysis of children
• Child learning theory
• Pedagogical methodologies for K-12 education
• Novel datasets for child-centered applications
2. Child-Centered AI Research: In this category, we welcome submissions that explore novel machine learning methods across various sub-fields that have implications for child-AI interaction. We solicit work that includes but is not limited to natural language processing, speech processing, computer vision, and multimodal machine learning.
• Foundation models (e.g., LLMs and VLMs) for child-centered applications such as education, pediatric healthcare, or entertainment
• Domain adaptation methods for foundation models
• Ethical considerations and fairness of foundation models for child-centered applications
• Development of trustworthy foundation models for children
• Child speech recognition and diarization
• Speech foundation models for child-centered applications
• Text-to-speech systems for child-AI interactions
• Domain adaptation for children’s speech modeling
• Paralinguistic speech modeling and analysis for child-AI interactions
• Computer vision for child-centered applications
• Child action and gesture recognition
• Affective computing for children
• Multimodal modeling of children’s behaviors