Logo designed on Canva 2025
In this 5-week immersive session, participants gain hands-on experience designing and integrating AI-generated personas in Virtual Reality and Virtual Worlds for English language teaching. They learn to create avatars, engineer and contrast effective prompts, and develop interactive, level-appropriate language activities and meaningful conversational practice for their classes.
This session targets experienced language educators, course designers, and Webheads with intermediate expertise in technology-enhanced language teaching who wish to level up their immersive course design skills at any level of education. Newcomers to Artificial Intelligence, Virtual Worlds and Virtual Reality can follow the live sessions or their recordings, with guidance provided for setup, orientation, navigation and basic immersive skills.
By the end of this session, participants will be able to:
Navigate the LMS and interact with fellow participants.
Enter the VR or VW environment (optional).
Construct and implement a virtual teaching assistant tailored for language learning contexts.
Employ persona design techniques to shape effective virtual teaching agents.
Analyse and compare prompting strategies for their impact on character creation and scope of expertise.
Develop AI-powered characters equipped to guide learners within defined domains of practice.
Animate and personalise avatars in virtual environments by integrating them with AI-driven chatbots.
Design immersive language-learning tasks where AI companions foster interaction and engagement.
Formulate detailed character personas that align with specific teaching aims and learner needs.
Determine the instructional roles and lesson phases where AI characters can provide meaningful support.
Compose advanced prompts for LLMs to generate interactive NPCs that deliver specialised knowledge in memorable ways (see the illustrative prompt sketch after this list).
Embed AI pedagogical agents into virtual worlds to create authentic communicative opportunities.
Design, implement, and present a final activity showcasing AI-enhanced language teaching in a virtual world. (Final Project, Week 5)
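As an illustration of the prompt-composition outcome above, here is a minimal sketch, in Python, of how a persona prompt for an AI teaching companion might be structured. The character "Nia", the A2 level and the constraints are invented placeholders for illustration, not course materials.

# Illustrative sketch only: one possible structure for an NPC persona prompt.
# "Nia", the A2 level and the constraints below are placeholders.
persona_prompt = """
You are 'Nia', a friendly market stallholder in a virtual town square.
Role: help A2-level English learners practise shopping vocabulary.
Scope: prices, quantities and polite requests; gently redirect off-topic questions.
Style: short turns of one or two sentences; recast learner errors rather than correcting them explicitly.
Always end your reply with one simple follow-up question to keep the conversation going.
"""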
The first session each week involves a screen-sharing demonstration and orientation to the VR/VW platform.
The second session each week involves hands-on creative work for those who can enter the VR/VW.
Session recordings are always available in the Canvas LMS for those who cannot attend the live sessions.
Indicative Activities
Participants will engage in:
Screen sharing and/or watching the recorded sessions.
Entering the communication platform.
Sampling 3D virtual simulations of real or imagined scenarios, providing safe spaces for practising skills.
Creating immersive interactive tasks that develop language concepts and vocabulary through AI Companions in Virtual Reality and Virtual Worlds.
Quest-based learning activities designed with AI integration.
Collaborative projects where learners co-create their own AI Companion(s).
Creating and contrasting prompts for different lesson types using a framework for designing AI-driven NPC companions.
Building interactive, immersive, and pedagogically effective experiences with AI-driven Companions for language learners.
Using in-world affordances to build their Immersive AI Companions.
Focus: Getting to know each other
Activities:
Entering Canvas and creating a Profile
Introducing themselves and their experience with AI and VR/VWs
Completing the Needs Analysis form
Reflecting on basic bibliographic suggestions
Exploring different VR platforms
Welcome, Introductions and Orientation Session
Live Session: Sunday 11th January 2026, 21:00 CET
Tools: Core: FrameVR, Second Life, WondaVR, Kitely. Subsidiary: FluentWorlds, Holotutor ai, Convai
Objectives:
By the end of Week One, participants will be able to:
Gain an overview of VR and VW destinations with AI Companions.
Explore the utility of personas.
Familiarise themselves with prompt engineering.
Session One – Acquire / Deepen: Orient, Overview, Analyse, Define and Develop.
Live Session: Thursday 15th January, 21:00 CET
Activities:
Enter, navigate and screen share different VR and VW destinations by attending the Live Session or by watching the recording.
Explore sample environments by reviewing examples of AI Companions created by Georgia Maneta in WondaVR.
Utilise a customised AI companion, interacting with it through screen sharing.
Session Two – Deepen: Develop, Implement, Clarify in WondaVR
Live Session: Sunday 18th January, 21:00 CET
Activities:
Create an account, navigate and create experiences in WondaVR by attending the Live Session with Georgia Maneta or by watching the recording.
Practise developing and editing prompts in WondaVR.
Edit a ready-made AI Companion.
Create and customise AI personas to align with specific student learning objectives and activities in WondaVR.
Create and Reflect in WondaVR by exhibiting learning outcomes on a collaborative platform.
Moderators: Helena Galani, Georgia Maneta
Focus: Prototype. Clarify/Ideate and Implement.
Tools: FrameVR
Objectives:
By the end of Week Two, participants will be able to:
Examine the role of personas in creating effective virtual teaching agents in FrameVR.
Analyse different prompt types and their effectiveness in designing characters.
Define the knowledge or skills the characters will support for learners.
Session One – Acquire / Deepen: Orient, Analyse, Define, Develop.
Live Session: Thursday 22nd January, 21:00 CET
Activities:
Explore FrameVR environments through screen sharing by attending the Live Session with Helena Galani or by watching the recording.
Interact with customised AI companions to introduce the concept of embodied personas by participating in a mission in FrameVR.
Utilise a suggested prompt engineering framework to build an embodied AI Companion.
Analyse sample virtual spaces, activities and prompts through screen sharing in FrameVR.
Session Two – Deepen: Develop, Implement, Clarify.
Live Session: Sunday 25th January, 21:00 CET
Activities:
Create an account, enter FrameVR, navigate and practise creating prompts through screen sharing by attending the Live Session with Helena Galani or by watching the recording.
Demonstrate how to design a character by customising AI companions to align with specific learning objectives and activities in FrameVR.
Use the AI Companion's knowledge base to customise the embodied character.
Create and Reflect in FrameVR by exhibiting learning outcomes on a collaborative platform.
Moderators: Helena Galani, Georgia Maneta.
Focus: Ideate. Orient and Define.
Tools: Second Life
Objectives:
By the end of Week Three, participants will be able to:
Analyse different prompt types and their effectiveness in creating characters in Second Life.
Explore the use of API keys and implement them to link AI companions to an LLM (see the sketch after this objectives list).
Create and optimise prompts for ChatGPT and other LLMs, based on a suggested framework, to generate engaging NPC characters.
Define the knowledge areas or skills that AI characters will support for learners.
Develop one or more characters based on defined personas.
Implement affordances to build the embodied AI Companions.
Identify lesson stages and roles where these characters can effectively support learning tasks.
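A minimal sketch, in Python, of how an API key typically links a companion to an LLM: the key authenticates the request, a system message carries the persona, and the reply is passed back to the in-world character. This assumes the openai Python package (v1.x) and an environment variable for the key; the model name and the helper function are placeholders rather than course materials, and the in-world scripting that relays chat to such a call differs by platform.

# Minimal sketch: assumes the openai Python package (v1.x) is installed and the
# API key is stored in the OPENAI_API_KEY environment variable, not in a prompt or notecard.
import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def companion_reply(persona_prompt: str, learner_message: str) -> str:
    # The persona goes in the system message; the learner's utterance is the user message.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona_prompt},
            {"role": "user", "content": learner_message},
        ],
    )
    return response.choices[0].message.content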
Session One – Acquire / Deepen: Orient, Analyse, Define, Develop
Live Session: Thursday 29th January, 21:00 CET
Activities:
Explore sample simulations with AI Companions in Second Life, through screen sharing by attending the Live Session with Helena Galani or by watching the recording.
Enter Second Life and interact with customised AI companions on a quest designed by Helena Galani.
Practise creating, animating and customising AI companion(s), aligning them with specific learning objectives and activities.
Session Two – Deepen: Develop, Implement, Clarify
Live Session: Sunday 1st February, 21:00 CET
Activities:
Create an account, enter and navigate Second Life through screen sharing by attending the Live Session with Helena Galani or by watching the recording.
Use API keys to design and build an AI persona, guided by Helena Galani, by screen sharing and/or entering Second Life.
Develop prompt engineering skills by adding scripts and notecards.
Practise designing and optimising prompts for AI companion(s) and their demeanour in Second Life, using a suggested framework to determine the lesson stages at which AI personas can support language learning.
Create and Reflect in Second Life by exhibiting learning outcomes on an online collaborative platform.
Guest Speaker: Prof. Randall Sadler
Moderators: Helena Galani, Georgia Maneta.
Focus: Prototype. Clarify/Ideate. Orient and Implement.
Tools: Kitely, OpenSim.
Objectives:
By the end of Week Four, participants will be able to:
Leverage personas to design effective virtual teaching agents.
Animate avatars in virtual worlds using script and animations.
Design and program an AI Companion in Kitely/OpenSim for language education.
Create and optimise prompts with ChatGPT and other LLMs based on a particular Framework to generate engaging AI NPC characters.
Determine lesson stages and roles where these AI characters can effectively support learning activities.
Session One – Acquire / Deepen: Orient, Analyse, Define, Develop
Live Session: Thursday 5th February, 21:00 CET
Activities:
Explore sample simulations with AI Companions in Kitely, through screen sharing, by attending the Live Session with Helena Galani or by watching the recording.
Practise prompt engineering based on a suggested framework to determine the lesson stages and roles where AI characters can effectively support learning activities (a contrasting example follows this activities list).
Complete design thinking exercises to specify the lesson stage at which the AI Companion will be used.
Generate engaging AI characters by creating and optimising prompts with Large Language Models based on a particular Framework.
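To illustrate the contrast practised in this session, the sketch below shows how one character can be re-prompted for different lesson stages. The character ("Nia") and the wording are invented placeholders for illustration, not a prescribed framework.

# Illustrative contrast of stage-specific prompts for a single placeholder character.
stage_prompts = {
    "warm-up": (
        "You are Nia the stallholder. Greet the learner, ask two simple questions "
        "about food they like, and keep each reply under 15 words."
    ),
    "guided practice": (
        "You are Nia the stallholder. Run a role-play in which the learner buys three "
        "items; prompt for quantities and prices, recasting any errors you hear."
    ),
    "review": (
        "You are Nia the stallholder. Ask the learner to recall five shopping words "
        "used earlier and give brief, encouraging feedback on each."
    ),
}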
Session Two – Create: Implement, Clarify, Develop
Live Session: Sunday 8th February, 21:00 CET
Activities:
Create an account, enter and navigate Kitely through screen sharing by attending the Live Session with Helena Galani or by watching the recording.
Customise and animate AI companions, and apply elements of prompt engineering, through screen sharing and/or entering Kitely.
Brainstorm and set parameters for persona-driven, customised AI behaviour in immersive educational contexts.
Brainstorm and discuss character creation, design thinking, and prompt engineering skills to establish key parameters for customised AI companion behaviours in Kitely.
Create and Reflect in Kitely/OpenSim by exhibiting learning outcomes on an online collaborative platform.
Guest Speaker: Heike Philp
Moderators: Helena Galani, Georgia Maneta
Focus: Showcase and Feedback
Tools: All.
Objectives:
By the end of Week Five, participants will be able to:
Present their embodied AI character(s) and prompt(s) within a virtual world or virtual reality environment of their choice, by attending the Live Session with Helena Galani and Georgia Maneta or after watching the recording.
Demonstrate a language learning activity incorporating one or more AI pedagogical agents developed during Weeks 1–4 (Final Project).
Session One – Deepen: Showcase
Live Session: Thursday 12th February, 21:00 CET
Activities:
Present their project(s), lesson aims and expectations, in-world and by screen sharing on the collaborative presentation tool, during the Live Session with Helena Galani and Georgia Maneta or after watching the recording.
Exhibit learning outcomes from a Virtual Reality or Virtual World platform of their choice, during the Session, on an online collaborative platform.
Reflect on the immersive experience during the Sessions.
Provide final feedback and suggestions during the Live Session.
Session Two – Deepen: Show & Tell
Live Session: Sunday 15th February, 21:00 CET
Activities:
Participants reflect on their experience and the session outcomes by sharing insights and wrapping up on the collaborative presentation tool.
Participants present their virtual world language learning activity, using the AI companions developed throughout the course, by screen sharing and in-world.
Participants complete the Final Evaluation Feedback Form available on the Learning Management System.
Moderators: Helena Galani and Georgia Maneta
Content space: Canvas and Google Drive (Folders per session).
Collaborative space: Digipad and additional apps for activities and application
Live meeting space: Zoom
The basic free plans for these tools allow participants to visit virtual spaces; some allow limited space and resource creation. Others require a 14-day trial or a premium plan to create objects or host a virtual world.
Virtual Spaces:
Core Virtual Spaces:
FrameVR
Second Life
WondaVR
Kitely (OpenSim)
Subsidiary Virtual Spaces:
FluentWorlds
Holotutor ai
Convai
Language Models:
ChatGPT and other LLMs
Other technology tools:
Zoom for Education (Live meeting space)
Canvas Free for Teachers (interactive content space, LMS)
PBworks
Google Drive
Google Slides
Grivokostopoulou, F., Kovas, K., & Perikos, I. (2020). The effectiveness of embodied pedagogical agents and their impact on students learning in virtual worlds. Applied Sciences, 10(5), 1739. https://www.mdpi.com/2076-3417/10/5/1739
Hemminki-Reijonen, U., Hassan, N. M., Huotilainen, M., Koivisto, J. M., & Cowley, B. U. (2025). Design of generative AI-powered pedagogy for virtual reality environments in higher education. npj Science of Learning, 10(1), 31. https://www.nature.com/articles/s41539-025-00326-1
Jain, A. (2023). The five stages of design thinking: A comprehensive overview. Rural Handmade. https://ruralhandmade.com/blog/the-five-stages-of-design-thinking-a-comprehensive-overview
Khosrawi-Rad, B., Schlimbach, R., Strohmann, T., & Robra-Bissantz, S. (2022). Design knowledge for virtual learning companions. Proceedings of the 2022 AIS SIGED International Conference on Information Systems Education and Research, 6. https://aisel.aisnet.org/siged2022/6/
Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: Guidance for policy-makers. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000376709
Momand, Z., Zekri, M., & Tariq, U. (2023). Immersive technologies in virtual companions: A systematic literature review. arXiv. https://doi.org/10.48550/arXiv.2309.01214
Ou, H. (2025). AI-powered NPCs in virtual environments: Creating believable characters through machine learning. Proceedings of the 3rd International Conference on Software Engineering and Machine Learning. https://doi.org/10.54254/2755-2721/145/2025.21926
Tapare, P., Shahaji, B., & Shah, R. (2024). Generative AI in virtual reality. International Research Journal of Modernization in Engineering Technology and Science. www.irjmets.com/uploadedfiles/paper//issue_11_november_2024/63217/final/fin_irjmets1730564762.pdf
Registration Jan. 4-10, 2026
For participants to join this session:
To register, go to https://canvas.instructure.com/enroll/M3LJ4F
Or enroll at https://canvas.instructure.com/register and use the join code M3LJ4F.
TESOL CALL-IS