YoChatGPT! has engaged 21,100+ active monthly users (teachers, students, and others), locally and internationally from 130+ countries, with 2,440+ rooms created, 114,000+ page views, and an average engagement of 2 minutes and 36 seconds per user, according to our Google Analytics data since launch one year ago in July 2024.
[1] Ting, F. S. T., Chan, L. C. L., & Wang, D. D. (2025). Leveraging Generative AI for Personalized Inquiry-Based Mathematics Education. Accepted paper presentation at the EdUHK-THU Education Forum: Future Education and Learning, 13–14 June 2025.
[2] Ting, F. S. T., Wang, D., & Chan, L. C. L. (2025). Integrating GenAI into Inquiry-Based Learning for Mathematics Education. Conference Proceedings of the 2nd EdUHK x HKUST Joint International Conference on AI and Education, p. 27.
[3] Ting, F. S. T., & Chan, L. C. L. (2025). Personalized Learning with Generative AI in Teacher Education via the Concept Prompt. Proceedings of the International Conference on Learning and Teaching 2025 (ICLT 2025), p. 37.
[4] Chlebovec, C., & Ting, F. S. T. (2025). Exploring Collaborative Generative AI Pedagogies and Their Impact on Student Learning. Symposium for Scholarship of Teaching and Learning, Banff, Alberta, Canada.
[5] San Pedro, J. R., & Ting, F. S. T. (2025). Integration of Collaborative AI in Improving Student Learning Outcomes in Data Analytics. Submitted to the International Journal of Educational Technology in Higher Education, July 2025. (Please email us if you would like a copy of the submitted manuscript.)
[6] San Pedro, J. R. won both the Best Session Presentation Award at ENCCULT XV and the Best Graduate Student Paper Award from the Philippine Statistical Research and Training Institute for his work on "Integration of Collaborative Artificial Intelligence in Improving Student Learning Outcomes in Data Analytics".
Vaccaro, M., Almaatouq, A. & Malone, T. When combinations of humans and AI are useful: A systematic review and meta-analysis. Nat Hum Behav 8, 2293–2303 (2024). https://doi.org/10.1038/s41562-024-02024-1
Surprising finding: On average, human–AI combinations performed significantly worse than the better of humans or AI alone. The study identified a critical distinction: decision tasks (such as approving AI output) led to performance losses, while creation tasks (such as brainstorming and writing) led to performance gains.
The Takeaway: This suggests that a "single-player" model that forces teams into simple decision roles is less effective than approaches built around creation tasks.