This project focuses on developing a humanoid robot capable of performing a variety of tasks in real-world environments. The current phase involves constructing machine learning models that enable the robot to learn from and adapt to its surroundings, supported by large language models (LLMs).
This project aims to improve robot navigation through advanced visual Simultaneous Localization and Mapping (SLAM). In its early stages, the work involves integrating SLAM systems into a legged-wheel robot prototype to enhance its mapping and localization capabilities in both indoor and outdoor settings.
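A minimal sketch of the kind of visual front end such a SLAM integration relies on, assuming a monocular camera with a known intrinsics matrix K and using ORB matching with essential-matrix pose recovery in OpenCV; the function name and parameters are illustrative, not the prototype's actual code:

```python
# Minimal monocular visual-odometry step (illustrative, not the project's code).
# Assumes OpenCV is installed and a camera intrinsics matrix K is available.
import cv2
import numpy as np

def estimate_relative_pose(prev_gray, curr_gray, K):
    """Estimate camera rotation R and unit-scale translation t between two frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None

    # Brute-force Hamming matching for binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then recover the relative pose (t is up to scale).
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    if E is None:
        return None
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

A full SLAM stack adds keyframe selection, mapping, and loop closure on top of this front end; the sketch only shows the frame-to-frame tracking step.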
Knitting, a cornerstone of textile manufacturing, is uniquely challenging to automate, particularly in terms of converting fabric designs into precise, machine-readable instructions. This research bridges the gap between textile production and robotic automation by proposing a novel deep learning-based pipeline for reverse knitting to integrate vision-based robotic systems into textile manufacturing. The pipeline employs a two-stage architecture, enabling robots to first identify front labels before inferring complete labels, ensuring accurate, scalable pattern generation. By incorporating diverse yarn structures, including single-yarn (sj) and multi-yarn (mj) patterns, this study demonstrates how our system can adapt to varying material complexities. Critical challenges in robotic textile manipulation, such as label imbalance, underrepresented stitch types, and the need for fine-grained control, are addressed by leveraging specialized deep-learning architectures. This work establishes a foundation for fully automated robotic knitting systems, enabling customizable, flexible production processes that integrate perception, planning, and actuation, thereby advancing textile manufacturing through intelligent robotic automation.
Electronics 2025, 14(8), 1605; https://doi.org/10.3390/electronics14081605
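A minimal sketch of the two-stage idea described in the abstract above (front-label prediction followed by complete-label inference), assuming per-pixel stitch labeling with a small encoder-decoder; the class counts and module names are illustrative placeholders, not the published architecture:

```python
# Two-stage reverse-knitting sketch (illustrative, not the published model).
# Stage 1 predicts front-side stitch labels; Stage 2 infers complete labels
# from the image together with the stage-1 prediction.
import torch
import torch.nn as nn

class SmallSegNet(nn.Module):
    """Tiny encoder-decoder mapping an image-like tensor to per-pixel class logits."""
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, num_classes, 1),
        )
    def forward(self, x):
        return self.net(x)

class TwoStageKnitLabeler(nn.Module):
    def __init__(self, front_classes=4, complete_classes=10):
        super().__init__()
        self.stage1 = SmallSegNet(3, front_classes)                     # image -> front labels
        self.stage2 = SmallSegNet(3 + front_classes, complete_classes)  # image + front -> complete labels
    def forward(self, img):
        front_logits = self.stage1(img)
        front_probs = torch.softmax(front_logits, dim=1)
        complete_logits = self.stage2(torch.cat([img, front_probs], dim=1))
        return front_logits, complete_logits

# Usage: a fabric image batch of shape (N, 3, H, W) yields two label maps.
model = TwoStageKnitLabeler()
front, complete = model(torch.randn(1, 3, 64, 64))
```

In practice, the label imbalance and underrepresented stitch types mentioned in the abstract would typically be addressed with class-weighted losses or resampling during training.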
Entertainment robotics has garnered significant attention in recent years, with researchers focusing on developing robots capable of performing a variety of tasks, including magic, drawing, dancing, and music. This article presents our research on forming a musical band that includes both humanoid robots and human musicians, with the goal of achieving natural synchronization and collaboration during musical performances. We utilized two of our humanoid robots for this project: Polaris, a mid-sized humanoid robot, as the drummer, and Oscar, a Robotis-OP3 humanoid robot, as the keyboardist. The technical implementation incorporated essential components such as visual servoing, human-robot interaction, and Robot Operating System (ROS), enabling seamless communication and coordination between the humanoid robots and the human musicians. The success of this collaborative effort can be both seen and heard through the following YouTube link: https://youtu.be/pFOyt1KKCfY?feature=shared.
PeerJ Computer Science 11:e2632 https://doi.org/10.7717/peerj-cs.2632
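A minimal sketch of the kind of image-based visual servoing loop a robot drummer or keyboardist might use to centre a tracked target (e.g., a drum pad) in its camera frame; the proportional gain and the pan/tilt command names are hypothetical and not the Polaris/Oscar implementation:

```python
# Proportional image-based visual servoing sketch (hypothetical gains and outputs).
# Drives pan/tilt joints so a detected target moves toward the image centre.
import numpy as np

def visual_servo_step(target_px, image_size, gain=0.002):
    """Return pan/tilt joint velocities from the pixel error of a tracked target."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    err = np.array([target_px[0] - cx, target_px[1] - cy])  # pixel error
    # Simple proportional law: pan follows horizontal error, tilt follows vertical error.
    pan_vel, tilt_vel = -gain * err
    return pan_vel, tilt_vel

# Example: target detected at (400, 260) in a 640x480 image.
pan, tilt = visual_servo_step((400, 260), (640, 480))
print(f"pan velocity: {pan:.3f} rad/s, tilt velocity: {tilt:.3f} rad/s")
```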
The widespread adoption of large language models (LLMs) marks a transformative era in technology, especially within the educational sector. This paper explores the integration of LLMs within learning management systems (LMSs) to develop an adaptive learning management system (ALMS) personalized for individual learners across various educational stages. Traditional LMSs, while facilitating the distribution of educational materials, fall short in addressing the nuanced needs of diverse student populations, particularly in settings with limited instructor availability. Our proposed system leverages the flexibility of AI to provide a customizable learning environment that adjusts to each user's evolving needs. By integrating a suite of general-purpose and domain-specific LLMs, this system aims to minimize common issues such as factual inaccuracies and outdated information, characteristic of general LLMs like OpenAI's ChatGPT. This paper details the development of an ALMS that not only addresses privacy concerns and the limitations of existing educational tools but also enhances the learning experience by maintaining engagement through personalized educational content.
arXiv:2502.08655 [cs.AI]
https://doi.org/10.48550/arXiv.2502.08655
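A minimal sketch of how an ALMS might route a learner's question between a general-purpose and a domain-specific LLM to reduce factual errors, in the spirit of the abstract above; `query_domain_llm`, `query_general_llm`, and the keyword heuristic are hypothetical placeholders, not the system's actual API:

```python
# Hypothetical LLM-routing sketch for an adaptive LMS (placeholder back ends, not a real API).
from typing import Callable

def make_router(domain_keywords: set,
                query_domain_llm: Callable[[str], str],
                query_general_llm: Callable[[str], str]) -> Callable[[str], str]:
    """Route course-specific questions to a domain-tuned model, everything else to a general model."""
    def route(question: str) -> str:
        tokens = set(question.lower().split())
        if tokens & domain_keywords:
            return query_domain_llm(question)   # curated, course-specific knowledge
        return query_general_llm(question)      # broad general-purpose fallback
    return route

# Usage with stub functions standing in for the real LLM back ends.
router = make_router(
    domain_keywords={"recursion", "pointer", "complexity"},
    query_domain_llm=lambda q: f"[domain model] {q}",
    query_general_llm=lambda q: f"[general model] {q}",
)
print(router("Explain recursion with an example"))
```

A production system would replace the keyword heuristic with a learned or embedding-based router and add retrieval over vetted course material to keep answers current.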
This project enhances human-robot interaction (HRI) by incorporating cutting-edge sensory feedback systems, real-time processing, and an intuitive architecture. A modular, scalable system architecture, developed using ROS2, integrates voice and vision systems to enable robots to execute commands precisely and interact seamlessly with users. Key contributions of this research include the fusion of voice and vision capabilities, the creation of a real-time audio segmentation algorithm, and the design of a flexible state machine controller. These outcomes highlight the potential of advanced HRI technologies and lay a strong foundation for future innovations.
2024 3rd International Conference on Automation, Robotics and Computer Engineering (ICARCE)
DOI: 10.1109/ICARCE63054.2024.00011
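A minimal sketch of an energy-based real-time audio segmentation routine of the kind the abstract mentions, assuming 16 kHz mono PCM input; the frame size, threshold, and minimum-duration values are illustrative and not the paper's algorithm:

```python
# Energy-based speech segmentation sketch (illustrative thresholds, not the paper's algorithm).
# Splits a mono PCM stream into segments by comparing short-frame RMS energy to a threshold.
import numpy as np

def segment_speech(samples, rate=16000, frame_ms=30, threshold=0.02, min_frames=5):
    """Return (start_sample, end_sample) pairs for contiguous high-energy regions."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames.astype(np.float64) ** 2).mean(axis=1))
    active = rms > threshold

    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_frames:               # drop very short bursts
                segments.append((start * frame_len, i * frame_len))
            start = None
    if start is not None and n_frames - start >= min_frames:
        segments.append((start * frame_len, n_frames * frame_len))
    return segments

# Usage: one second of low-level noise with a louder burst in the middle.
audio = np.random.randn(16000) * 0.01
audio[6000:10000] += 0.2 * np.sin(np.linspace(0, 200 * np.pi, 4000))
print(segment_speech(audio))
```

Feeding the resulting segments to a speech recognizer, and the recognized commands to a state machine controller, mirrors the voice-to-action flow described above.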
Journals (refereed):
Lau, M., Anderson, J., & Baltes, J. (2025). Integrating humanoid robots with human musicians for synchronized musical performances. PeerJ Computer Science, 11, e2632. https://doi.org/10.7717/peerj-cs.2632 - SCI: Q1 (0.719)
Koczkodaj, W. W., Kułakowski, K., Lau, M. C., Pedrycz, W., Pigazzini, A., Song, Y. ... Żądło, T. (2025). Monte Carlo validation of the pairwise comparisons method accuracy improvement for 3D objects. Advances in Science and Technology Research Journal, 19(6), 194-202. https://doi.org/10.12913/22998624/202891 - SCI: Q3 (0.299)
Sheng, H., Cai, S., Zheng, X., & Lau, M. (2025). Knitting Robots: A Deep Learning Approach for Reverse-Engineering Fabric Patterns. Electronics, 14(8), 1605. https://doi.org/10.3390/electronics14081605 - SCI: Q2 (0.615)
Sheng, H., & Lau, M. C. (2024). Optimising Real-Time Facial Expression Recognition with ResNet Architectures. Journal of Machine Intelligence and Data Science (JMIDS), 5(1), 33-45. https://jmids.avestia.com/2024/005.html
Zhang, J., Lau, M. C., & Zhu, Z. (2024). Hybrid CNN-GRU Model for Exercise Classification Using IMU Time-Series Data. Journal of Machine Intelligence and Data Science (JMIDS), 5(1), 54-64. https://jmids.avestia.com/2024/007.html
Journals (non-refereed):
Spriggs, K., Lau, M. C., & Passi, K. (2025). Personalizing Education through an Adaptive LMS with Integrated LLMs. arXiv preprint arXiv:2502.08655. https://doi.org/10.48550/arXiv.2502.08655
Book Chapter (refereed):
Lau, M. C., Anderson, J., Baltes, J., Gallegos, C. M., Diaz, M. M., Li, B., Chen, W., Nguyen, A., Yap, E.-H., Liu, P., Huang, X., Abdul Majeed, A. P. P., & Kim, U.-H. (2024). Collaborative Music Performances with Humanoid Robots and Humans. In Robot Intelligence Technology and Applications 8 (Vol. 1132, pp. 165–175). Springer. https://doi.org/10.1007/978-3-031-70684-4_14
Proceedings (refereed):
X. Zheng, H. Sheng, S. Cai, M. C. Lau and K. Zhao. (2024). Automated Knitting Instruction Generation from Fabric Images Using Deep Learning. 2024 3rd International Conference on Automation, Robotics and Computer Engineering (ICARCE), China, 2024, pp. 197-201, doi: 10.1109/ICARCE63054.2024.00044. - indexed: IEEE Xplore
W. Luo and M. Lau. (2024). A Dynamic Control System for Robotic Goalkeeping Using YOLOv9 and Path Planning. 2024 3rd International Conference on Automation, Robotics and Computer Engineering (ICARCE), China, 2024, pp. 45-49, doi: 10.1109/ICARCE63054.2024.00016. - indexed: IEEE Xplore
G. Gong and M. Lau. (2024). Implementation of Rao-Blackwellized Particle Filter on LIMO ROS2. 2024 3rd International Conference on Automation, Robotics and Computer Engineering (ICARCE), China, 2024, pp. 30-35, doi: 10.1109/ICARCE63054.2024.00013. - indexed: IEEE Xplore
J. Zhang and M. Lau. (2024). Human-Robot Interaction: A ROS2-Based Approach with Voice and Vision Integration. 2024 3rd International Conference on Automation, Robotics and Computer Engineering (ICARCE), China, 2024, pp. 21-25, doi: 10.1109/ICARCE63054.2024.00011. - indexed: IEEE Xplore
Haoliang Sheng and Meng Cheng Lau. (2024). Optimising Facial Expression Recognition: Comparing ResNet Architectures for Enhanced Performance. 11th International Conference of Control, Dynamic Systems, and Robotics (CDSR 2024), Toronto, Canada. Peer-reviewed
Jing Zhang, Meng Cheng Lau, and Ziping Zhu. (2024). Advanced Exercise Classification with a Hybrid CNN GRU Model: Utilising IMU Data from Cell Phones. 11th International Conference of Control, Dynamic Systems, and Robotics (CDSR 2024), Toronto, Canada. Peer-reviewed. URL
Meng Cheng Lau, John Anderson, Jacky Baltes, Christian Melendez Gallegos, Mario Mendez Diaz, and Borui Li. (2023). Collaborative Music Performances with Humanoid Robots and Humans. 11th International Conference on Robot Intelligence Technology and Applications, Xi'an, China. URL
System converts fabric images into complete machine-readable knitting instructions - techxplore.com
Fully automated robots can knit garments from fabric images alone - electronics360.com
Humanoid robots join human musicians for synchronized musical performances - techxplore.com