Open Positions at the National University of Singapore (NUS):
Two China Scholarship Council (CSC) joint-training visiting PhD positions per year (CSC联合培养博士生项目, 12–24 months)
CSC funded 4-year PhD (CSC攻读博士学位项目)
Self-funded 4-year PhD
CSC funded postdoc positions (国家公派博士后项目)
Research directions: control theory, multi-agent control, learning control, robot control, reinforcement learning, and related topics
Work location: Department of Electrical & Computer Engineering, National University of Singapore, Singapore 117583.
Contact: xiaocong@nus.edu.sg
Open Positions at Eastern Institute of Technology, Ningbo:
Ph.D.: Our PhD program is in collaboration with top universities such as Shanghai Jiao Tong University (上海交通大学, ranked 54th globally by U.S. News), University of Science and Technology of China (中国科学技术大学, ranked 82nd globally by U.S. News), The Hong Kong Polytechnic University (PolyU, ranked 67th globally by U.S. News), and University of Warwick (ranked 69th globally by QS World University Rankings). Upon graduation, your degree will be awarded by the respective collaborating university. Additional details can be found at the bottom of the page.
Two-Year Postdoc Fellowship (2-3 positions available, annual salary up to US$118,000, including government subsidy): The Postdoc Fellowship is not tied to specific ongoing projects, allowing the candidate to propose their own research project within the lab's scope. Applicants should hold a doctoral degree and demonstrate strong academic potential and vision. Candidates must be available to work full-time in Ningbo, China. Ningbo is a thriving coastal city, located near where DeepSeek and Unitree were founded. Additional details can be found at the bottom of the page.
Research Assistant Professor: Applicants must hold a doctoral degree and demonstrate exceptional academic achievement. The successful candidate will continue conducting impactful research, assist the PI in advising students and postdocs, and contribute to the preparation of grant proposals.
Research Assistants/Interns (with the possibility of conversion to a Ph.D. position): These positions are best suited for candidates with a strong desire to pursue a PhD. Comprehensive resources will be provided to support independent or guided research aligned with the candidates’ interests, preparing them for PhD applications at EIT or other universities. Additional details can be found at the bottom of the page.
If you are interested in robotics, control theory, or machine learning, including but not limited to the following topics, please email me (xiaocongli@eitech.edu.cn) your CV, a research plan (highlighting the research challenges and your methodology), and 1–3 representative research papers:
Contact-Rich Robot Manipulation
Safe Learning-based Control
Agile Robot Control
Robot Learning (Embodied AI)
Robot Compliance Control
Multi-Agent Learning Control
Physical Human-Robot Interaction
Some good reference papers to take a look at if you are applying for a PhD, Research Assistant, or Research Intern position:
Agile Robot Control
Kaufmann, E., Bauersfeld, L., Loquercio, A., Müller, M., Koltun, V., & Scaramuzza, D. (2023). Champion-level drone racing using deep reinforcement learning. Nature, 620(7976), 982-987.
Song, Y., Romero, A., Müller, M., Koltun, V., & Scaramuzza, D. (2023). Reaching the limit in autonomous racing: Optimal control versus reinforcement learning. Science Robotics, 8(82), eadg1462.
Luo, S., Jiang, M., Zhang, S., Zhu, J., Yu, S., Dominguez Silva, I., ... & Su, H. (2024). Experiment-free exoskeleton assistance via learning in simulation. Nature, 630(8016), 353-359.
Romero, A., Sun, S., Foehn, P., & Scaramuzza, D. (2022). Model predictive contouring control for time-optimal quadrotor flight. IEEE Transactions on Robotics, 38(6), 3340-3356.
Cheng, S., Kim, M., Song, L., Yang, C., Jin, Y., Wang, S., & Hovakimyan, N. (2024). DiffTune: Auto-tuning through auto-differentiation. IEEE Transactions on Robotics.
Wei, M., Zheng, L., Wu, Y., Mei, R., & Cheng, H. (2025). Meta-Learning Enhanced Model Predictive Contouring Control for Agile and Precise Quadrotor Flight. IEEE Transactions on Robotics.
Richards, S. M., Azizan, N., Slotine, J. J., & Pavone, M. (2023). Control-oriented meta-learning. The International Journal of Robotics Research, 42(10), 777-797.
Saied, H., Chemori, A., Bouri, M., El Rafei, M., & Francis, C. (2023). Feedforward super-twisting sliding mode control for robotic manipulators: Application to PKMs. IEEE Transactions on Robotics, 39(4), 3167-3184.
Jia, J., Zhang, W., Guo, K., Wang, J., Yu, X., Shi, Y., & Guo, L. (2023). Evolver: Online learning and prediction of disturbances for robot control. IEEE Transactions on Robotics, 40, 382-402.
O’Connell, M., Shi, G., Shi, X., Azizzadenesheli, K., Anandkumar, A., Yue, Y., & Chung, S. J. (2022). Neural-fly enables rapid learning for agile flight in strong winds. Science Robotics, 7(66), eabm6597.
Nazeer, M. S., Laschi, C., & Falotico, E. (2024). RL-based adaptive controller for high precision reaching in a soft robot arm. IEEE Transactions on Robotics.
Coulson, J., Lygeros, J., & Dörfler, F. (2019). Data-enabled predictive control: In the shallows of the DeePC. In 2019 18th European Control Conference (ECC) (pp. 307-312). IEEE.
Jia, J., Yang, Z., Wang, M., Guo, K., Yang, J., Yu, X., & Guo, L. (2024). Feedback Favors the Generalization of Neural ODEs. arXiv preprint arXiv:2410.10253.
Wei, L., Feng, H., Yang, Y., Feng, R., Hu, P., Zheng, X., ... & Wu, T. (2024). Closed-loop diffusion control of complex physical systems. arXiv preprint arXiv:2408.03124.
Zhou, G., Swaminathan, S., Raju, R. V., Guntupalli, J. S., Lehrach, W., Ortiz, J., ... & Murphy, K. (2024). Diffusion model predictive control. arXiv preprint arXiv:2410.05364.
He, G., Choudhary, Y., & Shi, G. (2024). Self-Supervised Meta-Learning for All-Layer DNN-Based Adaptive Control with Stability Guarantees. arXiv preprint arXiv:2410.07575.
Ye, N., Zeng, Z., Zhou, J., Zhu, L., Duan, Y., Wu, Y., ... & Zhou, C. (2024). OoD-Control: Generalizing Control in Unseen Environments. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Kalaria, D., Xue, H., Xiao, W., Tao, T., Shi, G., & Dolan, J. M. (2024). Agile Mobility with Rapid Online Adaptation via Meta-learning and Uncertainty-aware MPPI. arXiv preprint arXiv:2410.06565.
Robot Learning (Embodied AI)
Zhao, T., Kumar, V., Levine, S., & Finn, C. (2023). Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware. Robotics: Science and Systems (RSS 2023).
Chi, C., Xu, Z., Feng, S., Cousineau, E., Du, Y., Burchfiel, B., ... & Song, S. (2023). Diffusion policy: Visuomotor policy learning via action diffusion. The International Journal of Robotics Research, 02783649241273668.
Hou, Y., Liu, Z., Chi, C., Cousineau, E., Kuppuswamy, N., Feng, S., ... & Song, S. (2024). Adaptive Compliance Policy: Learning Approximate Compliance for Diffusion Guided Control. arXiv preprint arXiv:2410.09309.
Xue, H., Ren, J., Chen, W., Zhang, G., Fang, Y., Gu, G., ... & Lu, C. (2025). Reactive Diffusion Policy: Slow-Fast Visual-Tactile Policy Learning for Contact-Rich Manipulation. arXiv preprint arXiv:2503.02881.
Li, T., Sun, S., Aditya, S. S., & Figueroa, N. (2025). Elastic Motion Policy: An Adaptive Dynamical System for Robust and Efficient One-Shot Imitation Learning. arXiv preprint arXiv:2503.08029.
Mark, M. S., Gao, T., Sampaio, G. G., Srirama, M. K., Sharma, A., Finn, C., & Kumar, A. (2024). Policy agnostic RL: Offline RL and online RL fine-tuning of any class and backbone. arXiv preprint arXiv:2412.06685.
Luo, J., Xu, C., Wu, J., & Levine, S. (2024). Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning. arXiv preprint arXiv:2410.21845.
Chen, Y., Tian, S., Liu, S., Zhou, Y., Li, H., & Zhao, D. (2025). ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy. arXiv preprint arXiv:2502.05450.
Yu, J., Liu, H., Yu, Q., Ren, J., Hao, C., Ding, H., ... & Zhang, W. (2025). ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation. arXiv preprint arXiv:2505.22159.
Xu, Z., & She, Y. (2024). LeTac-MPC: Learning Model Predictive Control for Tactile-Reactive Grasping. IEEE Transactions on Robotics.
Ankile, L., Simeonov, A., Shenfeld, I., Torne, M., & Agrawal, P. (2024). From Imitation to Refinement--Residual RL for Precise Assembly. arXiv preprint arXiv:2407.16677.
Xu, X., Hou, Y., Liu, Z., & Song, S. (2025). Compliant Residual DAgger: Improving Real-World Contact-Rich Manipulation with Human Corrections. arXiv preprint arXiv:2506.16685.
He, T., Gao, J., Xiao, W., Zhang, Y., Wang, Z., Wang, J., ... & Shi, G. (2025). ASAP: Aligning simulation and real-world physics for learning agile humanoid whole-body skills. arXiv preprint arXiv:2502.01143.
Zhang, C., Cui, S., Hu, J., Jiang, T., Zhang, T., Wang, R., & Wang, S. (2025). TacFlex: Multi-Mode Tactile Imprints Simulation for Visuotactile Sensors with Coating Patterns. IEEE Transactions on Robotics.
Zhang, C., Hao, P., Cao, X., Hao, X., Cui, S., & Wang, S. (2025). VTLA: Vision-tactile-language-action model with preference learning for insertion manipulation. arXiv preprint arXiv:2505.09577.
Lyu, J., Li, Z., Shi, X., Xu, C., Wang, Y., & Wang, H. (2025). DyWA: Dynamics-adaptive world action model for generalizable non-prehensile manipulation. arXiv preprint arXiv:2503.16806.
Safe Control
He, T., Zhang, C., Xiao, W., He, G., Liu, C., & Shi, G. (2024). Agile but safe: Learning collision-free high-speed legged locomotion. arXiv preprint arXiv:2401.17583.
Brunke, L., Greeff, M., Hall, A. W., Yuan, Z., Zhou, S., Panerati, J., & Schoellig, A. P. (2022). Safe learning in robotics: From learning-based control to safe reinforcement learning. Annual Review of Control, Robotics, and Autonomous Systems, 5(1), 411-444.
Dawson, C., Gao, S., & Fan, C. (2023). Safe control with learned certificates: A survey of neural Lyapunov, barrier, and contraction methods for robotics and control. IEEE Transactions on Robotics, 39(3), 1749-1767.
Berkenkamp, F., Schoellig, A. P., & Krause, A. (2016). Safe controller optimization for quadrotors with Gaussian processes. In 2016 IEEE International Conference on Robotics and Automation (ICRA) (pp. 491-496).
Ames, A. D., Xu, X., Grizzle, J. W., & Tabuada, P. (2016). Control barrier function based quadratic programs for safety critical systems. IEEE Transactions on Automatic Control, 62(8), 3861-3876.
Wabersich, K. P., et al. (2023). Data-driven safety filters: Hamilton-Jacobi reachability, control barrier functions, and predictive methods for uncertain systems. IEEE Control Systems Magazine, 43(5), 137-177.
Compton, W. D., Cohen, M. H., & Ames, A. D. (2024). Learning for layered safety-critical control with predictive control barrier functions. arXiv preprint arXiv:2412.04658. (L4DC 2025 Best Paper Award)
Xiao, W., Wang, T. H., Hasani, R., Chahine, M., Amini, A., Li, X., & Rus, D. (2023). Barriernet: Differentiable control barrier functions for learning of safe robot control. IEEE Transactions on Robotics, 39(3), 2289-2307.
Wang, G., Ren, K., Morgan, A. S., & Hang, K. (2025). Caging in time: A framework for robust object manipulation under uncertainties and limited robot perception. The International Journal of Robotics Research, 02783649251343926.
Robot Compliance Control
Haddadin, S., & Shahriari, E. (2024). Unified force-impedance control. The International Journal of Robotics Research, 43(13), 2112-2141.
Zeng, C., Yang, C., Jin, Z., & Zhang, J. (2024). Hierarchical impedance, force, and manipulability control for robot learning of skills. IEEE/ASME Transactions on Mechatronics.
Li, Y., Zheng, L., Wang, Y., Dong, E., & Zhang, S. (2025). Impedance Learning-based Adaptive Force Tracking for Robot on Unknown Terrains. IEEE Transactions on Robotics.
Martín-Martín, R., et al. (2019). Variable impedance control in end-effector space: An action space for reinforcement learning in contact-rich tasks. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE.
Hou, Y., Liu, Z., Chi, C., Cousineau, E., Kuppuswamy, N., Feng, S., ... & Song, S. (2024). Adaptive Compliance Policy: Learning Approximate Compliance for Diffusion Guided Control. arXiv preprint arXiv:2410.09309.
Noseworthy, M., Tang, B., Wen, B., Handa, A., Kessens, C., Roy, N., ... & Akinola, I. (2025). Forge: Force-guided exploration for robust contact-rich manipulation under uncertainty. IEEE Robotics and Automation Letters.
Xu, X., Hou, Y., Liu, Z., & Song, S. (2025). Compliant Residual DAgger: Improving Real-World Contact-Rich Manipulation with Human Corrections. arXiv preprint arXiv:2506.16685.
Aerial Manipulation
Wang, M., Chen, Z., Guo, K., Yu, X., Zhang, Y., Guo, L., & Wang, W. (2023). Millimeter-level pick and peg-in-hole task achieved by aerial manipulator. IEEE Transactions on Robotics, 40, 1242-1260.
He, G., Guo, X., Tang, L., Zhang, Y., Mousaei, M., Xu, J., ... & Shi, G. (2025). Flying Hand: End-Effector-Centric Framework for Versatile Aerial Manipulation Teleoperation and Policy Learning. arXiv preprint arXiv:2504.10334.
Li, G., Liu, X., & Loianno, G. (2024). Human-aware physical human-robot collaborative transportation and manipulation with multiple aerial robots. IEEE Transactions on Robotics.
Zeng, J., Gimenez, A. M., Vinitsky, E., Alonso-Mora, J., & Sun, S. (2025). Decentralized Aerial Manipulation of a Cable-Suspended Load using Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2508.01522.
Additional Details for Postdoc Fellowship
The annual salary starts from 350,000 RMB, with a 200,000 RMB supplement for Outstanding Postdoctoral Fellows selected by EIT and a 300,000 RMB supplement for those in the Excellent Overseas Postdoc Program. It is possible to apply for the Excellent Overseas Postdoc Program prior to onboarding, but applications must be submitted within specific time windows. Interested candidates are encouraged to contact me before graduation.
We offer a monthly allowance to cover lunch, transportation, and phone expenses. Cash vouchers are also provided for birthdays and major Chinese holidays.
All PhD graduates are eligible to apply for the Ningbo city government Relocation Allowance for High-Level Talents. PhD graduates from universities ranked among the top 200 globally are eligible for an additional government living allowance.
Highly subsidized housing options are available (200 USD per month for a one-bedroom unit after subsidy).
Successful applicants for NSFC projects and provincial research projects will receive 1:1 matching funding support from the Ningbo government.
Postdoc Position Requirements
(1) Applicants should in principle be no older than 35 and should have obtained, or be about to obtain, a doctoral degree within the past three years; exceptions to either requirement may be requested.
(2) Good English reading, communication, and writing skills, along with strong teamwork, communication, and coordination abilities.
Compensation and Benefits
(1) Annual salary starting from 350,000 RMB, with a 200,000 RMB supplement for EIT Outstanding Postdoctoral Fellows and a 300,000 RMB supplement for those selected into the Excellent Overseas Postdoc Program, for a combined total of up to 850,000 RMB. Applications for the Excellent Overseas Postdoc Program may be submitted before onboarding, with employment starting after selection is confirmed. A monthly allowance is provided to cover lunch, transportation, and phone expenses. In addition, you may apply for the Ningbo High-Level Talent relocation allowance and home-purchase subsidy (at least 150,000 RMB plus 200,000 RMB and up). PhD graduates from universities ranked in the global top 200 may apply for an additional government living allowance of 100,000 RMB.
(2) Full social insurance and housing fund contributions, paid annual leave, and holiday benefits. Subject to government talent-apartment policies and housing availability, priority support is given for applications to government talent apartments (move-in ready).
(3) During the fellowship, you may apply to the local government for 200,000 RMB in research funding.
(4) After completing the fellowship, those who stay on to work at enterprises or public institutions in Ningbo are eligible for subsidies of 400,000-600,000 RMB.
(5) Recipients of China Postdoctoral Science Foundation grants or provincial postdoctoral research project funding receive 1:1 matching funds from the Ningbo government.
(6) Outstanding performers may receive long-term appointments on the research-faculty (Researcher) track.
(7) If desired, joint training with Tsinghua University or the University of Science and Technology of China (USTC) is available, with the postdoctoral certificate issued by Tsinghua or USTC upon completion.
Additional Details for Ph.D. and Research Assistants/Interns
Our lab follows the management philosophy of world-class laboratories, providing students with generous research assistantship stipends and ample experimental equipment, caring for students' physical and mental well-being, and prioritizing their long-term career development.
Research is student-led: the advisor will never take first authorship from a student.
The group is newly established and currently has no industry-contract projects, so students can focus entirely on fundamental research.
The group has a dedicated administrative assistant; students do not need to handle equipment procurement, travel reimbursement, or other administrative matters.
We advocate a healthy work-life balance and especially encourage students to get sufficient physical exercise.
For students who wish to continue their studies, we recommend suitable overseas PhD and postdoc opportunities, for example at NUS, NTU, and Harvard.
PhD slots in the group will be offered preferentially to research assistants and interns who have performed well in the lab. The specific allocation follows the university's arrangements and also takes student preferences into account (e.g., whether they wish to study abroad or pursue a PhD domestically). PhD students enrolling in Fall 2025 or later can live in single-occupancy dormitories on campus.
Joint PhD Training with Shanghai Jiao Tong University and USTC
Since 2022, EIT has jointly recruited PhD students with Shanghai Jiao Tong University (SJTU), and since 2023 with USTC, under a dual-supervisor system. Students register at SJTU (USTC), complete their coursework there in the first year, and upon graduation receive the SJTU (USTC) doctoral degree and diploma. The standard program length is 5 years for direct-entry students (bachelor's to PhD) and 4 years for other doctoral students; the admission category is non-directed employment and the study mode is full-time. While studying at SJTU (USTC), students receive the same research assistantship stipend as that university's PhD students; while working at EIT, students receive a highly competitive stipend, with additional support for exceptional performers. Admission guidelines: https://www.eitech.edu.cn/?admission_category=graduate
Joint PhD Training with The Hong Kong Polytechnic University
Since 2022, EIT has jointly recruited PhD students with The Hong Kong Polytechnic University (PolyU) under a dual-supervisor system. Students spend the first two years at PolyU and the last two at EIT, and upon graduation receive a PolyU doctoral degree identical to that awarded to students who complete their entire program at PolyU. While at PolyU, students receive the same scholarship and stipend as PolyU PhD students; while working at EIT, they receive a highly competitive research stipend, with additional support for exceptional performers. Admission guidelines: https://www.eitech.edu.cn/?admission_category=graduate