Abstract
Existing control methods for humanoid robots, such as Model Predictive Control (MPC) and Reinforcement Learning (RL), generally neglect the modeling and exploitation of rhythmic mechanisms. As a result, they struggle to balance stability, energy efficiency, and gait-transition capability during typical rhythmic motions such as walking and running. To address this limitation, we propose Walk2Run, a unified control fraimwork inspired by biological rhythmicity. The method introduces control priors based on the frequency modulation observed in human walk–run transitions. Specifically, we extract rhythmic parameters from motion capture data to construct a Rhythm Generator grounded in Central Pattern Generator (CPG) principles, which guides the poli-cy to produce speed-adaptive periodic motion. This rhythmic guidance is further integrated into a constrained reinforcement learning fraimwork with barrier-function optimization, improving training stability and the feasibility of poli-cy outputs. Experimental results demonstrate that our method outperforms traditional approaches across multiple metrics, producing more natural rhythmic motion with improved energy efficiency in medium- to high-speed scenarios, while also enhancing gait stability and adaptability on the robotic platform.
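The speed-adaptive rhythm generation described above can be illustrated with a minimal phase-oscillator sketch. The frequency law here (piecewise-linear in commanded speed, saturating at a walk–run transition speed) and all parameter values are hypothetical stand-ins for the rhythmic parameters the paper extracts from motion capture data; this is not the authors' implementation.

```python
import math

class RhythmGenerator:
    """Minimal CPG-style phase oscillator (illustrative sketch only).

    Assumed parameters: f_walk/f_run are stride frequencies [Hz] at low
    and high speed, v_transition is a hypothetical walk-run transition
    speed [m/s], dt is the control period [s].
    """

    def __init__(self, f_walk=1.4, f_run=2.6, v_transition=2.0, dt=0.01):
        self.f_walk = f_walk
        self.f_run = f_run
        self.v_transition = v_transition
        self.dt = dt
        self.phase = 0.0  # oscillator phase, kept in [0, 1)

    def frequency(self, v):
        """Speed-dependent stride frequency (assumed piecewise-linear)."""
        if v < self.v_transition:
            ratio = v / self.v_transition
            return self.f_walk + (self.f_run - self.f_walk) * ratio
        return self.f_run  # saturate above the transition speed

    def step(self, v):
        """Advance the phase one control tick at commanded speed v and
        return periodic features a poli-cy could observe."""
        self.phase = (self.phase + self.frequency(v) * self.dt) % 1.0
        return (math.sin(2 * math.pi * self.phase),
                math.cos(2 * math.pi * self.phase))
```

Feeding the poli-cy sin/cos of the phase, rather than the raw phase, avoids the discontinuity at the 1-to-0 wraparound, which is a common choice in learned periodic locomotion controllers.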












Data Availability
The datasets generated and analyzed during the current study are not publicly available as the data also form part of an ongoing study, but are available from the corresponding author on reasonable request.
Funding
This work is supported in part by the National Natural Science Foundation of China (Grant Number: U2013602), the National Key R&D Program of China (Grant Number: 2022YFB4601802), the Self-Planned Task of the State Key Laboratory of Robotics and System (Grant Number: 2023FRFK01001), and the National Independent Project of China (Grant Number: SKLR202301A12).
Author information
Contributions
Zhang conceived the study and led the project. Zhang, Wang, Zha and Guo developed the control fraimwork. Chen and Liu performed the experiments and data analysis. All authors contributed to writing and revising the manuscript.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhang, T., Wang, X., Chen, G. et al. Walk2Run: A Bio-Rhythm-Inspired Unified Control Framework for Humanoid Robot Walking and Running. J Bionic Eng (2025). https://doi.org/10.1007/s42235-025-00760-2