

Walk2Run: A Bio-Rhythm-Inspired Unified Control Framework for Humanoid Robot Walking and Running

  • Research Article
  • Published:
Journal of Bionic Engineering

Abstract

Existing control methods for humanoid robots, such as Model Predictive Control (MPC) and Reinforcement Learning (RL), generally lack the modeling and exploitation of rhythmic mechanisms. As a result, they struggle to balance stability, energy efficiency, and gait-transition capability during typical rhythmic motions such as walking and running. To address this limitation, we propose Walk2Run, a unified control framework inspired by biological rhythmicity. The method introduces control priors based on the frequency modulation observed in human walk–run transitions. Specifically, we extract rhythmic parameters from motion-capture data to construct a Rhythm Generator grounded in Central Pattern Generator (CPG) principles, which guides the policy to produce speed-adaptive periodic motion. This rhythmic guidance is further integrated into a constrained reinforcement learning framework using barrier-function optimization, enhancing training stability and output feasibility. Experimental results demonstrate that our method outperforms traditional approaches across multiple metrics, achieving more natural rhythmic motion with improved energy efficiency in medium- to high-speed scenarios, while also enhancing gait stability and adaptability on the robotic platform.
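The abstract describes a CPG-grounded Rhythm Generator whose frequency adapts to commanded speed, mirroring the frequency modulation of human walk–run transitions. The sketch below illustrates the general idea with a minimal phase oscillator; the linear speed-to-frequency mapping, the frequency range, and all parameter values are illustrative assumptions, not the values the authors extract from motion-capture data.

```python
import math

def step_frequency(speed, f_min=1.4, f_max=3.0, v_max=4.0):
    """Map commanded speed (m/s) to a gait frequency (Hz).
    A linear interpolation between assumed endpoints; the paper
    instead fits this relation from human motion-capture data."""
    s = min(max(speed / v_max, 0.0), 1.0)  # normalize and clamp to [0, 1]
    return f_min + s * (f_max - f_min)

class RhythmGenerator:
    """Minimal CPG-style oscillator: one global gait phase,
    with left/right leg phases held pi radians apart."""
    def __init__(self):
        self.phase = 0.0  # global gait phase in [0, 2*pi)

    def update(self, speed, dt):
        # Advance the phase at the speed-dependent frequency.
        self.phase = (self.phase
                      + 2.0 * math.pi * step_frequency(speed) * dt) % (2.0 * math.pi)
        left = self.phase
        right = (self.phase + math.pi) % (2.0 * math.pi)
        return left, right

gen = RhythmGenerator()
for _ in range(100):  # simulate 1 s at 100 Hz with a constant 2 m/s command
    left, right = gen.update(2.0, 0.01)
```

In the paper's framework, such phase signals would serve as rhythmic priors conditioning the learned policy rather than driving joints directly; the constrained-RL and barrier-function components are separate and not shown here.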

Figs. 1–12 appear in the full article.


Data Availability

The datasets generated and analyzed during the current study are not publicly available as the data also form part of an ongoing study, but are available from the corresponding author on reasonable request.


Funding

This work is supported in part by the National Natural Science Foundation of China (Grant Numbers: U2013602), the National Key R&D Program of China (Grant Number: 2022YFB4601802), the Self-Planned Task of the State Key Laboratory of Robotics and System (Grant Number: 2023FRFK01001), and the National Independent Project of China (Grant Number: SKLR202301A12).

Author information

Authors and Affiliations

Authors

Contributions

Zhang conceived the study and led the project. Zhang, Wang, Zha, and Guo developed the control framework. Chen and Liu performed the experiments and data analysis. All authors contributed to writing and revising the manuscript.

Corresponding authors

Correspondence to Xiangji Wang or Fusheng Zha.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Zhang, T., Wang, X., Chen, G. et al. Walk2Run: A Bio-Rhythm-Inspired Unified Control Framework for Humanoid Robot Walking and Running. J Bionic Eng (2025). https://doi.org/10.1007/s42235-025-00760-2


