Our paper, Exciting Action: Investigating Efficient Exploration for Learning Musculoskeletal Humanoid Locomotion, was accepted at the International Conference on Humanoid Robots. In this work, we address the challenges of learning a locomotion controller for musculoskeletal systems, particularly over-actuation and high-dimensional action spaces. While reinforcement learning methods have struggled to generate human-like gaits due to the difficulty of designing effective reward functions, we show that adversarial imitation learning offers a promising solution.
Our approach combines insights from recent literature with novel techniques, and we validate it by learning walking and running gaits on a simulated humanoid model with 16 degrees of freedom and 92 muscle-tendon units. Notably, natural-looking gaits emerge from only a few demonstrations.
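To illustrate the core idea behind adversarial imitation learning, here is a minimal GAIL-style sketch (not the paper's actual implementation): a logistic discriminator is trained to separate expert state-action pairs from policy rollouts, and the policy's imitation reward is derived from how "expert-like" the discriminator finds its samples. The toy data, feature dimension, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for expert demonstrations and policy rollouts:
# each row is a (state, action) feature vector. Real systems would
# use features from simulation, e.g. joint angles and muscle excitations.
expert_sa = rng.normal(loc=1.0, size=(256, 4))
policy_sa = rng.normal(loc=-1.0, size=(256, 4))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Logistic discriminator D(s, a) = sigmoid(w.x + b), trained with
# binary cross-entropy to output ~1 on expert pairs, ~0 on policy pairs.
w = np.zeros(4)
b = 0.0
lr = 0.1
for _ in range(200):
    d_exp = sigmoid(expert_sa @ w + b)
    d_pol = sigmoid(policy_sa @ w + b)
    # Average BCE gradient over both batches.
    grad_w = -(expert_sa.T @ (1.0 - d_exp) - policy_sa.T @ d_pol) / 256.0
    grad_b = -((1.0 - d_exp).sum() - d_pol.sum()) / 256.0
    w -= lr * grad_w
    b -= lr * grad_b

# GAIL-style imitation reward for the policy: large when the
# discriminator mistakes a policy sample for an expert one.
eps = 1e-8
reward = -np.log(1.0 - sigmoid(policy_sa @ w + b) + eps)
```

In a full training loop, this reward replaces a hand-designed one: the policy is updated with any RL algorithm to maximize it, while the discriminator is periodically retrained on fresh rollouts, sidestepping the reward-engineering problem the paper highlights.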