Embodied Self-Identification and Damage Recovery in Reconfigurable Robotics
At the ELIXIR Lab, we investigate how robots can autonomously develop an internal understanding of their own bodies—what we call embodied self-identification—and use this self-awareness to recover from physical damage, adapt their behavior, and continue functioning effectively in dynamic environments.
Our research is motivated by the vision of lifelong adaptive robotic systems that can remain operational despite wear, unexpected damage, or reconfiguration. This is especially critical for robots operating in remote or extreme settings, such as extraterrestrial exploration, disaster zones, or deep-sea environments, where human intervention is limited or impossible.
This work combines concepts from embodied cognition, robotics, and AI, using a blend of geometric mechanics, Bayesian inference, neural network-based self-modeling, and sim-to-real reinforcement learning. It lays the foundation for future robotic systems that are not only autonomous, but also self-aware, self-healing, and inherently resilient.
Our experimental platforms include both soft and rigid-body systems (e.g., multi-legged robots and robotic manipulators) that support physical reconfiguration and controlled damage injection. These platforms enable us to test theories of embodied intelligence and validate real-world recovery strategies across ground-based and potentially space-deployable robots.
We develop algorithms and learning frameworks that allow a robot to: (1) identify the parameters of its own body, (2) detect changes in its morphology over time, (3) adapt its behavior to the updated body model, and (4) close the perception-action loop that ties learning and control together. Each of these capabilities is described below.
Our approach enables robots to self-discover the parameters of their own body—including link lengths, joint orientations, and inertial properties—through a combination of onboard sensing and embodied interaction with the environment. This process eliminates the need for manual calibration or pre-defined models, making it ideal for modular or reconfigurable platforms. The inference pipeline incorporates sensor fusion techniques and is refined through multimodal fine-tuning, where learned body representations are updated based on discrepancies between predicted and observed outcomes across multiple sensing modalities.
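To make this idea concrete, the sketch below shows one minimal instance of prediction-error-driven self-identification, assuming a planar two-link arm: link lengths are recovered by fitting a forward-kinematic model to noisy end-effector observations collected during exploratory motion. The arm model, function names, and noise levels are illustrative assumptions, not our deployed pipeline.

```python
# Minimal sketch (illustrative, not the lab's actual pipeline): recover the
# two link lengths of a planar 2-link arm from onboard data by minimizing
# the discrepancy between predicted and observed end-effector positions.
import numpy as np
from scipy.optimize import least_squares

def forward_kinematics(link_lengths, joint_angles):
    """Predicted end-effector (x, y) positions for a planar 2-link arm."""
    l1, l2 = link_lengths
    q1, q2 = joint_angles[:, 0], joint_angles[:, 1]
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.stack([x, y], axis=1)

def identify_links(joint_angles, observed_xy, initial_guess=(1.0, 1.0)):
    """Fit link lengths so model predictions match observations (least squares)."""
    def residuals(params):
        return (forward_kinematics(params, joint_angles) - observed_xy).ravel()
    result = least_squares(residuals, x0=np.asarray(initial_guess), bounds=(0.0, np.inf))
    return result.x

# Example: a robot whose true links are 0.8 m and 0.5 m, observed through
# noisy proprioception plus a vision-like position estimate.
rng = np.random.default_rng(0)
true_links = np.array([0.8, 0.5])
angles = rng.uniform(-np.pi, np.pi, size=(200, 2))                 # exploratory motions
observations = forward_kinematics(true_links, angles)
observations += rng.normal(scale=0.005, size=observations.shape)   # sensor noise

print(identify_links(angles, observations))   # close to [0.8, 0.5]
```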
Robots in unstructured or long-duration deployments must be able to detect and adapt to changes in their own physical structure—whether due to degradation, hardware failure, or dynamic reconfiguration. We develop algorithms that continuously monitor sensory signals for signs of morphological change, perform real-time hypothesis testing, and update the internal body schema accordingly. This dynamic self-modeling capability is essential for fault tolerance, modular self-assembly, and mission resilience in unknown environments.
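As a simplified illustration of this kind of monitoring, the sketch below uses a CUSUM-style statistic over prediction residuals: evidence accumulates whenever the current body model's predictions drift beyond their nominal error level, and crossing a threshold flags a likely morphological change and triggers re-identification. The class name, thresholds, and residual definition are assumptions for exposition only.

```python
# Minimal sketch of online change detection (illustrative, not the lab's
# actual algorithm): a CUSUM-style statistic accumulates evidence that
# prediction errors of the current body model have drifted upward.
import numpy as np

class MorphologyChangeDetector:
    def __init__(self, nominal_error=0.01, slack=0.005, threshold=0.2):
        self.nominal_error = nominal_error   # expected residual under the current body model
        self.slack = slack                   # tolerated drift before evidence accumulates
        self.threshold = threshold           # decision boundary for declaring a change
        self.cusum = 0.0

    def update(self, predicted, observed):
        """Feed one prediction/observation pair; return True if a change is declared."""
        residual = float(np.linalg.norm(np.asarray(predicted) - np.asarray(observed)))
        # Accumulate only the part of the error exceeding nominal + slack.
        self.cusum = max(0.0, self.cusum + residual - self.nominal_error - self.slack)
        if self.cusum > self.threshold:
            self.cusum = 0.0                 # reset after raising the alarm
            return True
        return False

# Usage inside the control loop: compare the body model's prediction of the
# next sensory state against what actually arrives, and re-identify on alarm.
detector = MorphologyChangeDetector()
# if detector.update(body_model.predict(state, action), measured_next_state):
#     trigger_reidentification()
```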
Once the updated body model is established, the robot must adjust its behavior accordingly. We design adaptive control and motion planning frameworks that can reconfigure task strategies—such as gait generation, manipulation trajectories, or obstacle avoidance—based on the new capabilities and limitations of the damaged or restructured system. This adaptation is performed online, enabling seamless transition from nominal to degraded modes of operation without manual intervention.
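The sketch below gives a deliberately simple example of such online re-planning, assuming a legged platform whose body schema marks each leg as functional or damaged: when a leg is lost, step phases are redistributed evenly over the remaining legs. The schema structure and gait rule are illustrative placeholders, not our actual planner.

```python
# Minimal sketch (illustrative): once the updated body schema marks some legs
# as unusable, regenerate a wave gait by spreading step phases evenly over
# the legs that still work, moving from the nominal to a degraded gait online.
from dataclasses import dataclass

@dataclass
class BodySchema:
    legs: dict  # leg name -> True if functional, False if damaged

def replan_gait(schema: BodySchema) -> dict:
    """Assign each functional leg a phase offset in [0, 1) for a wave gait."""
    working = [name for name, ok in schema.legs.items() if ok]
    if not working:
        raise RuntimeError("no functional legs: locomotion not possible")
    # Evenly stagger the swing phases of the remaining legs.
    return {name: i / len(working) for i, name in enumerate(working)}

# Nominal hexapod: six working legs.
schema = BodySchema(legs={f"leg_{i}": True for i in range(6)})
print(replan_gait(schema))          # six legs, phases 0, 1/6, ..., 5/6

# After damage is detected on leg_2, the schema is updated and the gait
# is re-planned without manual intervention.
schema.legs["leg_2"] = False
print(replan_gait(schema))          # five legs, phases 0, 1/5, ..., 4/5
```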
Our methodology integrates reinforcement learning (RL) with model-based control to form a robust perception-action loop that is both data-efficient and generalizable. RL enables the robot to autonomously explore and learn from its environment, while model-based control provides safety and stability guarantees. The synergy between these approaches ensures that changes in morphology or environment lead to updated behavior that is grounded in both learned experience and formal control theory. This holistic loop is at the core of what we call embodied intelligence.
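The skeleton below sketches one way such a loop can be organized: a learned policy proposes actions, and a model-based filter projects them onto actuator limits drawn from the current body model before execution, so exploration stays within specified bounds even as the morphology estimate changes. The classes, limits, and toy plant are illustrative assumptions rather than our implementation.

```python
# Minimal sketch of a perception-action loop combining a learned policy with
# a model-based safety layer (all names and limits are illustrative).
import numpy as np

class LearnedPolicy:
    """Stand-in for an RL policy trained in simulation (sim-to-real)."""
    def __init__(self, action_dim, rng):
        self.action_dim = action_dim
        self.rng = rng

    def act(self, observation):
        return self.rng.normal(size=self.action_dim)    # placeholder for a trained network

class ModelBasedFilter:
    """Enforces actuator limits taken from the current body model."""
    def __init__(self, torque_limits):
        self.torque_limits = np.asarray(torque_limits)

    def project(self, action):
        return np.clip(action, -self.torque_limits, self.torque_limits)

def perception_action_step(observation, policy, safety_filter, plant):
    raw_action = policy.act(observation)                # learned, exploratory proposal
    safe_action = safety_filter.project(raw_action)     # model-based constraint enforcement
    return plant(safe_action)                           # next observation from the robot

# Usage with a toy "plant": when the body model is updated after damage, only
# the filter's limits need to change; the loop structure stays the same.
rng = np.random.default_rng(0)
policy = LearnedPolicy(action_dim=3, rng=rng)
safety = ModelBasedFilter(torque_limits=[2.0, 2.0, 2.0])
obs = np.zeros(3)
obs = perception_action_step(obs, policy, safety, plant=lambda a: a * 0.1)
```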
Selected Publications
To come