Embodied AI

Robotics Security Division  ·  ETH Robotics Club

Centralized hub for Embodied AI experiments within the Robotics Security Division: foundation model evaluation, adversarial robustness, and generalization research across robot policies and simulation environments.

Resources

Experiments

Experiment 1
Test Scene Overfitting in Robot Foundation Models
Do models overfit to the specific scenes, backgrounds, and objects present in their evaluation benchmarks? We compare zero-shot and fine-tuned performance on in-distribution vs. out-of-distribution scenes across multiple models; a sketch of the evaluation grid follows below.
In progress
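The sketch below shows one way the in-distribution vs. out-of-distribution comparison could be structured as a success-rate grid over checkpoints and scene splits. The scene names, checkpoint names, and the `load_policy` / `run_episode` callables are placeholders, not part of the original experiment description; the stubs in `__main__` only make the skeleton runnable.

```python
"""Minimal sketch of the ID-vs-OOD scene evaluation grid (Experiment 1).

Assumptions (not from the original write-up): policy checkpoints and the
simulator are hidden behind two callables, load_policy(name) and
run_episode(policy, scene, prompt) -> bool. Both are placeholders here.
"""
from collections import defaultdict
from itertools import product

# Hypothetical split: scenes seen during benchmark curation vs. held-out scenes.
SCENE_SPLITS = {
    "in_distribution": ["kitchen_a", "kitchen_b"],
    "out_of_distribution": ["workshop_a", "garden_a"],
}
CHECKPOINTS = ["model_zero_shot", "model_fine_tuned"]
EPISODES_PER_CELL = 20


def evaluate(load_policy, run_episode, prompt="pick up the mug"):
    """Return success rates for every (checkpoint, scene split) cell."""
    rates = defaultdict(dict)
    for ckpt, (split, scenes) in product(CHECKPOINTS, SCENE_SPLITS.items()):
        policy = load_policy(ckpt)
        successes, trials = 0, 0
        for scene in scenes:
            for _ in range(EPISODES_PER_CELL):
                successes += bool(run_episode(policy, scene, prompt))
                trials += 1
        rates[ckpt][split] = successes / trials
    return dict(rates)


if __name__ == "__main__":
    # Stub policy/simulator so the skeleton runs end to end.
    import random

    random.seed(0)
    demo = evaluate(
        load_policy=lambda name: name,
        run_episode=lambda policy, scene, prompt: random.random() < 0.5,
    )
    for ckpt, cells in demo.items():
        print(ckpt, cells)
```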
Experiment 2
Latent Representation Analysis in Hierarchical VLAs
First concrete result: on a clean 6-layout, 3-object pick suite, linear probes on frozen System 2 → System 1 boundary latents decode both the target object and the target position under leave-one-layout-out evaluation. Next steps are prompt-control and clutter/OOD follow-ups; a probe sketch follows below.
First result
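A minimal sketch of the leave-one-layout-out linear probe, assuming the boundary latents have already been dumped to arrays. The array names (`Z`, `object_id`, `position_id`, `layout_id`) and the synthetic data in `__main__` are illustrative only; the real latents come from the frozen System 2 → System 1 boundary.

```python
"""Sketch of the leave-one-layout-out linear probe (Experiment 2)."""
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score


def probe_accuracy(Z, labels, layout_id):
    """Mean accuracy of a linear probe under leave-one-layout-out CV."""
    clf = LogisticRegression(max_iter=2000)
    scores = cross_val_score(clf, Z, labels, groups=layout_id,
                             cv=LeaveOneGroupOut())
    return scores.mean()


if __name__ == "__main__":
    # Synthetic stand-in for frozen System 2 -> System 1 boundary latents.
    rng = np.random.default_rng(0)
    n, d = 360, 64
    Z = rng.normal(size=(n, d))
    object_id = rng.integers(0, 3, size=n)     # 3 objects in the pick suite
    position_id = rng.integers(0, 6, size=n)   # target positions
    layout_id = rng.integers(0, 6, size=n)     # 6 layouts, each held out in turn
    print("object probe:", probe_accuracy(Z, object_id, layout_id))
    print("position probe:", probe_accuracy(Z, position_id, layout_id))
```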
Experiment 3
Adversarial Visual Patches on Generalist Robot Policies
Can a fixed RGB patch placed in the policy’s input image override its language conditioning? We optimise patch pixels by action-sequence distillation against a frozen GR00T N1.6 and test cross-prompt redirection in simulation. First result: 5/10 cross-prompt bottle pickups when the prompt names the orange. A distillation sketch follows below.
First result — 50%
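The sketch below illustrates the action-sequence distillation loop: the patch is optimised so that the frozen policy, conditioned on the "orange" prompt and the patched image, reproduces the action chunk the same frozen policy emits for the "bottle" prompt on the clean image. `ToyPolicy`, the patch size and placement, the prompt embeddings, and the MSE loss are all stand-in assumptions; the real experiment runs against a frozen GR00T N1.6 checkpoint.

```python
"""Sketch of patch optimisation by action-sequence distillation (Experiment 3)."""
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPolicy(nn.Module):
    """Stand-in for the frozen policy: (image, prompt embedding) -> action chunk."""
    def __init__(self, act_dim=7, horizon=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 8, 5, stride=4), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8 + 16, act_dim * horizon)
        self.act_dim, self.horizon = act_dim, horizon

    def forward(self, img, prompt_emb):
        z = torch.cat([self.enc(img), prompt_emb], dim=-1)
        return self.head(z).view(-1, self.horizon, self.act_dim)


def paste_patch(img, patch, y=10, x=10):
    """Differentiably overlay the clamped patch onto the image at (y, x)."""
    ph, pw = patch.shape[-2:]
    H, W = img.shape[-2:]
    pad = (x, W - x - pw, y, H - y - ph)
    canvas = F.pad(patch.clamp(0, 1), pad)
    mask = F.pad(torch.ones_like(patch), pad)
    return img * (1 - mask) + canvas


def optimise_patch(policy, images, orange_emb, bottle_emb, steps=200, lr=0.05):
    """Make actions under the 'orange' prompt match frozen 'bottle' actions."""
    patch = torch.rand(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            target = policy(images, bottle_emb)          # distillation target
        pred = policy(paste_patch(images, patch), orange_emb)
        loss = F.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach(), loss.item()


if __name__ == "__main__":
    torch.manual_seed(0)
    policy = ToyPolicy().eval()
    for p in policy.parameters():
        p.requires_grad_(False)                          # frozen policy
    imgs = torch.rand(4, 3, 224, 224)
    patch, final_loss = optimise_patch(policy, imgs,
                                       orange_emb=torch.randn(4, 16),
                                       bottle_emb=torch.randn(4, 16))
    print("final distillation loss:", final_loss)
```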
Experiment 4
DAgger Steering at the Representation Boundary
Can a fixed latent steering vector trained by DAgger redirect a frozen GR00T policy across prompts? The current vector was trained only on mirrored left/right scenes; it shows strong qualitative seen-scene steering but no simulator success yet on a validated up/down holdout axis. A steering sketch follows below.
Intermediate result
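The sketch below shows the DAgger-style loop for fitting a single steering vector at the System 2 → System 1 boundary: roll out the steered frozen policy, query an expert on the visited states, aggregate the corrections, and refit the vector on the aggregate. The toy linear `system2`/`system1` maps, the placeholder expert, and the least-squares refit are illustrative assumptions; the real experiment steers a frozen GR00T checkpoint in simulation.

```python
"""Sketch of DAgger-trained latent steering at the boundary (Experiment 4)."""
import numpy as np

rng = np.random.default_rng(0)
D_LAT, D_ACT = 32, 7

# Toy frozen policy pieces (random linear maps stand in for the real networks).
W2 = rng.normal(size=(D_LAT, D_LAT))           # system2: observation -> boundary latent
W1 = rng.normal(size=(D_ACT, D_LAT)) / D_LAT   # system1: boundary latent -> action

system2 = lambda obs: obs @ W2.T
system1 = lambda lat: lat @ W1.T
expert = lambda obs: np.tanh(obs[:, :D_ACT])   # placeholder corrective expert


def rollout(steer, n_steps=50):
    """Roll out the steered policy; return the observations it visits."""
    obs = rng.normal(size=(n_steps, D_LAT))    # stand-in for simulator states
    _ = system1(system2(obs) + steer)          # actions would drive the sim here
    return obs


def dagger_train_steering(iterations=5):
    steer = np.zeros(D_LAT)
    data_obs, data_act = [], []
    for _ in range(iterations):
        obs = rollout(steer)                    # 1. run the current steered policy
        data_obs.append(obs)                    # 2. aggregate visited states
        data_act.append(expert(obs))            #    with expert corrections
        X = np.concatenate(data_obs)
        Y = np.concatenate(data_act)
        resid = Y - system1(system2(X))         # 3. refit the single vector so that
        steer, *_ = np.linalg.lstsq(            #    system1(latent + steer) ~ expert,
            W1, resid.mean(axis=0), rcond=None  #    i.e. W1 @ steer ~ mean residual
        )
    return steer


if __name__ == "__main__":
    v = dagger_train_steering()
    print("steering vector norm:", np.linalg.norm(v))
```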