
Sim2real Image Translation Enables Viewpoint-Robust Policies from Fixed-Camera Datasets

Jeremiah Coholich, Justin Wit, Robert Azarcon, Zsolt Kira

📌 International Conference on Robotics & Automation (ICRA) 2026


Video

Abstract

Vision-based policies for robot manipulation have achieved significant recent success, but remain brittle to distribution shifts such as camera viewpoint variations. Robot demonstration data is scarce and often lacks sufficient variation in camera viewpoints. Simulation offers a way to collect robot demonstrations at scale with comprehensive coverage of different viewpoints, but presents a visual sim2real challenge. To bridge this gap, we propose MANGO, an unpaired image translation method with a novel segmentation-conditioned InfoNCE loss, a highly regularized discriminator design, and a modified PatchNCE loss. We find that these elements are crucial for maintaining viewpoint consistency during sim2real translation. Training MANGO requires only a small amount of fixed-camera data from the real world, yet the method can generate diverse unseen viewpoints by translating simulated observations. In this domain, MANGO outperforms all other image translation methods we tested. Imitation-learning policies trained on data augmented by MANGO achieve success rates as high as 60% on views where the non-augmented policy fails completely.
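For readers curious how a segmentation-conditioned InfoNCE loss might look in code, below is a minimal PyTorch sketch. It is illustrative only: the function name, the temperature value, and the choice of restricting negatives to patches sharing the same segmentation label are our assumptions, not necessarily MANGO's exact formulation.

# Illustrative sketch of a segmentation-conditioned InfoNCE loss (assumptions,
# not MANGO's exact formulation). Positives are spatially corresponding patches
# in the simulated image and its translation; negatives are restricted to
# patches that share the same segmentation label.
import torch
import torch.nn.functional as F

def seg_conditioned_infonce(feat_src, feat_tgt, seg_labels, temperature=0.07):
    # feat_src, feat_tgt: (N, D) per-patch features from the simulated image and
    # its sim2real translation; seg_labels: (N,) integer class label per patch.
    feat_src = F.normalize(feat_src, dim=1)
    feat_tgt = F.normalize(feat_tgt, dim=1)

    # Cosine-similarity logits between every translated patch and every source patch.
    logits = feat_tgt @ feat_src.t() / temperature  # (N, N)

    # Keep only comparisons between patches of the same segmentation class
    # (the diagonal positives always survive this mask).
    same_class = seg_labels.unsqueeze(0) == seg_labels.unsqueeze(1)
    logits = logits.masked_fill(~same_class, float("-inf"))

    # Standard InfoNCE: the matching patch index is the target "class".
    targets = torch.arange(feat_src.size(0), device=feat_src.device)
    return F.cross_entropy(logits, targets)

# Example usage with random features and labels:
feats_sim = torch.randn(256, 128)
feats_real = torch.randn(256, 128)
labels = torch.randint(0, 5, (256,))
loss = seg_conditioned_infonce(feats_sim, feats_real, labels)

In a PatchNCE-style setup, feat_src and feat_tgt would come from a shared patch encoder applied to the simulated observation and its translated counterpart.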


Image Translation Architecture

Translation Examples



Robot Experiments

Successes

Failures

BibTeX

        
        
@article{coholich2026sim2real,
  title={Sim2real Image Translation Enables Viewpoint-Robust Policies from Fixed-Camera Datasets},
  author={Coholich, Jeremiah and Wit, Justin and Azarcon, Robert and Kira, Zsolt},
  journal={arXiv preprint arXiv:2601.09605},
  year={2026}
}