RTGaze: Real-Time 3D-Aware Gaze Redirection from a Single Image
Abstract
Gaze redirection methods aim to generate realistic human face images with controllable eye movement. However, recent methods often struggle with 3D consistency, efficiency, or quality, limiting their practical applications. In this work, we propose RTGaze, a real-time and high-quality gaze redirection method. Our approach learns a gaze-controllable facial representation from face images and gaze prompts, then decodes this representation via neural rendering for gaze redirection. Additionally, we distill face geometric priors from a pretrained 3D portrait generator to enhance generation quality.
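The pipeline described above (encode a face into a gaze-controllable representation, inject the gaze prompt, then decode the conditioned features back into an image) can be sketched as a single feedforward pass. Everything below is a toy stand-in under assumed shapes and linear layers, not the authors' implementation: the 64×64 image size, 256-d representation, and a (pitch, yaw) gaze prompt are all illustrative choices.

```python
# Toy sketch of an RTGaze-style feedforward pass; all names, shapes, and
# layers are assumptions for illustration, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((64 * 64 * 3, 256)) * 0.01   # stand-in image encoder
W_gaze = rng.standard_normal((2, 256)) * 0.1             # gaze-prompt embedding (pitch, yaw)
W_dec = rng.standard_normal((256, 64 * 64 * 3)) * 0.01   # stand-in neural-rendering decoder

def redirect_gaze(image: np.ndarray, gaze: np.ndarray) -> np.ndarray:
    """One feedforward pass: encode face, condition on the gaze prompt, decode."""
    feat = np.tanh(image.reshape(-1) @ W_enc)        # gaze-controllable representation
    feat = feat + gaze @ W_gaze                      # inject the target gaze direction
    return np.tanh(feat @ W_dec).reshape(64, 64, 3)  # decode back to an image

img = rng.random((64, 64, 3))
out = redirect_gaze(img, np.array([0.1, -0.2]))  # target pitch/yaw in radians
print(out.shape)  # (64, 64, 3)
```

Because the whole pass is a single feedforward evaluation with no per-image optimization, inference cost is one network forward, which is what makes the real-time (~0.06 s/image) figure plausible relative to optimization-based 3D-aware methods.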
We evaluate RTGaze both qualitatively and quantitatively, demonstrating state-of-the-art performance in efficiency, redirection accuracy, and image quality across multiple datasets. Our system achieves real-time, 3D-aware gaze redirection with a single feedforward network pass (~0.06 sec/image), making it roughly 800× faster than prior state-of-the-art 3D-aware methods.
Method Overview
Quantitative and Qualitative Results
Paper
BibTeX
@inproceedings{wang2026rtgaze,
title={RTGaze: Real-Time 3D-Aware Gaze Redirection from a Single Image},
author={Hengfei Wang and Zhongqun Zhang and Yihua Cheng and Hyung Jin Chang},
booktitle={Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)},
year={2026},
}