- Dec 2025: DeepForcing and MV-TAP, our papers on autoregressive video world models and point tracking, have been released on arXiv.
- Mar 2025: Chrono, our paper on Point Tracking, has been accepted to CVPR 2025.
I am an MS student at KAIST CVLAB, advised by Seungryong Kim. I am interested in 4D Computer Vision (Autoregressive Video World Models, Feed-Forward 4D Reconstruction, and Robot Perception through Point Tracking). I received my BS from Yonsei University in 2024.
Long-video generation by combining Deep Sink and Participative Compression.
Multi-view point tracker that uses camera information and cross-view attention.
Feature backbone specifically designed for point tracking with built-in temporal awareness.
Outdoor scene relighting for Neural Radiance Fields (NeRF).