TY - JOUR
T1 - Controllable Free Viewpoint Video Reconstruction Based on Neural Radiance Fields and Motion Graphs
AU - Zhang, He
AU - Li, Fan
AU - Zhao, Jianhui
AU - Tan, Chao
AU - Shen, Dongming
AU - Liu, Yebin
AU - Yu, Tao
N1 - Publisher Copyright:
© 1995-2012 IEEE.
PY - 2023/12/1
Y1 - 2023/12/1
AB - In this paper, we propose a controllable high-quality free viewpoint video generation method based on motion graphs and neural radiance fields (NeRF). Different from existing pose-driven or time/structure-conditioned NeRF works, we propose to first construct a directed motion graph of the captured sequence. This sequence-motion-parameterization strategy not only enables flexible pose control for free viewpoint video rendering but also avoids redundant computation for similar poses, thus improving overall reconstruction efficiency. Moreover, to support body shape control without sacrificing realistic free viewpoint rendering performance, we improve the vanilla NeRF by combining explicit surface deformation with implicit neural scene representations. Specifically, we train a local surface-guided NeRF for each valid frame on the motion graph, and volumetric rendering is performed only in the local space around the real surface, enabling plausible shape control. To the best of our knowledge, ours is the first method that supports both realistic free viewpoint video reconstruction and motion graph-based, user-guided motion traversal. Results and comparisons further demonstrate the effectiveness of the proposed method.
KW - Controllable free viewpoint video
KW - motion graph
KW - NeRF
KW - surface-guided volumetric rendering
UR - https://www.scopus.com/pages/publications/85135745434
U2 - 10.1109/TVCG.2022.3192713
DO - 10.1109/TVCG.2022.3192713
M3 - Article
C2 - 35914057
AN - SCOPUS:85135745434
SN - 1077-2626
VL - 29
SP - 4891
EP - 4905
JO - IEEE Transactions on Visualization and Computer Graphics
JF - IEEE Transactions on Visualization and Computer Graphics
IS - 12
ER -