While the ready availability of 3D scan data has influenced research throughout computer vision, less attention has focused on 4D data: 3D scans of moving non-rigid objects captured over time. To be useful for vision research, such 4D scans must be registered, or aligned, to a common topology. Consequently, extending mesh registration methods to 4D is important. Unfortunately, no ground-truth datasets are available for quantitative evaluation and comparison of 4D registration methods. To address this, we create a novel dataset of high-resolution 4D scans of human subjects in motion, captured at 60 fps. We propose a new mesh registration method that uses both 3D geometry and texture information to register all scans in a sequence to a common reference topology. The approach exploits consistency in texture over both short and long time intervals and deals with temporal offsets between shape and texture capture. We show how using geometry alone results in significant errors in alignment when the motions are fast and non-rigid. We evaluate the accuracy of our registration and provide a dataset of 40,000 raw and aligned meshes. Dynamic FAUST extends the popular FAUST dataset to dynamic 4D data and is available for research purposes.
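Since every registered frame in a sequence shares the same reference topology, a 4D sequence can be handled as a fixed face list plus one vertex array per frame. As a minimal sketch (the file naming and the tiny triangle data below are illustrative, not part of the released dataset), one frame of aligned vertices can be dumped to a Wavefront OBJ like this:

```python
def frame_to_obj(vertices, faces, path):
    """Write one mesh frame as a Wavefront OBJ.

    vertices: list of (x, y, z) floats for this frame.
    faces: list of (a, b, c) zero-based vertex indices, shared across all frames.
    """
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x:.6f} {y:.6f} {z:.6f}\n")
        for a, b, c in faces:
            # OBJ face indices are 1-based
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# Toy stand-in data: a "sequence" of two frames over one shared triangle.
faces = [(0, 1, 2)]
sequence = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],  # frame 0
    [(0.0, 0.1, 0.0), (1.0, 0.1, 0.0), (0.0, 1.1, 0.0)],  # frame 1
]
for t, verts in enumerate(sequence):
    frame_to_obj(verts, faces, f"frame_{t:04d}.obj")
```

Because registration puts all frames in vertex-wise correspondence, per-vertex quantities (e.g. displacements between frames) can be computed by simple elementwise subtraction across the sequence.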


News & Updates

Friday, Jul 21st 2017

You can now get the code/data for DFAUST. Check our Downloads page for more info.

Monday, May 22nd 2017

Welcome to the Dynamic FAUST website. We will make downloads available very soon.

More Information

Referencing the Dataset

Here is the BibTeX snippet for citing MPI Dynamic FAUST in your work.

@inproceedings{dfaust:CVPR:2017,
        title = {Dynamic {FAUST}: {R}egistering Human Bodies in Motion},
        author = {Bogo, Federica and Romero, Javier and Pons-Moll, Gerard and Black, Michael J.},
        booktitle = {IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
        month = jul,
        year = {2017},
        month_numeric = {7}
}