UMONS-TAICHI is a large 3D motion capture dataset of Taijiquan martial art gestures (n = 2200 samples) covering 13 classes (corresponding to Taijiquan techniques) performed by 12 participants of various skill levels. Participants' skill levels were rated by three experts on a scale of 0 to 10. The dataset was captured with two motion capture systems simultaneously: 1) a Qualisys system, a sophisticated motion capture system with 11 Oqus cameras that tracked 68 retroreflective markers at 179 Hz, and 2) a Microsoft Kinect V2, a low-cost markerless sensor that tracked 25 locations of a person's skeleton at 30 Hz. Data from both systems were synchronized manually. Qualisys data were manually corrected and then processed to recover any missing data. Data were also manually annotated for segmentation. Both segmented and unsegmented data are provided in this database. This article details the recording protocol as well as the processing and annotation procedures. The data were initially recorded for gesture recognition and skill evaluation, but they are also suited to research on synthesis, segmentation, multi-sensor data comparison and fusion, sports science, or more general research on human science or motion capture. A preliminary analysis was conducted by Tits et al. (2017) [1] on part of the dataset to extract morphology-independent motion features for gesture skill evaluation. The results of this analysis are presented in their communication "Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation" (https://doi.org/10.1145/3077981.3078037) [1].
Qualisys data were processed manually with Qualisys Track Manager.
Missing data (occluded markers) were then recovered with an automatic recovery method: MocapRecovery.
Data were annotated for gesture segmentation using the MotionMachine framework (a C++ openFrameworks addon). The annotation code is provided in this repository. Annotations were saved as ".lab" files (see the Download section).
Data were then segmented using the Matlab code provided in this repository (see folder "Segmentation"). This code requires the MoCap Toolbox for Matlab, and optionally the MoCap Toolbox Extension for data visualization.
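As a point of reference only, here is a minimal Matlab sketch (not the repository's code) of how one segment could be cut out of an unsegmented take with the MoCap Toolbox; the file name and segment boundaries below are placeholders.

    % Minimal sketch, assuming the MoCap Toolbox is on the Matlab path.
    % File name and segment boundaries are placeholders; the dataset's own
    % segmentation code is in the "Segmentation" folder of the repository.
    d = mcread('taichi_take.tsv');   % load one unsegmented Qualisys TSV export
    tStart = 12.0;                   % hypothetical gesture start (s), from a .lab annotation
    tEnd   = 18.5;                   % hypothetical gesture end (s)
    seg = mctrim(d, tStart, tEnd);   % keep only the annotated gesture
    mcplotframe(seg, 1);             % optional: quick visual check of the first frame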
The Kinect data were recorded with Kinect Studio. Skeleton data were then extracted with the Kinect SDK and saved into ".txt" files containing one line per captured frame. Each line contains one integer timestamp (in milliseconds) indicating when the frame was captured, followed by 3 x 25 float numbers corresponding to the 3-dimensional locations of the 25 body joints.
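For orientation, the sketch below loads one such file in Matlab; the file name is a placeholder, and whitespace-delimited values with per-joint (x, y, z) ordering are assumptions that should be checked against the parser provided in the repository.

    % Minimal sketch for loading one Kinect ".txt" file (assumptions noted above).
    raw = readmatrix('kinect_sample.txt');        % hypothetical file name
    timestampsMs = raw(:, 1);                     % capture time of each frame (ms)
    joints = reshape(raw(:, 2:76)', 3, 25, []);   % 3 coords x 25 joints x nFrames
    joints = permute(joints, [3, 2, 1]);          % nFrames x 25 joints x 3 coords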
Kinect data were then manually synchronized with the Qualisys data. Synchronized data, as well as labels, can be visualized using the code provided in this repository (see folder "Visualization"). This code requires the MotionMachine framework.
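Purely as an illustration (the dataset itself was synchronized manually, and the provided visualization code should be preferred), the sketch below maps Kinect timestamps to the nearest Qualisys frame indices, given a manually determined offset and the 179 Hz Qualisys frame rate; the offset and timestamps are hypothetical.

    % Illustrative alignment sketch, not the dataset's own code.
    fsQualisys   = 179;                 % Qualisys frame rate (Hz)
    offsetS      = 0.42;                % hypothetical manually determined offset (s)
    kinectTimeMs = [0; 33; 67; 100];    % example Kinect frame timestamps (ms)
    qualisysIdx  = round((kinectTimeMs / 1000 + offsetS) * fsQualisys) + 1;  % 1-based frame indices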
C3D files (Qualisys):
Unsegmented: https://zenodo.org/record/2784581/files/C3D.zip?download=1
Segmented: https://zenodo.org/record/2784581/files/Segmented_C3D.zip?download=1 (note: segmented C3D files were converted from TSV files using Visual3D)
TSV files (Qualisys):
Unsegmented: https://zenodo.org/record/2784581/files/TSV.zip?download=1
Segmented: https://zenodo.org/record/2784581/files/Segmented_TSV.zip?download=1
Kinect files (.txt):
Unsegmented: https://zenodo.org/record/2784581/files/Kinect.zip?download=1
Segmented: https://zenodo.org/record/2784581/files/Segmented_Kinect.zip?download=1
Labels (.lab):
https://zenodo.org/record/2784581/files/Labels.zip?download=1
All files can be used with the MotionMachine framework. Please use the parser provided in this repository for the Kinect (.txt) data.
A video is provided as supplementary material at the following link: https://youtu.be/OVJ4PWYJxnY
To cite this dataset:
Tits, M., Laraba, S., Caulier, E., Tilmanne, J., & Dutoit, T. (2018). UMONS-TAICHI: A Multimodal Motion Capture Dataset of Expertise in Taijiquan Gestures. Data in Brief. https://doi.org/10.1016/j.dib.2018.05.088
[1] M. Tits, J. Tilmanne, T. Dutoit, Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation, Proc. 4th Int. Conf. Mov. Comput. - MOCO ’17. (2017) 1–8. https://doi.org/10.1145/3077981.3078037